estimateNetworkMetrics
Syntax

dataTable = estimateNetworkMetrics(net)
[dataTable1,dataTable2,…,dataTableN] = estimateNetworkMetrics(net1,net2,…,netN)

Description

dataTable = estimateNetworkMetrics(net) returns a table containing these estimated layer-wise metrics for a deep neural network:
- LayerName — Name of layer
- LayerType — Type of layer
- NumberOfLearnables — Number of non-zero learnable parameters (weights and biases) in the network
- NumberOfOperations — Total number of multiplications and additions
- ParameterMemory (MB) — Memory required to store all of the learnable parameters
- NumberOfMACs — Number of multiply-accumulate operations
- ArithmeticIntensity (FLOP/B) — Amount of reuse of data fetched from memory, measured as the number of floating-point operations performed per byte of memory access required to support those operations. For example, convolutional layers reuse the same weight data across computations for multiple input features, resulting in a relatively high arithmetic intensity. (A generic sketch of this ratio follows the list.)
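The following is a minimal sketch of this ratio for a single, hypothetical convolutional layer, using generic roofline-style accounting. The layer sizes and the assumption of 4-byte (single-precision) storage are illustrative only; the exact byte accounting that estimateNetworkMetrics uses internally may differ.

```matlab
% Illustrative roofline-style arithmetic intensity for one hypothetical
% convolutional layer (assumed sizes, single-precision storage). This is a
% sketch of the general FLOP/B idea, not the formula estimateNetworkMetrics
% uses internally.
inputSize  = [56 56 64];                 % assumed input activation size
outputSize = [56 56 16];                 % assumed output activation size
numWeights = 3*3*64*16 + 16;             % 3-by-3 filters, 16 filters, plus biases
numMACs    = prod(outputSize)*3*3*64;    % one MAC per filter tap per output element
numFLOPs   = 2*numMACs;                  % a multiply and an add per MAC
bytesMoved = 4*(prod(inputSize) + prod(outputSize) + numWeights);   % 4 bytes per element
arithmeticIntensity = numFLOPs/bytesMoved   % FLOP/B
```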
This function estimates metrics for learnable layers, which have weights and biases, in the network. Estimated metrics are provided only for supported layer types.
[dataTable1,dataTable2,…,dataTableN] = estimateNetworkMetrics(net1,net2,…,netN) returns metrics for multiple networks.
This function requires the Deep Learning Toolbox Model Quantization Library. To learn about the products required to quantize a deep neural network, see Quantization Workflow Prerequisites.
Examples
Estimate Metrics for Neural Network Layers
This example shows how to estimate layer-wise metrics for a neural network.
Load the pretrained network. net is a SqueezeNet convolutional neural network that has been retrained using transfer learning to classify images in the MerchData data set.
load squeezedlnetmerch
net
net = 
  dlnetwork with properties:

         Layers: [67×1 nnet.cnn.layer.Layer]
    Connections: [74×2 table]
     Learnables: [52×3 table]
          State: [0×3 table]
     InputNames: {'data'}
    OutputNames: {'prob'}
    Initialized: 1

  View summary with summary.
Use the estimateNetworkMetrics function to estimate metrics for the supported layers in your network.
estNet = estimateNetworkMetrics(net)
estNet=26×7 table
LayerName LayerType NumberOfLearnables NumberOfOperations ParameterMemory (MB) NumberOfMACs ArithmeticIntensity
__________________ _________________ __________________ __________________ ____________________ ____________ ___________________
"conv1" "2-D Convolution" 1792 4.413e+07 0.0068359 2.2065e+07 25.739
"fire2-squeeze1x1" "2-D Convolution" 1040 6.4225e+06 0.0039673 3.2113e+06 12.748
"fire2-expand1x1" "2-D Convolution" 1088 6.4225e+06 0.0041504 3.2113e+06 12.748
"fire2-expand3x3" "2-D Convolution" 9280 5.7803e+07 0.0354 2.8901e+07 111.12
"fire3-squeeze1x1" "2-D Convolution" 2064 1.2845e+07 0.0078735 6.4225e+06 14.158
"fire3-expand1x1" "2-D Convolution" 1088 6.4225e+06 0.0041504 3.2113e+06 12.748
"fire3-expand3x3" "2-D Convolution" 9280 5.7803e+07 0.0354 2.8901e+07 111.12
"fire4-squeeze1x1" "2-D Convolution" 4128 6.4225e+06 0.015747 3.2113e+06 24.791
"fire4-expand1x1" "2-D Convolution" 4224 6.4225e+06 0.016113 3.2113e+06 24.791
"fire4-expand3x3" "2-D Convolution" 36992 5.7803e+07 0.14111 2.8901e+07 178.07
"fire5-squeeze1x1" "2-D Convolution" 8224 1.2845e+07 0.031372 6.4225e+06 27.449
"fire5-expand1x1" "2-D Convolution" 4224 6.4225e+06 0.016113 3.2113e+06 24.791
"fire5-expand3x3" "2-D Convolution" 36992 5.7803e+07 0.14111 2.8901e+07 178.07
"fire6-squeeze1x1" "2-D Convolution" 12336 4.8169e+06 0.047058 2.4084e+06 33.51
"fire6-expand1x1" "2-D Convolution" 9408 3.6127e+06 0.035889 1.8063e+06 32.109
"fire6-expand3x3" "2-D Convolution" 83136 3.2514e+07 0.31714 1.6257e+07 125.07
⋮
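Because the output is an ordinary MATLAB table, you can analyze it with standard table functions. The sketch below (assuming the estNet table from above) sorts the supported layers by estimated multiply-accumulate count and also shows that only the supported, learnable layers receive estimates, which is why the table has 26 rows even though the network has 67 layers.

```matlab
% The result is an ordinary MATLAB table, so standard table functions apply.
% Sort the supported layers by estimated multiply-accumulate count.
sortedByMACs = sortrows(estNet,"NumberOfMACs","descend");
head(sortedByMACs,5)

% Only supported, learnable layers appear in the estimate.
fprintf("Layers in network: %d, layers with estimates: %d\n", ...
    numel(net.Layers), height(estNet));
```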
Compare Metrics for Floating-Point and Quantized Neural Network
This example shows how to estimate the metrics for a floating-point and quantized neural network.
Load the pretrained network. net is a SqueezeNet convolutional neural network that has been retrained using transfer learning to classify images in the MerchData data set.
load squeezedlnetmerch
net
net = 
  dlnetwork with properties:

         Layers: [67×1 nnet.cnn.layer.Layer]
    Connections: [74×2 table]
     Learnables: [52×3 table]
          State: [0×3 table]
     InputNames: {'data'}
    OutputNames: {'prob'}
    Initialized: 1

  View summary with summary.
Unzip and load the MerchData images as an image datastore. Define an augmentedImageDatastore object to resize the data for the network, and split the data into calibration and validation data sets to use for quantization.
unzip('MerchData.zip');
imds = imageDatastore('MerchData', ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
[calData, valData] = splitEachLabel(imds, 0.7,'randomized');
aug_calData = augmentedImageDatastore([227 227],calData);
aug_valData = augmentedImageDatastore([227 227],valData);
Create a dlquantizer object and specify the network to quantize. Set the execution environment to MATLAB. When you use the MATLAB execution environment, quantization is performed using the fi fixed-point data type, which requires a Fixed-Point Designer™ license.
quantObj = dlquantizer(net,'ExecutionEnvironment','MATLAB');
Use the calibrate function to exercise the network with sample inputs and collect range information.
calResults = calibrate(quantObj,aug_calData);
Use the quantize method to quantize the network object and return a simulatable quantized network.
qNet = quantize(quantObj)
qNet = 
  Quantized dlnetwork with properties:

         Layers: [67×1 nnet.cnn.layer.Layer]
    Connections: [74×2 table]
     Learnables: [52×3 table]
          State: [0×3 table]
     InputNames: {'data'}
    OutputNames: {'prob'}
    Initialized: 1

  View summary with summary.

You can use the quantizationDetails function to extract quantization details.
Use the estimateNetworkMetrics function to estimate metrics for the floating-point and quantized networks.
[dataTableFloat,dataTableQuantized] = estimateNetworkMetrics(net,qNet)
dataTableFloat=26×7 table
LayerName LayerType NumberOfLearnables NumberOfOperations ParameterMemory (MB) NumberOfMACs ArithmeticIntensity
__________________ _________________ __________________ __________________ ____________________ ____________ ___________________
"conv1" "2-D Convolution" 1792 4.413e+07 0.0068359 2.2065e+07 25.739
"fire2-squeeze1x1" "2-D Convolution" 1040 6.4225e+06 0.0039673 3.2113e+06 12.748
"fire2-expand1x1" "2-D Convolution" 1088 6.4225e+06 0.0041504 3.2113e+06 12.748
"fire2-expand3x3" "2-D Convolution" 9280 5.7803e+07 0.0354 2.8901e+07 111.12
"fire3-squeeze1x1" "2-D Convolution" 2064 1.2845e+07 0.0078735 6.4225e+06 14.158
"fire3-expand1x1" "2-D Convolution" 1088 6.4225e+06 0.0041504 3.2113e+06 12.748
"fire3-expand3x3" "2-D Convolution" 9280 5.7803e+07 0.0354 2.8901e+07 111.12
"fire4-squeeze1x1" "2-D Convolution" 4128 6.4225e+06 0.015747 3.2113e+06 24.791
"fire4-expand1x1" "2-D Convolution" 4224 6.4225e+06 0.016113 3.2113e+06 24.791
"fire4-expand3x3" "2-D Convolution" 36992 5.7803e+07 0.14111 2.8901e+07 178.07
"fire5-squeeze1x1" "2-D Convolution" 8224 1.2845e+07 0.031372 6.4225e+06 27.449
"fire5-expand1x1" "2-D Convolution" 4224 6.4225e+06 0.016113 3.2113e+06 24.791
"fire5-expand3x3" "2-D Convolution" 36992 5.7803e+07 0.14111 2.8901e+07 178.07
"fire6-squeeze1x1" "2-D Convolution" 12336 4.8169e+06 0.047058 2.4084e+06 33.51
"fire6-expand1x1" "2-D Convolution" 9408 3.6127e+06 0.035889 1.8063e+06 32.109
"fire6-expand3x3" "2-D Convolution" 83136 3.2514e+07 0.31714 1.6257e+07 125.07
⋮
dataTableQuantized=26×7 table
LayerName LayerType NumberOfLearnables NumberOfOperations ParameterMemory (MB) NumberOfMACs ArithmeticIntensity
__________________ _________________ __________________ __________________ ____________________ ____________ ___________________
"conv1" "2-D Convolution" 1792 4.413e+07 0.001709 2.2065e+07 25.739
"fire2-squeeze1x1" "2-D Convolution" 1040 6.4225e+06 0.00099182 3.2113e+06 12.748
"fire2-expand1x1" "2-D Convolution" 1088 6.4225e+06 0.0010376 3.2113e+06 12.748
"fire2-expand3x3" "2-D Convolution" 9280 5.7803e+07 0.0088501 2.8901e+07 111.12
"fire3-squeeze1x1" "2-D Convolution" 2064 1.2845e+07 0.0019684 6.4225e+06 14.158
"fire3-expand1x1" "2-D Convolution" 1088 6.4225e+06 0.0010376 3.2113e+06 12.748
"fire3-expand3x3" "2-D Convolution" 9280 5.7803e+07 0.0088501 2.8901e+07 111.12
"fire4-squeeze1x1" "2-D Convolution" 4128 6.4225e+06 0.0039368 3.2113e+06 24.791
"fire4-expand1x1" "2-D Convolution" 4224 6.4225e+06 0.0040283 3.2113e+06 24.791
"fire4-expand3x3" "2-D Convolution" 36992 5.7803e+07 0.035278 2.8901e+07 178.07
"fire5-squeeze1x1" "2-D Convolution" 8224 1.2845e+07 0.007843 6.4225e+06 27.449
"fire5-expand1x1" "2-D Convolution" 4224 6.4225e+06 0.0040283 3.2113e+06 24.791
"fire5-expand3x3" "2-D Convolution" 36992 5.7803e+07 0.035278 2.8901e+07 178.07
"fire6-squeeze1x1" "2-D Convolution" 12336 4.8169e+06 0.011765 2.4084e+06 33.51
"fire6-expand1x1" "2-D Convolution" 9408 3.6127e+06 0.0089722 1.8063e+06 32.109
"fire6-expand3x3" "2-D Convolution" 83136 3.2514e+07 0.079285 1.6257e+07 125.07
⋮
Compare the parameter memory requirements of the layers supported by estimateNetworkMetrics for the floating-point and quantized networks.
totalMemoryFloat = sum(dataTableFloat.("ParameterMemory (MB)"));
totalMemoryQuantized = sum(dataTableQuantized.("ParameterMemory (MB)"));
percentReduction = (totalMemoryFloat - totalMemoryQuantized)*100/totalMemoryFloat
percentReduction = 75
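To see where the savings come from, you can also compare the two tables layer by layer. The sketch below uses standard table operations (not part of estimateNetworkMetrics) and assumes the rows of the two tables are in the same layer order, which holds here because both tables describe the same network.

```matlab
% Per-layer comparison of parameter memory. Rows are assumed to be in the
% same layer order because both tables describe the same network.
comparison = table(dataTableFloat.LayerName, ...
    dataTableFloat.("ParameterMemory (MB)"), ...
    dataTableQuantized.("ParameterMemory (MB)"), ...
    'VariableNames',{'LayerName','FloatMemoryMB','QuantizedMemoryMB'});
comparison.ReductionPercent = 100*(comparison.FloatMemoryMB - ...
    comparison.QuantizedMemoryMB)./comparison.FloatMemoryMB;
head(comparison,5)
```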
Input Arguments
net — Neural network
dlnetwork object | DAGNetwork object | SeriesNetwork object | taylorPrunableNetwork object | RegressionNeuralNetwork object | CompactRegressionNeuralNetwork object | ClassificationNeuralNetwork object | CompactClassificationNeuralNetwork object
Neural network, specified as one of these values:
- dlnetwork object
- DAGNetwork object
- SeriesNetwork object
- taylorPrunableNetwork object
- RegressionNeuralNetwork (Statistics and Machine Learning Toolbox) object
- CompactRegressionNeuralNetwork (Statistics and Machine Learning Toolbox) object
- ClassificationNeuralNetwork (Statistics and Machine Learning Toolbox) object
- CompactClassificationNeuralNetwork (Statistics and Machine Learning Toolbox) object

estimateNetworkMetrics supports both floating-point and quantized networks.
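As a minimal illustration, any initialized network object of a supported type can be passed directly to the function. The sketch below builds a small, hypothetical dlnetwork from a layer array (the layer sizes are assumptions chosen only for demonstration) and estimates its metrics.

```matlab
% Minimal sketch: estimate metrics for a small, hypothetical dlnetwork.
layers = [
    imageInputLayer([28 28 1],Normalization="none")
    convolution2dLayer(3,8,Padding="same")
    reluLayer
    fullyConnectedLayer(10)
    softmaxLayer];
smallNet = dlnetwork(layers);           % initialized dlnetwork object
smallMetrics = estimateNetworkMetrics(smallNet)
```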
net1,net2,…,netN — Neural networks
dlnetwork object | DAGNetwork object | SeriesNetwork object | taylorPrunableNetwork object | RegressionNeuralNetwork object | CompactRegressionNeuralNetwork object | ClassificationNeuralNetwork object | CompactClassificationNeuralNetwork object
Neural networks, specified as a comma-separated list of any of the following values:
- dlnetwork objects
- DAGNetwork objects
- SeriesNetwork objects
- taylorPrunableNetwork objects
- RegressionNeuralNetwork (Statistics and Machine Learning Toolbox) objects
- CompactRegressionNeuralNetwork (Statistics and Machine Learning Toolbox) objects
- ClassificationNeuralNetwork (Statistics and Machine Learning Toolbox) objects
- CompactClassificationNeuralNetwork (Statistics and Machine Learning Toolbox) objects

estimateNetworkMetrics supports both floating-point and quantized networks.
Version History
Introduced in R2022a