Quantization Workflow Prerequisites
This table lists the products required to quantize and deploy deep learning networks.
| Execution Environment | FPGA | GPU | CPU | MATLAB |
| --- | --- | --- | --- | --- |
| Development Host Requirements | Setup Toolkit Environment. The quantization workflow supports only a subset of compilers; see Supported and Compatible Compilers. | Setting Up the Prerequisite Products (GPU Coder). The quantization workflow supports only a subset of compilers; see Supported and Compatible Compilers. | Prerequisites for Deep Learning with MATLAB Coder (MATLAB Coder). The quantization workflow supports only a subset of compilers; see Supported and Compatible Compilers. Only Raspberry Pi™ with the ARM® v7 architecture is supported. The ARM Compute Library version 20.02.1 is required for quantized deep learning inference. | The quantization workflow supports only a subset of compilers; see Supported and Compatible Compilers. |
| Required Products | Deep Learning Toolbox | Deep Learning Toolbox | Deep Learning Toolbox | Deep Learning Toolbox |
| Required Support Packages | Deep Learning Toolbox Model Quantization Library | Deep Learning Toolbox Model Quantization Library | Deep Learning Toolbox Model Quantization Library | Deep Learning Toolbox Model Quantization Library |
| Required Add-Ons | MATLAB® Coder™ Interface for Deep Learning Libraries | GPU Coder Interface for Deep Learning Libraries | MATLAB Coder Interface for Deep Learning Libraries | MATLAB Coder Interface for Deep Learning Libraries |
| Supported Networks and Layers | Supported Networks, Layers, Boards, and Tools (Deep Learning HDL Toolbox) | Supported Networks, Layers, and Classes (GPU Coder) | Networks and Layers Supported for Code Generation (MATLAB Coder) | Networks and Layers Supported for Code Generation (MATLAB Coder). Note: When MATLAB is the execution environment, only the layers for the Intel MKL-DNN deep learning library are supported. |
| Deployment | Deep Learning HDL Toolbox | GPU Coder. For CUDA code generation, the software generates code for a convolutional deep neural network by quantizing the weights, biases, and activations of the convolution layers to 8-bit scaled integer data types. The quantization is performed by providing the calibration result file produced by the calibrate function (see the example sketches after this table). Code generation does not support quantized deep neural networks produced by the quantize function. | MATLAB Coder. For C/C++ code generation, the software generates code for a convolutional deep neural network by quantizing the weights, biases, and activations of the convolution layers to 8-bit scaled integer data types. The quantization is performed by providing the calibration result file produced by the calibrate function. Code generation does not support quantized deep neural networks produced by the quantize function. Note: Before validation, you must create a raspi object to connect to the Raspberry Pi board. | None |
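The calibration and validation steps are shared by all four execution environments. The following is a minimal sketch of that workflow, assuming a pretrained network `net` and image datastores `calDS` and `valDS` (hypothetical variable names) for calibration and validation data; the available options depend on your release and target.

```matlab
% Minimal sketch of calibration and validation with dlquantizer.
% net, calDS, and valDS are assumed to exist; adjust ExecutionEnvironment
% to 'FPGA', 'CPU', or 'MATLAB' as needed for your target.
quantObj = dlquantizer(net, 'ExecutionEnvironment', 'GPU');

% Exercise the network with calibration data to collect the dynamic ranges
% of the weights, biases, and activations.
calResults = calibrate(quantObj, calDS);

% Quantize and compare the quantized network against the original
% floating-point network on validation data.
valResults = validate(quantObj, valDS);

% Save the calibrated object; code generation uses this MAT-file as the
% calibration result file.
save('quantObj.mat', 'quantObj');
```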
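For the GPU execution environment, the Deployment row corresponds roughly to the following GPU Coder configuration. This is a sketch under stated assumptions: the calibration results were saved to quantObj.mat as shown above, predict_int8 is a hypothetical user-written entry-point function that loads the network with coder.loadDeepLearningNetwork, and the 224-by-224 input size is illustrative.

```matlab
% Minimal sketch of int8 CUDA code generation with GPU Coder.
% Assumes quantObj.mat from the calibration sketch and an entry-point
% function predict_int8.m (hypothetical name).
cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
cfg.DeepLearningConfig.DataType = 'int8';                  % 8-bit scaled integer inference
cfg.DeepLearningConfig.CalibrationResultFile = 'quantObj.mat';

% Generate a MEX function for a single 224-by-224 RGB image input.
codegen -config cfg predict_int8 -args {ones(224,224,3,'single')}
```

The C/C++ (CPU) path follows the same pattern with a MATLAB Coder configuration object and the ARM Compute Library as the target deep learning library.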