How to access the INT8 quantized weights in the deep learning quantizer?

2 views (last 30 days)
Yousef on 14 Oct 2025
Answered: Dor Rubin on 14 Oct 2025
I have quantized ResNet-18 using the deep learning quantizer toolbox. The idea is that I want to deploy it on an FPGA. The quantization process was successful and the model size was compressed to 10 MB. However, I want to see the quantized INT8 weights. How do I access them in the terminal? I can only see the floating-point values.
Below is my code and the output in the terminal:
Code:
% Save the network temporarily to calculate size
save('quantizedNet.mat', 'quantizedNet');
fileInfo = dir('quantizedNet.mat');
netSizeMB = fileInfo.bytes / (1024^2);
fprintf('Quantized Network Size: %.2f MB\n', netSizeMB);
%% Network architecture view
% analyzeNetwork(quantizedNet)
%% Quantizer details
% Inspect the quantized learnables (qDetails comes from quantizationDetails)
qDetails = quantizationDetails(quantizedNet);
Layers = qDetails.QuantizedLearnables
% Choose layer and parameter (first row of the table)
conv1weight = Layers.Value{1};
conv1weight
Output:

Answers (1)

Dor Rubin on 14 Oct 2025
Hi Yousef,
You can access the integer representation by using the storedInteger method on the fi value. For example:
storedInteger(conv1weight)
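If you want to list the underlying integers for every quantized learnable at once, a loop along these lines should work (this assumes each entry in the Value column is a fi object, as in your workflow; if a value comes back wrapped in a dlarray, call extractdata on it first):
% List the stored INT8 integers for each quantized learnable
learnables = qDetails.QuantizedLearnables;
for k = 1:height(learnables)
    fiValue = learnables.Value{k};           % assumed to be a fi object
    intValues = storedInteger(fiValue);      % raw stored integers (e.g. int8)
    fprintf('%s / %s -> stored as %s\n', ...
        string(learnables.Layer(k)), string(learnables.Parameter(k)), class(intValues));
end
Note that storedInteger returns only the raw integers; the floating-point numbers you currently see are those integers combined with the fi object's scaling, which is what double(fiValue) reports.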
Thanks,
Dor
