dlconv inference with int8

2 views (last 30 days)
David Eriksson on 5 Mar 2024
Answered: Avadhoot on 13 Mar 2024
Hi, is there a way to run inference (a forward pass) with dlconv using int8 activations and floating-point weights? Is it possible to build a CUDA model that I can run from MATLAB, perhaps as a MEX function? Best, David

Answers (1)

Avadhoot on 13 Mar 2024
Hi David,
From your question, I understand that you are trying to pass int8 activations to the "dlconv" function along with floating-point weights. This will not work, because "dlconv" supports only floating-point data types (single or double), so int8 inputs must be cast to a floating-point type before being passed to "dlconv".
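As a minimal sketch of that cast, the following assumes random example data and weight sizes; only the int8-to-single conversion step is the point here:

```matlab
% Hypothetical int8 activations, e.g. produced by a quantized pipeline
Xint8 = int8(randi([-128 127], 28, 28, 3, 1));

% dlconv requires floating point, so cast (and rescale if your
% quantization scheme has a scale factor) before wrapping in a dlarray
X = dlarray(single(Xint8), "SSCB");   % spatial, spatial, channel, batch

% Single-precision weights: filterSize x filterSize x channels x numFilters
W = rand(3, 3, 3, 8, "single");

% Scalar zero bias; the convolution itself runs entirely in single precision
Y = dlconv(X, W, 0, Padding="same");
```

If the int8 values encode quantized reals, remember to multiply by the quantization scale after casting, since `single(Xint8)` alone only reinterprets the integers as floats.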
A more involved workaround is to implement the convolution manually in a custom CUDA kernel and write a MEX function to interface it with MATLAB. You can then call the MEX function from MATLAB as usual, pass it the int8 data, and let it handle the kernel invocation. This approach lets you keep int8 activations in the convolution, but it bypasses "dlconv" entirely, since you are writing the convolution yourself.
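On the MATLAB side, the workflow could look like the sketch below. The file name `int8conv.cu` and the function `int8conv` are hypothetical placeholders for your own CUDA MEX source; compiling with `mexcuda` requires Parallel Computing Toolbox and a supported CUDA toolkit:

```matlab
% One-time compile of the hypothetical CUDA MEX source int8conv.cu,
% whose mexFunction gateway copies the int8 data to the GPU, launches
% the custom convolution kernel, and returns single-precision output
mexcuda int8conv.cu

% Example inputs: int8 activations, single-precision weights
Xint8 = int8(randi([-128 127], 28, 28, 3));
W = rand(3, 3, 3, 8, "single");

% Call the compiled MEX function like any other MATLAB function
Y = int8conv(Xint8, W);
```

Inside the kernel, a common pattern is to accumulate in int32 or float and apply any quantization scale at the end, so the int8 activations never need to be widened on the host.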
I hope this helps.

Categories

More about Image Data Workflows in Help Center and File Exchange.

Products


Version

R2022b

