Why is the data type not unified for custom training loops (dlarray) and internal training loops (array) in deep learning?

[XTrain,TTrain] = japaneseVowelsTrainData;

inputSize = 12;
numHead = 10;
numHiddenUnits = 100;
numClasses = 9;
embeddingDimension = 50;
numWords = 200;

layers = [
    sequenceInputLayer(inputSize)
    batchNormalizationLayer
    peepholeLSTMLayer(numHiddenUnits,inputSize,OutputMode="last")
    % lstmLayer(numHiddenUnits,'OutputMode','last')
    batchNormalizationLayer
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];
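For reference, a network like this would typically be trained with trainNetwork, along these lines (a minimal sketch; the solver and option values are assumptions, not from the question, and peepholeLSTMLayer must be a custom layer class on the MATLAB path, e.g. from the documentation example that defines it):

% Minimal training sketch; option values are illustrative assumptions.
options = trainingOptions("adam", ...
    MaxEpochs=60, ...
    MiniBatchSize=27, ...
    Plots="training-progress", ...
    Verbose=false);

net = trainNetwork(XTrain,TTrain,layers,options);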
For lstmLayer (a built-in layer), the data passed to the forward function during training is an ordinary numeric array.
For peepholeLSTMLayer, which is a user-defined custom layer, the data passed to the forward (predict) function during training is a dlarray.
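As a minimal sketch of that second observation (the layer name typeProbeLayer is hypothetical, not part of the question): a pass-through custom layer whose predict method reports the class of its input will print dlarray when it is placed in a layer array and trained.

% Save as typeProbeLayer.m. Pass-through custom layer that reports the
% class of the data it receives during training or inference.
classdef typeProbeLayer < nnet.layer.Layer
    methods
        function layer = typeProbeLayer(name)
            layer.Name = name;
            layer.Description = "Prints the class of the layer input";
        end
        function Z = predict(layer,X)
            disp("typeProbeLayer input class: " + class(X));  % dlarray during training
            Z = X;  % pass the data through unchanged
        end
    end
end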
Why is the data type not unified between custom training loops (dlarray) and internal training loops (numeric array) in deep learning? It causes trouble and inconvenience, and I think it also makes the toolbox bloated.
What puzzles me further: for built-in layers (lstmLayer) there is no layer validation with auto-generated example inputs, and the forward function is used during training; for user-defined layers there is layer validation with auto-generated example inputs, and the predict function, not forward, is used. Why the difference?
I think the Deep Learning Toolbox of MATLAB is over-engineered; implementing deep learning functionality should be concise and plain, but instead it is inconvenient and complicated.
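For what it's worth, the validation applied to user-defined layers can also be run manually with checkLayer (a sketch; the valid input size and the ObservationDimension value below are assumptions for sequence data with this input size, and the accepted syntax may differ across releases):

% Manually run the validity checks that the toolbox applies to custom layers,
% using auto-generated example inputs of the given size.
layer = peepholeLSTMLayer(numHiddenUnits,inputSize,OutputMode="last");
validInputSize = inputSize;                      % features per time step
checkLayer(layer,validInputSize,ObservationDimension=2)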

Answers (1)

arushi on 2 September 2024
Hi Jack,
The disparity in data types between custom training loops (dlarray) and internal training loops (numeric array) in deep learning can be attributed to the following reasons:
  1. Flexibility and Compatibility: The dlarray type offers flexibility by letting the same code run on different hardware (for example, on CPU or on GPU via gpuArray) and by supporting automatic differentiation in custom training loops; see the sketch after this list.
  2. Efficiency and Performance: Internal training loops use plain numeric arrays optimized for the underlying hardware and software, tailored for efficient execution of deep learning operations, which are not always fully compatible with the dlarray type.
  3. Framework-Specific Implementations: Different deep learning frameworks have their own internal representations for data, which leads to differences in the data types used within custom and internal training loops.
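A minimal sketch of what dlarray buys you in a custom training loop (the toy model and loss below are illustrative assumptions, not anything from the question): operations on dlarray objects are traced so that dlgradient can compute gradients inside dlfeval, and extractdata converts the result back to an ordinary numeric array.

X = dlarray(rand(12,8));                 % wrap ordinary numeric data as dlarray
W = dlarray(randn(9,12));                % learnable parameter (toy example)

[loss,gradW] = dlfeval(@modelLoss,W,X);  % evaluate with automatic differentiation
lossValue = extractdata(loss);           % convert back to an ordinary array

% Local function (place at the end of the script or in its own file).
function [loss,gradients] = modelLoss(W,X)
    Y = W*X;                             % toy linear model
    loss = sum(Y.^2,"all");              % toy scalar loss
    gradients = dlgradient(loss,W);      % gradients traced through dlarray operations
end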
I hope it helps!
