I would like to train a network combining CNN and LSTM layers. Is this possible in MATLAB?

16 views (last 30 days)
I have image data and use imageInputLayer as the input to the 2D convolution layers, and then I would like to feed the result into an LSTM network. Is this possible in MATLAB, like the architecture in the picture below (found in a research paper image on Google)? I have tried the layers below but unfortunately was not successful. Can you please give some ideas on how to implement this?
layers = [ ...
    % CNN
    imageInputLayer([129 35 1])
    sequenceInputLayer(inputSize,'Name','input')
    convolution2dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer(3,64,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2,'Stride',2)
    flattenLayer('Name','flatten')
    % LSTM
    lstmLayer(numHiddenUnits,'OutputMode','last','Name','lstm')
    fullyConnectedLayer(numClasses,'Name','fc')
    softmaxLayer('Name','softmax')
    classificationLayer('Name','classification')];
  2 comments
Ullah Nadeem on 23 Jun 2023
This reply is quite late, but it may help the next person who searches for this.
My problem was resolved by putting a sequenceFoldingLayer right after the imageInputLayer and a sequenceUnfoldingLayer before the flatten layer.
I think this is because the LSTM layer needs the sequence information to keep long-range dependencies.
Cheers~
Ben on 23 Jun 2023
@Ullah Nadeem - thanks for replying. You're right that you need sequenceFoldingLayer and sequenceUnfoldingLayer when using trainNetwork for CNN-LSTM networks. We have this example that shows training an LSTM on CNN embeddings of video frames; the final network combines the CNN and LSTM for prediction using the sequence folding layers. We also have this example demonstrating training a CNN-LSTM on audio data.
Note that you need a sequenceInputLayer to input sequences of images into the CNN-LSTM network; see the sketch below.
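For reference, here is a minimal sketch of that trainNetwork arrangement. The [129 35 1] input size comes from the question; numHiddenUnits, numClasses, and the number of conv blocks are placeholders to adapt to your data:

numHiddenUnits = 100;   % placeholder
numClasses = 10;        % placeholder

layers = [
    sequenceInputLayer([129 35 1],'Name','input')
    sequenceFoldingLayer('Name','fold')            % apply the CNN to each time step
    convolution2dLayer(3,32,'Padding','same','Name','conv1')
    batchNormalizationLayer('Name','bn1')
    reluLayer('Name','relu1')
    maxPooling2dLayer(2,'Stride',2,'Name','pool1')
    convolution2dLayer(3,64,'Padding','same','Name','conv2')
    batchNormalizationLayer('Name','bn2')
    reluLayer('Name','relu2')
    maxPooling2dLayer(2,'Stride',2,'Name','pool2')
    sequenceUnfoldingLayer('Name','unfold')        % restore the sequence structure
    flattenLayer('Name','flatten')
    lstmLayer(numHiddenUnits,'OutputMode','last','Name','lstm')
    fullyConnectedLayer(numClasses,'Name','fc')
    softmaxLayer('Name','softmax')
    classificationLayer('Name','classification')];

lgraph = layerGraph(layers);
% The unfolding layer needs the mini-batch size computed by the folding layer.
lgraph = connectLayers(lgraph,'fold/miniBatchSize','unfold/miniBatchSize');
% lgraph can now be passed to trainNetwork together with image-sequence data.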
Also note that you do not need sequenceFoldingLayer or sequenceUnfoldingLayer when using convolution2dLayer in a dlnetwork with sequences of images - by default, convolution2dLayer in a dlnetwork will "distribute" over the sequence dimension of image sequences. To train the dlnetwork you will need to use a custom training loop.
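And here is a minimal sketch of the dlnetwork variant, with the same placeholder sizes. There are no folding/unfolding layers and no output layers (a dlnetwork does not contain a classificationLayer); the loss and update step would live in the custom training loop, which is not shown:

numHiddenUnits = 100;   % placeholder
numClasses = 10;        % placeholder

layers = [
    sequenceInputLayer([129 35 1],'Name','input')
    convolution2dLayer(3,32,'Padding','same','Name','conv1')
    batchNormalizationLayer('Name','bn1')
    reluLayer('Name','relu1')
    maxPooling2dLayer(2,'Stride',2,'Name','pool1')
    flattenLayer('Name','flatten')
    lstmLayer(numHiddenUnits,'OutputMode','last','Name','lstm')
    fullyConnectedLayer(numClasses,'Name','fc')
    softmaxLayer('Name','softmax')];

net = dlnetwork(layers);

% Forward a dummy mini-batch: 129-by-35-by-1 images, batch size 8,
% sequence length 16, formatted as 'SSCBT'. The convolution and pooling
% layers are applied to every time step automatically.
X = dlarray(rand([129 35 1 8 16],'single'),'SSCBT');
Y = forward(net,X);   % numClasses-by-8 scores, format 'CB'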

Answers (0)

Version

R2022a
