How to use an LSTM network to forecast images?
Ning Liu
on 22 May 2021
Commented: Dieter Mayer
on 21 Oct 2022
Hi, I have an image time series dataset in which each image is 785*785*3 and the series length is 400. I want to train an LSTM network to fit y_t = f(y_{t-1}), where y_t is the image at time t and y_{t-1} is the image at time t-1. I use the previous 350 images to prepare the training data and the last 50 images to test the forecast results. I construct a simple LSTM network as follows,
layers = [ ...
    sequenceInputLayer([785,785,3],'Name','input')
    flattenLayer('Name','flatten')
    lstmLayer(200,'OutputMode','sequence','Name','lstm1')
    lstmLayer(200,'OutputMode','sequence','Name','lstm2')
    fullyConnectedLayer(785*785*3,'Name','fc')
    regressionLayer('Name','routput')];
options = trainingOptions('adam', ...
    'MaxEpochs',300, ...
    'GradientThreshold',1, ...
    'InitialLearnRate',1e-4, ...
    'LearnRateSchedule','piecewise', ...
    'LearnRateDropPeriod',125, ...
    'LearnRateDropFactor',0.2, ...
    'Verbose',0, ...
    'Plots','training-progress');
% X_train is a cell array of length 350; each element is a [785,785,3] double array
% y_train is a cell array of length 350; each element is a [785*785*3,1] double vector
net = trainNetwork(X_train,y_train,layers,options);
% prime the network state by stepping through the training images
for i = 1:length(X_train)
    [net,YPred] = predictAndUpdateState(net,X_train{i},'ExecutionEnvironment','cpu');
end
% closed-loop forecast: feed each prediction back as the next input
Ypred = zeros(785,785,3,50);
% first forecast: the input is the last prediction from the priming loop,
% reshaped back to image size (the network output is a flattened vector)
[net,YPred] = predictAndUpdateState(net,reshape(YPred,785,785,3),'ExecutionEnvironment','cpu');
Ypred(:,:,:,1) = reshape(YPred,785,785,3);
for i = 2:length(X_test)
    [net,YPred] = predictAndUpdateState(net,Ypred(:,:,:,i-1),'ExecutionEnvironment','cpu');
    Ypred(:,:,:,i) = reshape(YPred,785,785,3);
end
When I compare the forecast result Ypred with my test data X_test, the results are poor. I suspect the reason is that I designed an inappropriate LSTM network. Any comments and suggestions are welcome! Thanks a lot!
1 comment
Dieter Mayer
on 21 Oct 2022
Hi,
What you need is a real convolutional LSTM (ConvLSTM). This is not a convolutional layer followed by an LSTM layer, but an LSTM that internally uses convolutions instead of matrix multiplications (see also Link).
A ConvLSTM recognizes and preserves spatial patterns; otherwise, each pixel is handled individually.
It seems that MATLAB does not offer ConvLSTMs yet; I really hope that this type of layer is in development.
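For reference, the ConvLSTM update (following the Shi et al. 2015 formulation, written here without the optional peephole terms; * denotes convolution and .* elementwise multiplication) is roughly:
i_t = sigmoid(W_xi * X_t + W_hi * H_{t-1} + b_i)
f_t = sigmoid(W_xf * X_t + W_hf * H_{t-1} + b_f)
g_t = tanh(W_xc * X_t + W_hc * H_{t-1} + b_c)
C_t = f_t .* C_{t-1} + i_t .* g_t
o_t = sigmoid(W_xo * X_t + W_ho * H_{t-1} + b_o)
H_t = o_t .* tanh(C_t)
These are the ordinary LSTM gate equations with every matrix-vector product replaced by a convolution over feature maps, so the hidden state H_t and cell state C_t stay spatial.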
Another approach is to use the U-Net architecture to make image-sequence-to-image-sequence forecasts. U-Nets can be built up in MATLAB from convolutions (encoder), transposed convolutions (decoder), and concatenation layers (which connect the encoder and decoder blocks); a minimal sketch is below.
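A minimal two-level encoder-decoder sketch along these lines, assuming a downscaled 224x224 input; the filter counts, kernel sizes, and layer names are placeholders, not a tested architecture (the Computer Vision Toolbox also ships unetLayers, but its segmentation head would need to be swapped for a regression output):
% two-level U-Net-style encoder-decoder for image-to-image regression
lgraph = layerGraph([
    imageInputLayer([224 224 3],'Normalization','none','Name','in')
    convolution2dLayer(3,16,'Padding','same','Name','enc1_conv')   % encoder block 1
    reluLayer('Name','enc1_relu')
    maxPooling2dLayer(2,'Stride',2,'Name','pool1')
    convolution2dLayer(3,32,'Padding','same','Name','enc2_conv')   % encoder block 2 (bottleneck)
    reluLayer('Name','enc2_relu')
    transposedConv2dLayer(2,16,'Stride',2,'Name','up1')            % decoder: upsample back to input size
    depthConcatenationLayer(2,'Name','skip1')                      % skip connection from the encoder
    convolution2dLayer(3,16,'Padding','same','Name','dec1_conv')
    reluLayer('Name','dec1_relu')
    convolution2dLayer(1,3,'Name','out_conv')                      % back to 3 output channels
    regressionLayer('Name','out')]);
% attach the encoder features to the second input of the concatenation layer
lgraph = connectLayers(lgraph,'enc1_relu','skip1/in2');
analyzeNetwork(lgraph)   % optional sanity check of layer sizes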
Accepted Answer
Prateek Rai
on 13 Sep 2021
Edited: Prateek Rai
on 13 Sep 2021
To my understanding, you are trying to use an LSTM network to forecast images. A possible workaround could be:
(a) Use CNN layers to extract features from each image, so that the final result of the convolutions is a 1-D vector.
(b) Feed this 1-D vector to the LSTM, which again outputs a new 1-D vector.
(c) Feed the output of the LSTM to a set of transposed convolutional layers to recover the output image.
You can refer to the Convolutional Neural Network MathWorks documentation page to find more on CNNs, the transposedConv2dLayer MathWorks documentation page to find more on transposed convolutional layers, and the Long Short-Term Memory Networks MathWorks documentation page to find more on LSTMs. A rough sketch of steps (a) and (b) is below.
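As a sketch of steps (a) and (b), the per-frame CNN encoder and the LSTM can be combined with the sequence folding/unfolding pattern. The filter counts, strides, and hidden size below are placeholders; step (c) is only approximated by a fully connected layer, because mapping the LSTM output back to a spatial feature map for transposed convolutions needs an additional reshape stage.
inputSize  = [785 785 3];
numOutputs = prod(inputSize);   % flattened output image, as in the original network
layers = [
    sequenceInputLayer(inputSize,'Name','input')
    sequenceFoldingLayer('Name','fold')                   % apply the CNN to every time step
    convolution2dLayer(7,16,'Stride',4,'Name','conv1')
    reluLayer('Name','relu1')
    maxPooling2dLayer(2,'Stride',2,'Name','pool1')
    convolution2dLayer(3,32,'Stride',2,'Name','conv2')
    reluLayer('Name','relu2')
    globalAveragePooling2dLayer('Name','gap')             % collapse each frame to a feature vector
    sequenceUnfoldingLayer('Name','unfold')               % restore the time dimension
    flattenLayer('Name','flatten')
    lstmLayer(256,'OutputMode','sequence','Name','lstm')
    fullyConnectedLayer(numOutputs,'Name','fc')           % stand-in for the transposed-conv decoder
    regressionLayer('Name','output')];
lgraph = layerGraph(layers);
% the folding and unfolding layers must share the mini-batch size
lgraph = connectLayers(lgraph,'fold/miniBatchSize','unfold/miniBatchSize');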
More Answers (0)