CNN-LSTM validation data underperforming compared to training data
13 views (last 30 days)
James Lu
on 4 Feb 2022
Commented: Saran Raj Research Scholar on 19 Mar 2023
Hi all,
I am working on a CNN-LSTM for classifying audio spectrograms. During training, the training curve performs very well (accuracy rises quickly and converges to ~100%, loss drops quickly and converges to ~0), but the validation curve struggles (accuracy stays around 50% and loss slowly increases). I have rerun this several times with randomly chosen training and validation sets, and I include a dropout layer after the LSTM layer. Hence, I am convinced the odd behavior isn't caused by data anomalies or overfitting. A screenshot is shown below.
When running classify() using the trained network and validation data, does MATLAB run the validation data through my convolution layers? If not, I suspect it is attempting to classify data that isn't convolved despite being trained on convolved spectrograms. This would explain the stark contrast between the training and validation curves.
If classify() does run validation data through my convolutional layers, then the network would indeed be seeing data in the same form it was trained on and still giving poor results, which may indicate I am overfitting somehow. However, I have no way of verifying these suspicions.
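One direct way to test this (an editor's sketch, assuming the trained network `net` and the data variables from the code below): run classify() on the training set itself. classify() passes inputs through the entire layer graph, convolutional layers included, so if its training-set accuracy matches the ~100% seen on the training curve while validation-set accuracy stays near 50%, the gap is a genuine generalization problem rather than a missing preprocessing step.

```matlab
% Sanity check: classify() applies the full trained graph (conv layers too).
yPredTrain = classify(net, xTrain);
trainAcc = mean(yPredTrain == yTrain)  % expect ~1.0 if training converged

yPredVal = classify(net, xVal);
valAcc = mean(yPredVal == yVal)        % ~0.5 here would confirm overfitting
```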
Thank you for your help. My CNN-LSTM code is given below for reference.
numHiddenUnits1 = 200;
numClasses = 2;
inputSize = [683 lengthMax 1];
layers = [
    % input matrix of spectrogram values
    sequenceInputLayer(inputSize,"Name","sequence")
    sequenceFoldingLayer("Name","fold")
    % convolutional layers
    convolution2dLayer([5 5],10,"Name","conv1","Stride",[2 1])
    reluLayer("Name","relu1")
    maxPooling2dLayer([5 5],"Name","maxpool1","Padding","same","Stride",[2 1])
    convolution2dLayer([5 5],10,"Name","conv2","Stride",[2 1])
    reluLayer("Name","relu2")
    maxPooling2dLayer([5 5],"Name","maxpool2","Padding","same","Stride",[2 1])
    convolution2dLayer([3 3],1,"Name","conv3","Padding",[1 1 1 1])
    reluLayer("Name","relu3")
    maxPooling2dLayer([2 2],"Name","maxpool3","Padding","same","Stride",[2 1])
    % unfold and feed into LSTM
    sequenceUnfoldingLayer("Name","unfold")
    flattenLayer("Name","flatten")
    bilstmLayer(numHiddenUnits1,"Name","bilstm","OutputMode","last")
    dropoutLayer(0.4,"Name","dropout")
    fullyConnectedLayer(numClasses,"Name","fc")
    softmaxLayer("Name","softmax")
    classificationLayer("Name","classoutput")
    ];
lgraph = layerGraph(layers);
lgraph = connectLayers(lgraph,'fold/miniBatchSize','unfold/miniBatchSize');
% Training
maxEpochs = 200;
learningRate = 0.001;
miniBatchSize = 15; % is this needed?
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'gpu', ...
    'GradientThreshold', 1, ...
    'MaxEpochs', maxEpochs, ...
    'MiniBatchSize', miniBatchSize, ...
    'SequenceLength', 'longest', ...
    'Verbose', 0, ...
    'ValidationData', {xVal, yVal}, ...
    'ValidationFrequency', 30, ...
    'InitialLearnRate', learningRate, ...
    'Plots', 'training-progress', ...
    'Shuffle', 'every-epoch');
net = trainNetwork(xTrain, yTrain, lgraph, options);
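Since validation loss climbs while training loss falls (the classic overfitting signature), a few trainingOptions changes are worth trying. This is an editor's sketch, not part of the original post, and the values are illustrative rather than tuned:

```matlab
% Sketch: weight decay plus validation-based early stopping.
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'gpu', ...
    'GradientThreshold', 1, ...
    'MaxEpochs', maxEpochs, ...
    'MiniBatchSize', miniBatchSize, ...
    'SequenceLength', 'longest', ...
    'Verbose', 0, ...
    'ValidationData', {xVal, yVal}, ...
    'ValidationFrequency', 30, ...
    'ValidationPatience', 10, ...      % stop when val loss stops improving
    'L2Regularization', 1e-3, ...      % weight decay to curb overfitting
    'InitialLearnRate', learningRate, ...
    'Plots', 'training-progress', ...
    'Shuffle', 'every-epoch');
```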
Accepted Answer
yanqi liu
on 7 Feb 2022
Yes sir, maybe add some dropoutLayer / batchNormalizationLayer instances to make the model generalize better.
If possible, maybe upload your data so others can help debug.
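A minimal sketch of the suggestion above (an editor's illustration, not from the original answer): batch normalization is typically inserted after each convolution and before the ReLU, e.g. for the first block of the asker's network. Layer names are illustrative.

```matlab
% First conv block with batch normalization added between conv and ReLU.
convolution2dLayer([5 5],10,"Name","conv1","Stride",[2 1])
batchNormalizationLayer("Name","bn1")
reluLayer("Name","relu1")
```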
More Answers (2)
Joss Knight
on 7 Feb 2022
I don't know why your model seems to be overfitting, but I can confirm that your validation data is being run through the exact same network as your training data.
1 comment
Saran Raj Research Scholar
on 19 Mar 2023
Sir, if we build our model as CNN-LSTM-FC-Softmax, does validation for this model take place the same way as for a normal CNN model? If not, please explain how the training and validation processes work.
Imola Fodor
on 8 Apr 2022
Edited: Imola Fodor on 8 Apr 2022
I had a similar problem, and it was because generating a particular spectrogram inside a parfor loop and inside a regular for loop gave different "images". I had generated the spectrograms in parfor for the training/validation sets (since there were a lot) and in a for loop for the test set (obviously fewer). The test results were poor, whereas the validation results were fine. The MathWorks team advised me to read the audio files in parallel and then generate the spectrograms, using PARFEVAL.
I would also recommend checking whether you prepared your audio inputs the same way for training and validation (normalization, etc.).
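One way to rule out the preprocessing mismatch described above (an editor's sketch; the helper name and spectrogram parameters are illustrative, not from the thread): route every set through a single shared function so training, validation, and test spectrograms are guaranteed to be computed identically, regardless of whether the files are read serially or in parallel.

```matlab
% Shared preprocessing: identical normalization and spectrogram parameters
% for every split, so no set is prepared differently from the others.
function S = makeSpectrogram(audioFile)
    [x, fs] = audioread(audioFile);
    x = x / max(abs(x));                           % same normalization everywhere
    S = abs(spectrogram(x, hann(512), 256, 512, fs));
end
```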