Would this be considered underfitting?
Lucas Ferreira-Correia on 31 Aug 2020
I'm training an LSTM (with 410 datasets) to simulate the response of a system.
Network settings are as follows:
layer = [
    sequenceInputLayer(3,"Name","Sequential Input Layer")
    fullyConnectedLayer(50,"Name","Fully Connected Layer")
    fullyConnectedLayer(1,"Name","Fully Connected Layer2")
    regressionLayer("Name","Regression Output Layer")];
When training, the following learning curve is shown. The training and validation RMSE never converge and remain offset.
Does this indicate underfitting? If not, what am I looking at, and is it acceptable?
Thank you in advance!
Anshika Chaurasia on 3 Sep 2020
It is my understanding that you want to know whether your model is underfitting and, if not, why the training and validation losses are not converging.
“Underfitting occurs when the model is not able to obtain a sufficiently low error value on the training set.” – Deep Learning, by Ian Goodfellow
Looking at the graph, both the training and validation loss curves reach low values, so we can say the model is not underfitting.
In the graph, the validation loss is lower than the training loss, which can happen for the following reasons:
- The validation dataset may be easier to learn than the training dataset. Check whether the validation set follows the same distribution as the training set.
- Regularization: dropout is applied during training only, so the training loss is computed with dropout active while the validation loss is not. Dropout helps achieve better generalization on unseen data.
The reason the training and validation curves never converge and remain offset could be that the model stops learning after a certain number of epochs. You could experiment with hyperparameters such as the learning rate, the number of layers, and the dropout probability.
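As a sketch of that kind of experiment, here is one way to set it up in MATLAB with the Deep Learning Toolbox. All the concrete values (layer sizes, dropout probability, learning rate, epoch count) and the variable names XTrain/YTrain/XValidation/YValidation are illustrative assumptions, not values from the original post:

```matlab
% Illustrative network with a recurrent layer and dropout -- values are assumptions to tune.
layers = [
    sequenceInputLayer(3)
    lstmLayer(50,"OutputMode","sequence")   % recurrent layer for sequence data
    dropoutLayer(0.2)                       % dropout probability to experiment with
    fullyConnectedLayer(1)
    regressionLayer];

options = trainingOptions("adam", ...
    "InitialLearnRate",1e-3, ...            % learning rate to experiment with
    "MaxEpochs",100, ...
    "ValidationData",{XValidation,YValidation}, ...
    "Plots","training-progress");

% net = trainNetwork(XTrain,YTrain,layers,options);
```

Re-running training while varying one hyperparameter at a time makes it easier to see which change closes the gap between the two curves.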