Doubt about early stopping
Hi,
I'm trying to train a neural network, but I have a doubt.
I know that during training, the error on both the training and validation sets decreases, and then, after some epochs, the validation error starts to grow; that moment indicates it is better to stop training, as in the classic early-stopping picture.
But in my case, my neural network shows a different behavior: both curves turn upward, where the red line is the validation error and the blue line is the training error over the epochs.
Why is this happening?
Just to be clear, at each epoch my algorithm is:
1) Shuffling the rows inside the validation and training sets (the rows of one set don't interfere with the rows of the other; each set is shuffled internally)
2) Processing the validation set through the forward step only, calculating the error on this set and saving it
3) Processing the training set through the forward and backward steps (to update the weights), calculating the error on this set and saving it
4) Stopping the algorithm if the error is smaller than a threshold that I can choose (in the run above this doesn't happen, but I have already found some hyperparameter configurations that reach the threshold)
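In MATLAB-style code, one epoch of my loop looks roughly like this (forwardError, trainOneEpoch, and errTol are placeholders for my own functions and threshold, not Deep Learning Toolbox calls):

% One epoch of my loop; net, trainData, valData, maxEpochs are set up earlier.
for epoch = 1:maxEpochs
    % 1) shuffle each set internally (they never mix)
    trainData = trainData(randperm(size(trainData,1)), :);
    valData   = valData(randperm(size(valData,1)), :);
    % 2) forward step only on the validation set
    valErr(epoch) = forwardError(net, valData);
    % 3) forward + backward on the training set (updates the weights)
    [net, trainErr(epoch)] = trainOneEpoch(net, trainData);
    % 4) stop once the error falls below my chosen threshold
    if trainErr(epoch) < errTol
        break;
    end
end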
Answers (1)
Satwik
31 Jul 2024
Hi,
Based on your description, I understand that you are experiencing unusual behaviour in the validation error and training error. In a typical neural network training process, the training error consistently decreases, while the validation error initially decreases and then starts to increase, indicating overfitting. However, in your case, both the training error (blue line) and the validation error (red line) initially decrease and then both start to increase after some epochs. This suggests a few points to consider and correct in your training procedure:
- Shuffling the validation set: shuffling the validation set before each epoch can lead to inconsistent evaluation metrics. Keep the validation set fixed throughout training so it provides a consistent measure of the model's performance.
- Using the validation set for training: the validation set should only ever be passed through the forward step to evaluate the model's performance. If it is ever processed through the backward step and used to update the weights, that is not standard practice; never perform backpropagation or weight updates with validation data.
- Learning rate and hyperparameters: an inappropriate learning rate or other hyperparameter settings can cause unstable training, including errors that rise after initially falling. Experiment with different learning rates, batch sizes, and other hyperparameters.
By addressing these issues, you should be able to achieve more stable and predictable training behaviour, with the validation error and training error behaving as expected.
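As a rough illustration, a per-epoch loop that follows these points could look like the sketch below. Here trainOneEpoch and evalError are hypothetical helpers standing in for your own training and evaluation code, and the patience counter is one common way to implement early stopping, not the only option:

% Assumes net, trainData, valData, maxEpochs, learnRate are already set up.
bestValErr = inf;   % lowest validation error seen so far
sinceBest  = 0;     % epochs since the last improvement
patience   = 10;    % how many non-improving epochs to tolerate
for epoch = 1:maxEpochs
    % shuffle only the training set; the validation set stays fixed
    trainData = trainData(randperm(size(trainData,1)), :);
    % forward + backward on training data only (weight updates happen here)
    [net, trainErr(epoch)] = trainOneEpoch(net, trainData, learnRate);
    % forward pass only on the fixed validation set; no weight updates
    valErr(epoch) = evalError(net, valData);
    % early stopping: keep the best network so far and stop when the
    % validation error has not improved for 'patience' consecutive epochs
    if valErr(epoch) < bestValErr
        bestValErr = valErr(epoch);
        bestNet    = net;
        sinceBest  = 0;
    else
        sinceBest = sinceBest + 1;
        if sinceBest >= patience
            break;
        end
    end
end
% After the loop, bestNet holds the network with the lowest validation error.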
Hope this helps!