Early Stopping for Deep Networks

12 views (last 30 days)
Roberto on 15 Jan 2019
Edited: Greg Heath on 19 Jan 2019
Hi everyone,
just a quick question.
How can I stop the training of a deep network (an LSTM, for instance) so that the final weights and biases correspond to the minimum of the validation loss?
In other words, what is the point of having a validation set if the final network is NOT the one that minimizes the validation loss, because it ends up overtrained anyway?
The ValidationPatience parameter is not useful here: it stops the training too late, and setting it too small risks getting stuck in a local minimum.
The only way I found is to repeat the training with MaxEpochs set to the epoch where the validation loss reached its minimum in the first run, but that's a crazy solution...
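For reference, here is roughly what that two-pass workaround looks like in code (just a sketch; XTrain/YTrain, XVal/YVal and the layers array are placeholders for my sequence data and LSTM network, and the mini-batch size is arbitrary):
miniBatch     = 32;
itersPerEpoch = floor(numel(YTrain) / miniBatch);
rng(0);                                        % same seed so both runs start alike
opts1 = trainingOptions('adam', ...
    'MaxEpochs', 200, ...
    'MiniBatchSize', miniBatch, ...
    'ValidationData', {XVal, YVal}, ...
    'ValidationFrequency', itersPerEpoch, ...  % validate once per epoch
    'Verbose', false);
[~, info1] = trainNetwork(XTrain, YTrain, layers, opts1);   % first (full) run
% ValidationLoss is NaN on iterations without validation; min ignores NaN.
[~, bestIter] = min(info1.ValidationLoss);
bestEpoch     = max(1, ceil(bestIter / itersPerEpoch));
rng(0);                                        % restart from the same initial state
opts2 = trainingOptions('adam', ...
    'MaxEpochs', bestEpoch, ...
    'MiniBatchSize', miniBatch, ...
    'Verbose', false);
netBest = trainNetwork(XTrain, YTrain, layers, opts2);      % stop at the best epoch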
Any idea?
Thanks

Answers (1)

Greg Heath on 16 Jan 2019
Edited: Greg Heath on 16 Jan 2019
It is not clear to me that stopping on the basis of a random 15% of the data is the better choice. It would be interesting to make a formal comparison, based on multiple runs with different random number seeds, using multiple data sets.
I believe that a more important point is to try to minimize the number of hidden nodes subject to an upper bound on the training set error rate.
Hope this helps
Thank you for formally accepting my answer
Greg
  2 comments
Roberto on 17 Jan 2019
I'm not sure I understand. Do you mean that L2 regularization could outperform early stopping?
In my opinion the outcome is too dependent on the dataset for a formal comparison, but in order to compare the two methods we would still need a way to stop training early in Deep Learning Toolbox...
Greg Heath on 19 Jan 2019
Edited: Greg Heath on 19 Jan 2019
No. That is not what I meant.
HOWEVER
Any decent method will outperform others depending on the data set.
My shallow net double loop procedure (MANY examples in NEWSGROUP and ANSWERS) has been successful for decades:
  1. Single hidden layer
  2. Outer loop over number of hidden nodes H = 0:dH:Hmax
  3. Inner loop over random initial weights
I have not tried it on deep nets but am interested if anyone else has.
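For anyone who wants to try it, a bare-bones sketch of that double loop with fitnet would look like this (Hmax, dH, Ntrials and the simplefit_dataset data are only placeholders to make it run):
[x, t] = simplefit_dataset;            % example data; substitute your own
Hmax = 10; dH = 2; Ntrials = 5;        % placeholder search settings
bestVperf = Inf;
for H = 0:dH:Hmax                      % outer loop: number of hidden nodes
    for trial = 1:Ntrials              % inner loop: random initial weights
        rng(trial);                    % new random initialization each trial
        if H == 0
            net = fitnet([]);          % H = 0: linear model, no hidden layer
        else
            net = fitnet(H);
        end
        [net, tr] = train(net, x, t);  % default dividerand gives a validation set
        if tr.best_vperf < bestVperf   % keep the net with the lowest validation MSE
            bestVperf = tr.best_vperf;
            bestNet   = net;
            bestH     = H;
        end
    end
end
fprintf('Best H = %d, validation MSE = %.4g\n', bestH, bestVperf);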
Greg.

Version: R2018b
