Why is my neural network performing worse as the number of hidden layers increases?

Hello, I am currently using the MATLAB Neural Network Toolbox to experiment with the Iris dataset. I am training with the "trainlm" algorithm, and I decided to see what would happen if I trained networks with 1 to 20 hidden layers. I was not expecting any change in the classification error, but when I do this I get the output shown below.
I have been looking for an explanation, but I cannot see why the classification error begins to jump around, or increases at all, as the number of hidden layers grows.
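Roughly, the experiment looks like the sketch below (a reconstruction rather than my exact code, assuming the iris_dataset that ships with the toolbox and a single neuron in each hidden layer):
% Reconstruction of the experiment: h hidden layers with one neuron each,
% trained with trainlm on the toolbox iris_dataset.
[x, t] = iris_dataset;                            % x is 4x150, t is 3x150
err = zeros(1, 20);
for h = 1:20
    net = feedforwardnet(ones(1, h), 'trainlm');  % h hidden layers, 1 neuron each
    net.trainParam.showWindow = false;            % suppress the training GUI
    net = train(net, x, t);
    err(h) = 100 * mean(vec2ind(net(x)) ~= vec2ind(t));  % classification error in %
end
plot(1:20, err)
xlabel('number of hidden layers'), ylabel('classification error (%)')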
Thank You

Accepted Answer

Greg Heath
Greg Heath on 2 Aug 2015
The ultimate goal is to obtain a net that performs well on non-training data that comes from the same or a similar source as the training data. This is called GENERALIZATION.
Frequent causes of failure are
1. Not enough weights to adequately characterize the training data
2. Training data does not adequately characterize the salient features of non-training data because of measurement error, transcription error, noise, interference, or insufficient sample size and variability
3. Fewer training equations than unknown weights.
4. Random weight initialization
Various techniques used to mitigate these causes are
1. Remove bad data and outliers (plots help)
2. Use enough training data to sufficiently characterize non-training data.
3. Use enough weights to adequately characterize the training data
4. Use more training equations than unknown weights. The stability of solutions w.r.t. noise and errors increases as the ratio increases.
5. Use the best of multiple random initialization & data-division designs
6. K-fold Cross-validation
7. Validation Stopping
8. Regularization
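For items 7 and 8, the corresponding toolbox settings look roughly like this (the values shown are only illustrative, not recommendations):
% Illustrative settings for validation stopping and regularization:
[x, t] = iris_dataset;
net = patternnet(10);                        % 10 hidden nodes, default trainscg
net.divideParam.trainRatio = 0.70;           % default dividerand split
net.divideParam.valRatio   = 0.15;           % validation set enables early stopping
net.divideParam.testRatio  = 0.15;
net.trainParam.max_fail    = 6;              % stop after 6 validation failures
net.performParam.regularization = 0.1;       % example weight-decay penalty
[net, tr] = train(net, x, t);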
For the iris_dataset
[ I N ] = size(input) % [ 4 150 ]
[ O N ] = size(target) % [ 3 150 ]
Assuming the default 0.7/0.15/0.15 trn/val/tst data division, the number of training equations is approximately
Ntrneq = 0.7*N*O % 315
Assuming the default I-H-O node topology, the number of unknown weights is
Nw = (I+1)*H+(H+1)*O = (I+O+1)*H + O
Obviously, Nw <= Ntrneq when H <= Hub (upper bound), where
Hub = floor( (Ntrneq-O)/(I+O+1)) % 39
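Putting those numbers together in one runnable check (default data division assumed):
% Quick numerical check of the quantities above:
[x, t]  = iris_dataset;
[I, N]  = size(x)                            % I = 4,  N = 150
[O, N]  = size(t)                            % O = 3,  N = 150
Ntrneq  = 0.7 * N * O                        % 315 training equations
Nw      = @(H) (I + O + 1) * H + O;          % unknown weights for an I-H-O net
Hub     = floor((Ntrneq - O) / (I + O + 1))  % 39, so H <= 39 keeps Nw <= Ntrneq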
Expecting decent solutions for H <= 20 seems reasonable. However, to mitigate the effects of random initial weights and random data division, design 10 nets for each value of H and keep the best.
I have posted zillions of examples in both the NEWSGROUP and ANSWERS. I use patternnet for classification.
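A minimal sketch of such a design loop with patternnet (its default trainscg training function assumed; 10 trials per candidate H, scored on the test split):
% Sketch: Ntrials random designs for each candidate H, keep the test errors.
[x, t]  = iris_dataset;
Hvals   = 1:20;
Ntrials = 10;
tstErr  = zeros(Ntrials, numel(Hvals));
for j = 1:numel(Hvals)
    for i = 1:Ntrials
        net = patternnet(Hvals(j));          % one hidden layer of H nodes
        net.trainParam.showWindow = false;
        [net, tr] = train(net, x, t);        % random init and data division
        ytst = net(x(:, tr.testInd));        % evaluate on the test split only
        tstErr(i, j) = 100 * mean(vec2ind(ytst) ~= vec2ind(t(:, tr.testInd)));
    end
end
bestErr = min(tstErr)                        % best of the 10 designs for each H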
Hope this helps.
Thank you for formally accepting my answer
Greg
  5 comments
bitslice
bitslice on 2 Aug 2015
Also, I used only one node in each of these hidden layers.
bitslice
bitslice on 2 Aug 2015
Ok, with your tips I was able to figure this out:
Changing the random seed does indeed change the results, which implies that the increased error is due to the random weight initialization.
Thank you so much!


More Answers (1)

Walter Roberson
Walter Roberson on 2 Aug 2015
Each layer is initialized randomly. If you do not provide enough data to train the effects of the randomness out, then you see the cumulative effect of that randomness.
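A quick way to see this (sketch only; the seed value is arbitrary) is to fix the random number generator before each design, so that the weight initialization and the dividerand split are repeatable:
% Fixing the seed makes initialization and data division repeatable,
% so differences between runs come only from the network architecture.
[x, t] = iris_dataset;
for h = [1 10 20]
    rng(0);                                  % arbitrary fixed seed
    net = feedforwardnet(ones(1, h), 'trainlm');
    net.trainParam.showWindow = false;
    net = train(net, x, t);
    fprintf('%2d layers: %5.1f%% misclassified\n', ...
        h, 100 * mean(vec2ind(net(x)) ~= vec2ind(t)));
end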
  3 comments
Walter Roberson
Walter Roberson on 2 Aug 2015
Greg Heath has written several times about the amount of data that one should use, but I cannot think of good keywords at the moment to search for.

