Is there any downside of pruning your neural network?
Ghazi Binarandi
26 Feb 2016
Answered: Greg Heath
27 Feb 2016
Hi all,
As far as I know, pruning (syntax: prune) removes any of your inputs, layers, and outputs whose size is zero. In other words, pruning makes your network more efficient. However, is there any downside to pruning your neural network?
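For reference, a minimal sketch of calling prune as the question describes. This assumes the classic (shallow) Neural Network Toolbox network object and the documented output pattern of pruned index lists; the demo data is made up:

```matlab
% Build a small feedforward net and configure it for some dummy data.
net = feedforwardnet(10);                 % one hidden layer, 10 nodes
x = rand(3, 20);                          % 3 inputs, 20 samples (synthetic)
t = rand(1, 20);                          % 1 target
net = configure(net, x, t);

% prune removes zero-sized inputs, layers, and outputs and returns
% the indices of what was removed (assumed signature).
[net, prunedInputs, prunedLayers, prunedOutputs] = prune(net);
```

On a network with no zero-sized elements, the returned index lists are simply empty and the network is unchanged, which is one answer to "is there a downside": for a well-formed net there is nothing to lose.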
Thanks,
Ghazi
Accepted Answer
Greg Heath
27 Feb 2016
Nets with one hidden layer can be universal approximators.
From 1979 to 2003 I used FORTRAN to design two-class classifiers. They had a single RBF hidden layer, with nodes connected to one (not both) of two outputs.
The number of hidden nodes depended on how many types of radar missile target data were available.
Over the past 12 years of retirement, 99.9% of my designs (regression, classification and time-series) are fully connected MATLAB functions with a single hidden layer.
Typically, my designs automatically search for the minimum number of hidden nodes that will yield the desired error rate.
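A minimal sketch of that kind of search, assuming fitnet and a simple grow-until-goal loop (the names goalMSE and Hmax, and the demo dataset, are illustrative, not Greg's actual code):

```matlab
% Search for the smallest single hidden layer that meets an error goal.
[x, t] = simplefit_dataset;              % built-in toolbox demo data
goalMSE = 0.01;                          % desired mean-squared error (assumed)
Hmax = 20;                               % upper bound on hidden nodes (assumed)
for H = 1:Hmax
    net = fitnet(H);                     % single hidden layer, H nodes
    net.trainParam.showWindow = false;   % suppress the training GUI
    net = train(net, x, t);
    mseTrn = perform(net, t, net(x));    % MSE on the data
    if mseTrn <= goalMSE
        fprintf('H = %d meets the goal (MSE = %.4g)\n', H, mseTrn);
        break
    end
end
```

In practice one would also repeat each H over several random weight initializations and validate on held-out data, since a single training run can land in a poor local minimum.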
I have posted zillions of examples in both the NEWSGROUP and ANSWERS. Just search including the phrase
NEURAL GREG
As for choosing inputs: if there aren't too many, I can tolerate a few ineffective ones. Otherwise, I just use STEPWISEFIT on a linear model for input variable selection.
I have used other, more sophisticated techniques such as PCA, PLS, forward search, and backward search. In general, however, linear-model feature selection has been fast and satisfactory.
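A small sketch of the linear-model selection step, assuming stepwisefit from the Statistics and Machine Learning Toolbox; the data here is synthetic, with only two of six candidate inputs actually informative:

```matlab
% Synthetic regression problem: 6 candidate inputs, only 1 and 4 matter.
rng(0)
X = randn(100, 6);
y = 2*X(:,1) - 3*X(:,4) + 0.1*randn(100, 1);

% Stepwise linear regression; inmodel flags the retained inputs.
[b, se, pval, inmodel] = stepwisefit(X, y);
selected = find(inmodel);                % indices of inputs kept by the model
```

The selected inputs can then be fed to the neural network, which keeps the (slow) nonlinear training out of the feature-selection loop.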
Hope this helps.
Thank you for formally accepting my answer
Greg