Using different learning algorithms for the neural net toolkit

How do I go about implementing a genetic algorithm (for example) to optimize the weights of a neural network and find the global minimum of the error? I'm worried that the built-in trainers are not adequate to find the global minimum.

Accepted Answer

Greg Heath
Greg Heath on 28 Jul 2016
Edited: Greg Heath on 5 Aug 2016
You may be worrying about the wrong thing. With a typical I-H-O FFnet, the number of equivalent nets obtained just by changing weight signs and hidden-node index order is ~
2^H * factorial(H) (= 3.7159e+09 for the default H = 10)
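As a quick sanity check, that count can be reproduced at the command line:

```
% Number of sign/permutation-equivalent nets for H hidden nodes
H = 10;                        % default number of hidden nodes
Nequiv = 2^H * factorial(H)    % 1024 * 3628800 = 3.7159e+09
```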
I find the best bet is just to find one of the many nets that MINIMIZE THE NUMBER OF HIDDEN NODES subject to the following upper bound on mean-square error:
MSE = mse(error) <= 0.001*MSE00
where
error = target - output;
MSE00 = mean(var(target',1)) % Average target variance
The resulting bounds on normalized MSE and R-squared (Google "R squared") are
NMSE = MSE/MSE00 <= 0.001
Rsq = 1 - NMSE >= 0.999
which is interpreted as successfully modelling more than 99.9% of the target variance.
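Putting the formulas above together in MATLAB (the target and output data here are illustrative stand-ins, not from any real training run):

```
t = [1 2 3; 4 5 6];            % example targets (2 outputs, 3 cases)
y = t + 0.01*randn(size(t));   % stand-in for the net's output
error = t - y;
MSE   = mse(error);            % mean-square error, as above
MSE00 = mean(var(t', 1));      % average target variance (biased variance)
NMSE  = MSE / MSE00;           % normalized MSE; want <= 0.001
Rsq   = 1 - NMSE               % R-squared; want >= 0.999
```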
Initial weights are random. Therefore it is wise to make a double-loop search over the number of hidden nodes and initial random-number states. I have posted zillions of examples in both the NEWSGROUP and ANSWERS. Good search words are
greg Hmin Hmax Ntrials
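A minimal sketch of that double-loop search, assuming fitnet and an example dataset shipped with the toolbox (the variable names Hmin, Hmax, Ntrials follow the search terms above; the rest is illustrative, not Greg's exact posted code):

```
[x, t] = simplefit_dataset;           % example data from the toolbox
MSE00  = mean(var(t', 1));            % average target variance
Hmin = 1; Hmax = 10; Ntrials = 5;
bestH = NaN; bestNMSE = Inf;
for H = Hmin:Hmax                     % outer loop: hidden-node count
    for trial = 1:Ntrials             % inner loop: random initial states
        rng(trial)                    % reproducible initial weights
        net = fitnet(H);
        net.trainParam.showWindow = false;
        net = train(net, x, t);
        y    = net(x);
        NMSE = mse(t - y) / MSE00;    % normalized MSE
        if NMSE < bestNMSE
            bestNMSE = NMSE; bestH = H;
        end
    end
    if bestNMSE <= 0.001, break, end  % stop at the smallest adequate H
end
```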
If you insist on using a genetic algorithm, see my post.
Hope this helps.
Thank you for formally accepting my answer
Greg

More Answers (0)


