Scaled Conjugate Gradient - NN toolbox
Hi,
I have used MATLAB's 'trainscg' with 'mse' as the performance function and NETLAB's 'scg', also with 'mse', on the same training data set, yet I still don't obtain the same generalisation on my other data files.
I have used the same Nguyen-Widrow initialisation method for weight and bias initialisation, and the same 'dividerand' method to split the data into training, validation and testing sets.
I know the difference could lie in the various parameters used. In the original paper (http://www.sciencedirect.com/science/article/pii/S0893608005800565), the lambda values are specified not as exact values but as inequalities. I have used values that don't violate the constraints laid down by the author.
Also, one thing that seems a bit bizarre to me is that MATLAB stops the learning after just 23 epochs, while NETLAB exceeds its maximum number of iterations. I understand the stopping criteria may differ.
Has anyone here worked with both of these toolboxes and found a way of obtaining the same results from both? I would appreciate any general ideas and tips for making NETLAB's SCG give results similar to MATLAB's TRAINSCG.
Any help or advice will be greatly appreciated.
Thank you. Pooja
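Not an answer, but a sketch of how the tunable parameters on both sides could be aligned for a fairer comparison. The MATLAB field names below are the documented trainscg trainParam fields; the NETLAB side follows its standard options-vector convention. The hidden-layer size and the variables x, t are illustrative assumptions, not from the original post.

```matlab
% --- MATLAB side (sketch): match SCG parameters and relax early stopping ---
net = feedforwardnet(10, 'trainscg');   % hidden size 10 is illustrative
net.divideFcn = 'dividerand';           % same split strategy as in the question
net.trainParam.sigma    = 5e-5;         % change in weight for 2nd-derivative approx.
net.trainParam.lambda   = 5e-7;         % initial lambda for Hessian regulation
net.trainParam.epochs   = 500;          % match NETLAB's iteration cap
net.trainParam.max_fail = 1000;         % effectively disable validation stopping
[net, tr] = train(net, x, t);           % x, t: your training data

% --- NETLAB side (sketch): options(14) is the iteration cap ---
options = zeros(1, 18);
options(1)  = 1;                        % display error values during training
options(14) = 500;                      % same cap as MATLAB's epochs above
[netlab_net, options] = netopt(netlab_net, options, x', t', 'scg');
```

With validation stopping effectively off in MATLAB, both runs should terminate on the same criterion (iteration count or gradient tolerance), which makes the 23-epochs-versus-max-iterations discrepancy easier to diagnose.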
1 comment
Pooja Narayan
12 Aug 2014
Accepted Answer
More Answers (1)
saba momeni
1 Feb 2019
Hi everyone
I am training my feedforward neural network with scaled conjugate gradient.
I am not sure whether scaled conjugate gradient does its optimization in batch or mini-batch mode.
I only specify the lambda and the sigma for it, not a batch size.
I appreciate your answer.
Cheers
S
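For reference, lambda and sigma are the only algorithm-specific parameters that trainscg exposes; as far as I know it computes the gradient over the whole training set each iteration, which is why there is no batch-size setting. A minimal sketch (variable names x, t and the hidden size are assumptions):

```matlab
% Sketch: trainscg exposes sigma and lambda plus generic stopping criteria;
% no mini-batch size exists because training uses the full data set per epoch.
net = feedforwardnet(20, 'trainscg');
net.trainParam.sigma  = 5e-5;   % change in weight for 2nd-derivative approximation
net.trainParam.lambda = 5e-7;   % regulates the indefiniteness of the Hessian
[net, tr] = train(net, x, t);   % x, t: your training inputs and targets
```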