
why my regression plot is inverse

2 views (last 30 days)
hamed
hamed on 26 Aug 2014
Commented: Greg Heath on 21 Sep 2014
Hello guys, I used nftool to classify some data.
Why is my regression plot inverted?
What should I do?

Accepted Answer

Greg Heath
Greg Heath on 28 Aug 2014
Edited: Greg Heath on 28 Aug 2014
Oh! You have a classifier with {0,1} targets!
You are probably using the wrong function. Use patternnet (not fitnet or feedforwardnet). Then, instead of getting inappropriate regression plots, you will get confusion-matrix and ROC plots.
Compare the results from the three different functions:
net = patternnet; % Case 1 Classification and Pattern Recognition
%net = fitnet; % Case 2 Regression and Curve-fitting
%net = feedforwardnet; % Case 3 Regression and Curve-fitting
plotFcns = net.plotFcns % No semicolon
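A minimal end-to-end sketch of Case 1, assuming the Deep Learning Toolbox sample dataset iris_dataset is available (the dataset choice and hidden-layer size are illustrative, not from the original post):

```matlab
% Classification with patternnet instead of fitnet/feedforwardnet
[x, t] = iris_dataset;     % 4 inputs, 3-class one-hot (0/1) targets
net = patternnet(10);      % pattern-recognition net, 10 hidden nodes
net = train(net, x, t);    % defaults to trainscg with cross-entropy
y = net(x);
plotconfusion(t, y)        % confusion matrix replaces the regression plot
plotroc(t, y)              % ROC curves
```

With a {0,1} target matrix of your own, the same calls apply; patternnet's default plot functions are the confusion and ROC plots rather than plotregression.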
Hope this helps.
Thank you for formally accepting my answer
Greg

More Answers (2)

Sean de Wolski
Sean de Wolski on 26 Aug 2014
You've likely overfit your model to your data. Try changing the data-division ratios (less in training, more in validation and test) to avoid overfitting.
  1 comment
Greg Heath
Greg Heath on 21 Sep 2014
To avoid overfitting, use one or more of the following:
1. Reduce the number of hidden nodes.
2. INCREASE the amount of training data.
For regression, also:
3. Use msereg instead of mse.
4. Use trainbr instead of trainlm.
However, for classification (trainscg, crossentropy) I'm not sure how to modify 3 and 4.
Nonetheless, overfitting may not be your problem.
Try 10 or more designs with different initial weights. See my 27 Aug answer.
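For the regression case, items 1, 3, and 4 above might look like the following sketch (the sample dataset and hidden-layer size are illustrative; in newer toolbox releases msereg may be absent, with regularization set via net.performParam.regularization on mse instead):

```matlab
[x, t] = simplefit_dataset;   % sample curve-fitting data
net = fitnet(5);              % 1. fewer hidden nodes than the default 10
net.trainFcn = 'trainbr';     % 4. Bayesian regularization instead of trainlm
                              %    (trainbr regularizes internally and
                              %     does not use a validation set)
net.performFcn = 'msereg';    % 3. regularized MSE, where available
net = train(net, x, t);
```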



Greg Heath
Greg Heath on 27 Aug 2014
Default solutions depend on the random trn/val/tst data division, the random weight initialization, and the choice of the number of hidden nodes (H = 10 is the default). Therefore, it is often necessary to choose the 'best' of multiple designs.
My personal approach is to try 10 random initializations for each trial value of H smaller than the upper bound Hub = -1 + ceil((Ntrn*O - O)/(I + O + 1)). This ensures that there are more training equations, Ntrneq = Ntrn*O, than unknown weights, Nw = (I+1)*H + (H+1)*O.
For details search for my examples using
greg Nw Ntrials
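The double loop over H and random initializations described above could be sketched as follows (illustrative only; the dataset, the 0.7 training fraction, and the use of best validation performance as the selection criterion are assumptions, with variable names following the post's notation):

```matlab
[x, t] = iris_dataset;
[I, N] = size(x);                 % I inputs, N cases
O = size(t, 1);                   % O outputs
Ntrn = floor(0.7*N);              % assumed default training fraction
Hub = -1 + ceil((Ntrn*O - O)/(I + O + 1));   % upper bound on H

bestPerf = Inf;
for H = 1:Hub
    for trial = 1:10              % 10 random initializations per H
        net = patternnet(H);
        net = configure(net, x, t);       % re-randomize the weights
        [net, tr] = train(net, x, t);
        if tr.best_vperf < bestPerf       % keep best validation result
            bestPerf = tr.best_vperf;
            bestNet  = net;
        end
    end
end
```

The training record tr returned by train holds the per-epoch performance used here to rank the candidate designs.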
Hope this helps.
Thank you for formally accepting my answer
Greg
