Yet another ANN: net.numInputs / target / weight estimates question
Dear all,
I am trying to code a simple, supervised feed-forward network using the MATLAB net = network(...) function rather than net = feedforwardnet(...). The basic architecture uses 7 (1-dimensional) features (say: size, weight, colour, etc.), one hidden layer of n neurons, and tries to predict one output. Two biases are added: one to the hidden layer and one to the output layer. Inputs are normalized (mapminmax(...)) while outputs are not.
I use the following code:
nb_inputs_sources = 7; % number of input sources (one source per feature)
nb_layers = 2;
data_size = size(DBase,1); % DBase is a 233x10 matrix containing all the data (feature vectors + target)
net = network(nb_inputs_sources, nb_layers, [1;1], [ones(1,nb_inputs_sources); zeros(1,nb_inputs_sources)], [0,0; 1,0], [0,1]);
net.layers{1}.transferFcn = 'tansig'; % transfer function
%rename layers 1 & 2:
net.layers{1}.name= 'Hidden';
net.layers{2}.name= 'Output';
net.layers{1}.size = nb_neurons; % number of hidden neurons (nb_neurons must be defined beforehand)
net.layers{1}.initFcn = 'initnw'; %'initnw' stands for the Nguyen-Widrow layer initialization function.
net.layers{2}.initFcn = 'initnw'; % same init.
net.inputs{1}.processFcns = {'removeconstantrows','mapminmax'};
%Set the size (number of elements) of each input:
for i=1:nb_inputs_sources
net.inputs{i}.size = data_size;
end
net.divideFcn = 'dividerand'; % random split into training/validation/test sets
net.trainFcn = 'trainlm'; % Levenberg-Marquardt training
net.performFcn = 'mse'; % mean squared error as performance function
% Adaption
net.adaptFcn = 'adaptwb'; %name of the function used to update weights
for i=1:nb_inputs_sources
net.inputWeights{1,i}.learnFcn = 'learngdm'; %Gradient descent
end
net.layerWeights{2,1}.learnFcn = 'learngdm'; %Gradient descent (hidden-to-output weights, the only layer connection here)
for i=1:nb_layers
net.biases{i}.learnFcn = 'learngdm';
end
My data is stored in the 233x10 matrix DBase, where columns 2:8 hold the 7 features and column 10 holds the target; each column contains 233 samples. At first, I launched the training:
train(net, DBase(:,2:8), DBase(:,10));
and ran into the typical error:
Number of inputs does not match net.numInputs.
From several forum entries, I understood that the number of inputs is not directly related to the number of features, but rather to the number of data sources plugged into the various layers. So I changed the network architecture to a single input that carries all 7 features:
net.numInputs = 1;
net.inputs{1}.size = 7;
[net, tr, y, e] = train(net, DBase(:,2:8)', DBase(:,10)');
And this seems to work. But I have a couple of questions/issues:
- Is this the correct way to build a simple feed-forward NN with the network(...) function? And is the train(...) function used properly?
- As I train the network, the predicted values I get are in the [0,1] range, while the real output values rather lie in the [1,30] interval. The mapminmax normalization makes all the inputs range over the [0,1] interval, but I feel that I'm missing something (see the sketch after this list).
- I could not figure out whether the weights returned after training are the 'best' estimates (i.e. the ones estimated at the 'best epoch', where both the validation and training sets show low errors, for instance).
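For reference, this is roughly what I have in mind for those last two points (an untested sketch; I am assuming that adding 'mapminmax' to the output processFcns makes train(...) normalize the targets internally and un-normalize the outputs, and that the training record tr carries the usual best-epoch fields):
% Normalize the targets too, so the output layer is not asked to produce
% raw values in the [1,30] range (assumption: output processFcns are
% applied to the targets during training and reversed on the outputs)
net.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
[net, tr] = train(net, DBase(:,2:8)', DBase(:,10)');
% Inspect the training record to see which epoch was kept
tr.best_epoch   % epoch with the lowest validation error
tr.best_perf    % training error at that epoch
tr.best_vperf   % validation error at that epoch
tr.best_tperf   % test error at that epoch
% Predictions should now come back in the original target units
y = sim(net, DBase(:,2:8)');
Is that the right way to check which weights were kept?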
Thank you very much.
Answers (1)
Greg Heath
on 20 May 2016
You can rewrite your program in about 10 lines if you take advantage of defaults. See the code in
help fitnet % For regression/curve-fitting
doc fitnet
or
help patternnet % For classification/pattern-recognition
doc patternnet
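For your regression problem that would be something like this (a sketch, not run; it assumes your DBase layout with features in columns 2:8 and the target in column 10, and an arbitrary 10 hidden neurons):
x = DBase(:,2:8)';            % 7 x 233 inputs (columns = samples)
t = DBase(:,10)';             % 1 x 233 targets
net1 = fitnet(10);            % 10 hidden neurons; tansig/purelin, trainlm,
                              % dividerand and mapminmax are the defaults
[net1, tr] = train(net1, x, t);
y = net1(x);
perf = perform(net1, t, y)    % mse by default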
After you get a working net, net1, try to duplicate it using
net2 = network;
Use the command
net1 = net1 % No semicolon
to see how to put net2 together.
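As a rough illustration of what that looks like (the property values below are just typical fitnet defaults from memory; use whatever the net1 display actually shows):
net2 = network;                          % empty custom network
net2.numInputs     = 1;
net2.numLayers     = 2;
net2.biasConnect   = [1; 1];
net2.inputConnect  = [1; 0];
net2.layerConnect  = [0 0; 1 0];
net2.outputConnect = [0 1];
net2.inputs{1}.processFcns  = {'removeconstantrows','mapminmax'};
net2.outputs{2}.processFcns = {'removeconstantrows','mapminmax'};
net2.layers{1}.size         = 10;
net2.layers{1}.transferFcn  = 'tansig';
net2.layers{1}.initFcn      = 'initnw';
net2.layers{2}.transferFcn  = 'purelin';
net2.layers{2}.initFcn      = 'initnw';
net2.initFcn    = 'initlay';
net2.divideFcn  = 'dividerand';
net2.trainFcn   = 'trainlm';
net2.performFcn = 'mse';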
Hope this helps.
Greg