Output of neural network is offset and scaled.. help!?
Søren Jensen on 28 Apr 2015
Answered: Søren Jensen on 29 Apr 2015
I am trying to simulate the outputs of a neural network myself, for later translation to Java so I can run it on a mobile device. For this I wrote the following simulation code for a network with two hidden layers and a tangent-sigmoid (tansig) nonlinearity at every layer:
function [ Results ] = sim_net( net, input )
y1 = tansig(net.IW{1} * input + net.b{1});
y2 = tansig(net.LW{2} * y1 + net.b{2});
Results = tansig(net.LW{6} * y2 + net.b{3});
end
The sim_net function is then compared against MATLAB's own sim function using the following code:
clc
clear all
net = feedforwardnet([20 20]);
net.divideParam.trainRatio = 75/100; % Adjust as desired
net.divideParam.valRatio = 15/100; % Adjust as desired
net.divideParam.testRatio = 10/100; % Adjust as desired
net.inputs{1}.processFcns = {}; % no preprocessing
net.outputs{2}.processFcns = {};
net.layers{1}.transferFcn = 'tansig';
net.layers{2}.transferFcn = 'tansig';
net.layers{3}.transferFcn = 'tansig';
% Train and Apply Network
[x,t] = simplefit_dataset;
[net,tr] = train(net,x,t);
for i=1:length(x)
disp(i) = sim_net(net,x(i));
disp2(i) = sim(net,x(i));
end
plot(disp)
hold on
plot(disp2)
legend('our code','matlabs code')
The plot of the two outputs: [figure not shown]
However, a quick inspection using the following edit reveals that MATLAB's results are offset by 5 and also scaled by a factor of 5:
plot(disp)
hold on
plot((disp2-5)/5+0.1)
legend('our code','matlabs code')
However, shouldn't MATLAB's network be unable to produce values above 1 at all when tansig is the final activation function?
1 comment
Greg Heath on 28 Apr 2015
One hidden layer is sufficient since it is also a universal approximator.
The fewer weights that are used, the more robust the design.
Accepted Answer
Greg Heath on 28 Apr 2015
Edited: Greg Heath on 28 Apr 2015
The function input should be the weights, instead of the whole net.
Is LW{6} a typo?
One hidden layer is sufficient since it is a universal approximator.
Since the function is smooth with four local extrema, only four hidden nodes are necessary.
The fewer weights that are used, the more robust the design. Also, this will make your java coding much easier.
Please use the notation h to denote the output from a hidden layer.
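Following these suggestions, a reduced single-hidden-layer version of sim_net might look like this. This is only a sketch: it assumes the network was created with feedforwardnet(4), that tansig is used at both layers, and that all input/output processing functions have been removed, so the raw weighted sums are the whole story:

function Results = sim_net( net, input )
% Hand-simulate a single-hidden-layer feedforward network.
% Assumes tansig at both layers and no input/output processing.
h = tansig(net.IW{1,1} * input + net.b{1});    % hidden-layer output
Results = tansig(net.LW{2,1} * h + net.b{2});  % network output
end

With only one weight layer between hidden and output, there is also no ambiguity about which LW index to use.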
Your output can have any scale because of the default normalization/denormalization used within train.
Code should also be faster if you use dummy variables, e.g. IW = net.IW{1,1}, b1 = , b2 = , LW = ..
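The two points above (the default mapminmax normalization inside train, and caching the weights in dummy variables) could be combined along the following lines for the original two-hidden-layer network. This is a sketch, not a drop-in fix: the processSettings indices assume the default processFcns of {'removeconstantrows','mapminmax'} on both input and output, so check them against your actual net:

% Cache weights and normalization settings once (dummy variables).
IW  = net.IW{1,1};   b1 = net.b{1};
LW1 = net.LW{2,1};   b2 = net.b{2};
LW2 = net.LW{3,2};   b3 = net.b{3};
inPS  = net.inputs{1}.processSettings{2};   % input mapminmax settings (assumed index)
outPS = net.outputs{3}.processSettings{2};  % output mapminmax settings (assumed index)

xn = mapminmax('apply', x, inPS);           % normalize input as train did
h1 = tansig(IW  * xn + b1);
h2 = tansig(LW1 * h1 + b2);
yn = tansig(LW2 * h2 + b3);                 % normalized output, in [-1, 1]
y  = mapminmax('reverse', yn, outPS);       % undo output normalization

The 'reverse' step is what maps the tansig output from [-1, 1] back to the scale of the training targets, which is why the network can return values above 1 even with a tansig output layer.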
Thank you for formally accepting my answer.
Greg
0 comments
More Answers (1)