Bounded neural network output
Kilin on 12 Nov 2014
Commented: Kilin on 22 Nov 2014
Hi guys,
I have a couple of questions about neural networks. I want to define a neural network whose outputs are bounded to a user-defined range. How can I do that?
The first thing I tried was changing the output transfer function to tansig, expecting the output to lie in the range [-1, 1]. Well, it doesn't. Any idea why?
Then, during some tweaking, I tried setting all the weights and biases of the net to zero, expecting at least to see a zero output. Again, the output is 1.5. Why? What am I missing?
Here is the code I've experimented with.
HIDDEN_LAYER_SIZE = 10;
net = fitnet(HIDDEN_LAYER_SIZE);
% cheating to set input and output dimensions
x = [1 1 1; 2 2 2]'; t = [1 1; 2 2]';
net = configure(net, x, t);
net.layers{2}.transferFcn = 'tansig';
display('TANSIG OUTPUT');
net([1 1 1]') % expecting something in [-1, 1]
display('TANSIG OUTPUT WITH ALL ZERO W and b');
net = setwb(net,zeros(1,net.numWeightElements));
net([1 1 1]') % expecting 0
Output:
TANSIG OUTPUT
ans =
1.0254
1.0338
TANSIG OUTPUT WITH ALL ZERO W and b
ans =
1.5000
1.5000
Any help will be highly appreciated!
All the best,
Francesco
0 comments
Accepted Answer
Greg Heath on 13 Nov 2014
clear all, clc
H = 10;
net = fitnet(H);
b1 = net.b{1} % zeros(10,1)
IW = net.IW{1,1} % Empty matrix: 10-by-0
b2 = net.b{2} % Empty matrix: 0-by-1
LW = net.LW{2,1} % Empty matrix: 0-by-10
% cheating to set input and output dimensions
%% CHEATING?? I have no idea what that is supposed to mean
x = [1 1 1; 2 2 2]'; t = [1 1; 2 2]';
%% i.e., x = repmat([1 2],3,1); t = repmat([1 2],2,1)
[ I N ] = size(x) % [ 3 2 ]
[ O N ] = size(t) % [ 2 2]
%% When configured, the number of weights will be
Nw = (I+1)*H+(H+1)*O %62
net = configure(net, x, t);
b1 = net.b{1} % size(b1) = [10 1 ]
IW = net.IW{1,1} % size(IW) = [10 3 ]
b2 = net.b{2} % size(b2) = [ 2 1 ]
LW = net.LW{2,1} % size(LW) = [ 2 10]
%% With the default 'purelin' output
y = net(x) %[ 1.714 0.80213; 0.59833 2.813 ]
%% With the 'tansig' output replacement % expecting something in [-1, 1]
net.layers{2}.transferFcn = 'tansig';
y = net(x) %[ 1.7019 1.0578 ; 1.0264 1.9948 ]
1. Ordinarily the net is trained with x and t to try to give an output y with a small error e = t-y. None of these quantities are necessarily bounded in range.
2. Now, even though you configured the net with random weights, you never trained the net. Therefore, your error e = t - y = [ -0.7019 0.9422; -0.02642 0.005209 ] is not small.
3. By default, the net transforms the target to fit in [-1, 1]. The corresponding normalized output, yn, then fits in [-1, 1]. However, by default, the net automatically reverses the transform to obtain y from yn, so the final output is not bounded to [-1, 1].
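As an illustration (a minimal sketch, not part of the original answer, assuming fitnet's default mapminmax target processing), the reverse transform also explains the 1.5 seen earlier: with all-zero weights the normalized output is tansig(0) = 0, and reversing the transform maps 0 to the midpoint of each target row's range, here (1+2)/2 = 1.5.
t = [1 1; 2 2]';                   % each target row spans [1, 2]
[tn, ts] = mapminmax(t);           % default target transform onto [-1, 1]
yn = [0; 0];                       % normalized output when all weights are zero
y  = mapminmax('reverse', yn, ts)  % reverse transform: 0 -> 1.5 for each row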
Hope this helps.
Thank you for formally accepting my answer
Greg
3 comments
Greg Heath on 14 Nov 2014
1. Configuring with tansig AND a target bounded in [-1, 1] are both necessary for the output to be bounded in [-1, 1].
2. When all weights are zero, the nonzero output must come from de-normalization.
clear all, clc
rng('default')
H = 10;
net = fitnet(H);
x = [1 1 1; 2 2 2]'; t = [1 1; 2 2]';
net = configure(net, x, t);
y1 = net(x) % [ 0.9252 1.1196 ; 2.916 2.3395]
net.layers{2}.transferFcn = 'tansig';
y2 = net(x) % [ 1.0912 1.1792 ; 1.9965 1.9664 ]
net = configure(net, x, t);
y3 = net(x) % [ 1.9323 1.0115 ; 1.8776 1.7376 ]
net = fitnet(H);
x = [1 1 1; 2 2 2]'; t = [-1 -0.5; 0.5 1]';
net = configure(net, x, t);
y1 = net(x) % [ 1.5652 -0.18302 ; 2.1597 0.98423]
net.layers{2}.transferFcn = 'tansig';
y2 = net(x) % [ 0.48824 -0.1832 ; 0.99084 0.81447]
net = configure(net, x, t);
y3 = net(x) % [ -0.96228 -0.91735 ; -0.3731 -0.45783 ]
net = setwb(net,zeros(1,net.numWeightElements));
b1 = net.b{1} % zeros( 10, 1 )
IW = net.IW{1,1} % zeros( 10, 3 )
b2 = net.b{2} % zeros( 2, 1 )
LW = net.LW{2,1} % zeros( 2 , 10 )
y4 = net(x) % [ -0.25 -0.25 ; 0.25 0.25 ]
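% With all-zero weights the normalized output is tansig(0) = 0; the reverse
% mapminmax transform maps 0 to the midpoint of each target row's range:
% (-1+0.5)/2 = -0.25 and (-0.5+1)/2 = 0.25, which matches y4 above.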
net = configure(net, x, t);
y5 = net(x) % [ -0.16752 -0.99733 ; 0.846 0.77261 ]
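To directly answer the original question, here is a hedged sketch (not from the thread) of one way to get an output bounded in a user-defined range [a, b]: keep tansig on the output layer, disable the default output processing so de-normalization cannot push the output outside [-1, 1], then rescale linearly. The bounds a and b below are example values.
x = [1 1 1; 2 2 2]'; t = [1 1; 2 2]';
a = -2; b = 3;                          % example user-defined bounds
net = fitnet(10);
net.layers{2}.transferFcn = 'tansig';   % raw output layer range is (-1, 1)
net.outputs{2}.processFcns = {};        % no mapminmax de-normalization of the output
net = configure(net, x, t);
y = a + (b - a)*(net(x) + 1)/2          % linear map from (-1, 1) into (a, b)
Note that with output processing disabled, the net is trained against the raw targets, so the targets should themselves be scaled into the reachable range before training.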
More Answers (0)