Use an arbitrary error in the training procedure of a neural network

I need to feed an arbitrary error to a neural network during the training procedure.
When we have input-output data of a black box and are modeling the black box with a net, 'trainlm' (for example) can be used. The error that propagates back through the net is the mse of the calculated outputs and the targets. Now assume we have a system consisting of two parts: a black box and a known part. I need to estimate the behavior of the black-box part of the system using a neural network. There is no input-output data for the net itself, but we do have input-output data for the overall system. Therefore we can calculate an error = mse("outputs of the system" - "targets of the system") that should be fed to the training algorithm as the training error. How can I do that?
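A minimal sketch of the quantity I mean, using hypothetical variable names systemOutputs and systemTargets for the measured data of the whole system:

eSystem = systemTargets - systemOutputs;   % per-sample error of the whole system (hypothetical names)
perf    = mean(eSystem(:).^2);             % the mse that should drive the training of the net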
Thanks for your time,

Accepted Answer

Greg Heath on 2 Feb 2012
Go back to the derivation of the change in weights for the network output net(x) and modify it for whichever topology you are contemplating:

x : input
y : output
t : target
y0 = f0(x) : known part

Parallel: y = f0(x) + net(x),   e = (t - f0(x)) - net(x)
Series 1: y = net(f0(x)),       e = t - net(f0(x))
Series 2: y = f0(net(x)),       e = t - f0(net(x))

In every case the gradient of the squared error follows from

grad(e.^2) = 2*e.*grad(e)
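For the parallel topology this error is just an ordinary fitting error against the residual target t - f0(x), so a standard net can be trained on that residual without a custom performance function. A minimal sketch, using a synthetic known part and synthetic system data purely for illustration (none of these values come from the original post):

f0 = @(x) 2*x + 1;              % assumed known part f0(x), chosen arbitrarily
x  = rand(1, 200);              % example inputs
t  = f0(x) + sin(3*x);          % synthetic system targets; sin(3*x) stands in for the black box

residual = t - f0(x);           % training target for the black-box part

net = fitnet(10, 'trainlm');    % ordinary feedforward fitting net
net = train(net, x, residual);  % minimizing mse(residual - net(x)) is exactly e above

ySystem = f0(x) + net(x);       % modeled output of the whole system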
Hope this helps.
Greg
3 comments
Greg Heath on 5 Feb 2012
Sorry, I am not familiar with the MATLAB source code of the advanced training functions like LM and CG. If you are not familiar either, try the simplest code(s) like TRAINGD first.
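A small sketch of that suggestion, assuming an already-created network object net and placeholder data inputs/targets (both hypothetical here):

type traingd                        % inspect the plain gradient-descent code, if available as an m-file
net.trainFcn = 'traingd';           % switch the net to simple gradient descent
net = train(net, inputs, targets);  % retrain with the simpler algorithm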
Hope this helps.
Greg
Greg Heath on 5 Feb 2012
Which versions of MATLAB and the NNTB do you have?
Which topology are you concerned about?
Greg


More Answers (1)

haMed on 9 Feb 2012
Dear Greg,
I use MATLAB 2010b, and the topology is http://i44.tinypic.com/ofofmd.png.
I have to change the performance function. Performance functions in MATLAB should be written following the template "template_performance.m"; I am using "mse.m" as the template.
There are two functions, "performance_y" and "dperf_dy", in "mse.m" that must be changed (I think!).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function e = adjusted_errors(net,t,y,ew,param)
% Raw errors t - y, adjusted for error weights and don't-care targets.
e = gsubtract(t,y);
e = adjust_error(net,e,ew,param);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function [perfy,n] = performance_y(net,t,y,ew,param)
% Performance = mean squared (adjusted) error, scaled by the
% non-regularized fraction of the performance.
e = adjusted_errors(net,t,y,ew,param);
[perfy,n] = meansqr(e);
perfy = perfy * (1-param.regularization);
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
function d = dperf_dy(net,t,y,ew,perf,param)
% Derivative of the performance with respect to the outputs y:
% d(mse)/dy = (-2/n) * e, with non-finite (don't-care) errors zeroed.
e = adjusted_errors(net,t,y,ew,param);
d = cell(size(e));
n = 0;
for i=1:numel(e)
    di = e{i};
    dontcares = find(~isfinite(di));
    di(dontcares) = 0;
    d{i} = di;
    n = n + numel(di) - length(dontcares);   % count only the finite (cared-about) errors
end
m = (-2/n) * (1-param.regularization);
for i=1:numel(d)
    d{i} = m * d{i};
end
end
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
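A hedged usage sketch, assuming the modified template above is saved as my_mse.m (a hypothetical file name) on the MATLAB path, with inputs/targets as placeholder data:

net = feedforwardnet(5, 'trainlm');   % example network; sizes are placeholders
net.performFcn = 'my_mse';            % point training at the customized performance function
net = train(net, inputs, targets);    % training will now call my_mse for perf and dperf_dy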
3 comments
haMed on 10 Feb 2012
The net is dynamic; 0:1 and 1:4 are the time delays. 5 is the number of hidden neurons; the 0s shown in the output layer are actually both 1, and they indicate the number of outputs. Finally, 1 and 4 are the sizes of the inputs: the first input (nu(t)) is a 1x1 matrix and the second one (x(t)) is 4x1.
The output of this network is computed by:
function output = mynetsim(net,Xs,Xi)
% Simulate the custom 2-input dynamic net by hand.
numdi1 = max(net.inputWeights{1,1}.delays);   % largest delay on input 1 (nu)
numdi2 = max(net.inputWeights{1,2}.delays);   % largest delay on input 2 (x)
X = [Xi Xs];                                  % initial delay states followed by current inputs
P1 = cell2mat(X(1,end-numdi1:end))';          % tapped delay line for input 1
P2 = cell2mat(X(2,end-numdi2:end))';          % tapped delay line for input 2
O1 = tansig(net.IW{1,1}*P1 + net.IW{1,2}*P2 + net.b{1});   % hidden layer (tansig)
O2 = net.IW{2,1}*P1(end) + net.LW{2,1}*O1 + net.b{2};      % output layer: direct input connection plus hidden contribution
output = O2;
end
haMed on 10 Feb 2012
The problem is that I don't know which part of mse.m should be changed.
haMed :)

