Predicted outputs appear delayed (one step behind) when closing the loop in a NARX network
I applied my code to the simplenarx_dataset data. To do this I performed the following steps:
1 - I computed the autocorrelation and cross-correlation and looked at the significant peaks to see which lags give the most information. This gave ID = 1, FD = 1.
2 - I chose the number of hidden nodes, H = 5.
3 - I created the network and evaluated the training details. The purpose of this post is not to evaluate those details but to understand why the response appears delayed when running the closed loop; even so, I post the details and the code in case there is some other error. My code is as follows (I used 80 points for training the network and 20 to check with the closed loop):
p=p';
t=t';
p1=p(1:1,1:80);
p2=p(1:1,81:end);
t1=t(1,1:80);
t2=t(1,81:end);
inputSeries = tonndata(p1,true,false);
targetSeries = tonndata(t1,true,false);
inputDelays = 1:1;
feedbackDelays = 1:1;
hiddenLayerSize = 5;
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);
[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);
net.divideFcn='divideblock';
net.divideParam.trainRatio=0.70;
net.divideParam.valRatio=0.15;
net.divideParam.testRatio=0.15;
[I N]=size(p1);
[O N]=size(t1);
N=N-1;
Neq=N*O;
ID=1;
FD=1;
Nw = (ID*I+FD*O+1)*hiddenLayerSize+(hiddenLayerSize+1)*O;
Ntrneq = N -2*round(0.15*N);
Ndof=Ntrneq-Nw;
ttotal=t1(1,1:N);
MSE00=mean(var(ttotal,1));
MSE00a=mean(var(ttotal,0));
t3=t(1,1:N);
[trainInd,valInd,testInd] = divideblock(N,0.7,0.15,0.15); % divideblock takes the number of samples and returns index sets
ttrn = t3(trainInd); tval = t3(valInd); ttst = t3(testInd); % variances must be of the data, not of the indices
MSEtrn00=mean(var(ttrn,1));
MSEtrn00a=mean(var(ttrn,0));
MSEval00=mean(var(tval,1));
MSEtst00=mean(var(ttst,1));
net.trainParam.goal = 0.01*Ndof*MSEtrn00a/Ntrneq;
[net,tr,Ys,Es,Xf,Af] = train(net,inputs,targets,inputStates,layerStates);
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
MSE = perform(net,targets,outputs);
MSEa=Neq*MSE/(Neq-Nw);
R2=1-MSE/MSE00;
R2a=1-MSEa/MSE00a;
MSEtrn=tr.perf(end);
MSEval=tr.vperf(end);
MSEtst=tr.tperf(end);
R2trn=1-MSEtrn/MSEtrn00;
R2trna=1-MSEtrn/MSEtrn00a;
R2val=1-MSEval/MSEval00;
R2tst=1-MSEtst/MSEtst00;
and my results are:
ID=1
FD=1
H=5
N=79
Ndof=34
Neq=79
Ntrneq=55
Nw=21
O=1
I=1
R2=0.8036
R2a=0.7347
R2trn=0.8763
R2trna=0.8786
R2val=0.7862
R2tst=0.7541
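As a cross-check of these numbers, the counts follow directly from the formulas in the MATLAB code above. A small Python sketch (illustrative only, not part of the original code):

```python
# Cross-check of the network bookkeeping reported above.
I, O = 1, 1           # input and output dimensions
ID, FD = 1, 1         # number of input and feedback delays
H = 5                 # hidden layer size
N = 80 - 1            # 80 training points minus one delay = 79 usable equations

Neq = N * O                                # number of training equations
Nw = (ID*I + FD*O + 1)*H + (H + 1)*O       # number of weights to estimate
Ntrneq = N - 2*round(0.15*N)               # equations left after val/test split
Ndof = Ntrneq - Nw                         # estimation degrees of freedom

print(Neq, Nw, Ntrneq, Ndof)  # prints: 79 21 55 34
```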
As I mentioned earlier, I will not focus much on accuracy here; I will do that later. The code I applied for the closed loop was:
netc = closeloop(net);
netc.name = [net.name ' - Closed Loop'];
view(netc)
NumberOfPredictions = 15;
s=cell2mat(inputSeries);
t4=cell2mat(targetSeries);
a=s(1:1,79:80);
b=p2(1:1,1:15);
newInputSeries=[a b];
c=t4(1,80);
d=nan(1,16);
newTargetSet=[c d];
newInputSeries=tonndata(newInputSeries,true,false);
newTargetSet=tonndata(newTargetSet,true,false);
[xc,xic,aic,tc] = preparets(netc,newInputSeries,{},newTargetSet);
yPredicted = sim(netc,xc,xic,aic);
w=cell2mat(yPredicted);
plot(t2,'r','DisplayName','verification targets')
hold on
plot(w,'b','DisplayName','iterated outputs')
title('ITERATIONS')
legend('show')
hold off
and the result was the chart at the link below:
In this picture the blue line (the predicted outputs) lags behind the red line (the real targets). I would like to know what I can do so that the blue line leads the red line, that is, so the output comes one step early. As I said, in this post I want to focus on why this happens and how I can fix it.
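To make the effect concrete, here is a minimal Python sketch (illustrative only, independent of the MATLAB above): if a closed-loop net has effectively learned the persistence mapping y(t) ≈ y(t-1), its plotted prediction trails the target by exactly one step, which is precisely the "delayed blue line" described here.

```python
# Illustration: a persistence model makes the prediction curve look like
# the target curve shifted one step to the right.
import math

target = [math.sin(0.3*k) for k in range(40)]
prediction = [target[0]] + target[:-1]      # persistence: y_hat(t) = y(t-1)

# Raw comparison has error; shifting the prediction back by one step
# makes it match the target exactly.
err_raw     = sum((p - t)**2 for p, t in zip(prediction, target))
err_shifted = sum((p - t)**2 for p, t in zip(prediction[1:], target[:-1]))
print(err_raw > 0, err_shifted == 0.0)      # prints: True True
```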
Thank you very much.
Accepted Answer
More Answers (2)
Greg Heath, 30 Mar 2013 (edited 7 Apr 2013)
1. I do not have autocorr or parcorr, AND your plots are too fuzzy for me to understand.
2. Since training parameters should only depend on training data, I use ztrn = zscore(ttrn,1), autocorrt = nncorr(ztrn,ztrn,Ntrn-1,'biased') and find a 95% confidence level for significant auto AND cross-correlations with absolute values >= 0.14 by repeating 100 times and averaging:
a. Crosscorrelate ztrn with n = zscore(randn(1,Ntrn),1)
b. Correct the nncorr symmetry bug by concatenating: crosscorrzn = [ crosscorrnz(1:Ntrn-1) crosscorrzn(Ntrn:end) ]
c. Sort the absolute values and find the significance threshold as the value at index floor(0.95*(2*Ntrn-1))
3. You used crosscorrxt instead of crosscorrtx. You found the max of crosscorrxt instead of all absolute values >= the significance threshold for nonnegative lags (indices >= Ntrn) of crosscorrtx. THIS MAY ACCOUNT FOR YOUR ERRONEOUS SHIFT.
4. The ideal lags are ID = 1, FD = 1,2
5. To determine Hub (the upper bound for H), use Ntrneq, NID and NFD (see your equation for Nw).
6. I typically find Ntrials = 5 to be too small
7. You are using divideblock inside the inner loop. Since the data division doesn't change, use divideblock before the outer loop to obtain the indices and the data splits ttrn, tval, ttst, etc.
8. Inside the inner loop just set net.divideFcn = 'divideind' and assign the indices to net.divideParam.trainInd etc.
9. Continuously find and save the best net within the inner loop after you have R2val to compare with the current R2valmax. Then when you exit the outer loop you will already have the best net (and all of its statistics) to use for the closeloop stage.
10. Test the netc on all of the data and overlay plot yc on tsc.
11. Train netc and compare trn/val/tst results with the openloop solution and the untrained closeloop solution.
12. Unfortunately, I have found that a good closeloop solution depends very critically on the choices of ID, FD and H. Just accepting the defaults is very suboptimal.
Hope this helps.
*Thank you for formally accepting my answer*
Greg
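Steps 2a-2c above can be sketched as follows. This is a plain-Python illustration of the Monte-Carlo significance threshold (the series standing in for ztrn is hypothetical, and nncorr is replaced here by a hand-rolled biased cross-correlation, so the resulting threshold will not match Greg's 0.14 exactly):

```python
# Monte-Carlo estimate of the 95% significance level for cross-correlations:
# crosscorrelate the standardized training target with standardized white
# noise, take the 95th-percentile absolute correlation, average over trials.
import math, random

def zscore(x):
    m = sum(x) / len(x)
    s = math.sqrt(sum((v - m)**2 for v in x) / len(x))  # biased, like zscore(x,1)
    return [(v - m) / s for v in x]

def biased_xcorr(a, b):
    # biased cross-correlation at lags -(N-1)..(N-1), normalized by N
    n = len(a)
    return [sum(a[i] * b[i + lag] for i in range(max(0, -lag), min(n, n - lag))) / n
            for lag in range(-(n - 1), n)]

random.seed(0)
Ntrn = 55
ztrn = zscore([math.sin(0.3*k) for k in range(Ntrn)])   # stand-in for the real ztrn

thresholds = []
for _ in range(100):                                    # step 2: repeat 100 times
    noise = zscore([random.gauss(0, 1) for _ in range(Ntrn)])   # step 2a
    cc = sorted(abs(c) for c in biased_xcorr(ztrn, noise))      # step 2c: sort |cc|
    thresholds.append(cc[math.floor(0.95 * (2*Ntrn - 1)) - 1])  # 95% index (1-based)
threshold = sum(thresholds) / len(thresholds)           # average over trials
print(round(threshold, 2))
```

Only correlations whose absolute value exceeds this threshold (at nonnegative lags) would then be treated as significant when choosing ID and FD.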
1 comment
FRANCISCO, 4 Apr 2013
Greg Heath, 7 Apr 2013
1. Your use of the words pump, rejections, cookies and futulos (not English) is not appropriate and somewhat confusing.
2. Once you have an openloop solution y, then netc = closeloop(net); followed by preparets and yc = netc(Xsc,Xic,Aic) should yield your closeloop output.
3. If yc and y differ substantially, you can either continue training the openloop configuration and repeat, OR directly train netc.
4. As long as ID > 0, you have a predictive net. So, I think you are misinterpreting your output.
2 comments
Constantin Silkin, 19 Feb 2017
I'm using NARX and have exactly the same problem. The network does not actually carry out a prediction; instead it repeats a slightly changed previous value. Nevertheless, I could organize a prediction: I prolong TargetSeries and InputSeries by duplicating their last values. In effect, I give the network its own naive forecast on the principle that "tomorrow will be the same as today"; the network then processes this naive forecast according to its own knowledge and corrects it. The accuracy of such a prediction is about 70%. Am I right?
Greg Heath, 19 Feb 2017
Sorry, I do not understand.
Greg