How to forecast with a neural network?

Goryn on 14 Jun 2011
Commented: Eric on 18 Mar 2016
I'm using MATLAB R2011a. I'm trying to predict the next 100 points of a time series X by means of a neural net. First, I create the input time series Xtra and the feedback time series Ytra:
lag = 50;
Xu = windowize(X,1:lag+1); %Re-arrange the data points into a Hankel matrix
Xtra = Xu(:,1:lag); %input time series
Ytra = Xu(:,end); %feedback time series
Then I train the neural net with this code:
inputSeries = tonndata(Xtra,false,false);
targetSeries = tonndata(Ytra,false,false);
% Create a Nonlinear Autoregressive Network with External Input
inputDelays = 1:2;
feedbackDelays = 1:2;
hiddenLayerSize = 10;
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize);
% Prepare the Data for Training and Simulation
[inputs,inputStates,layerStates,targets] = preparets(net,inputSeries,{},targetSeries);
% Setup Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Train the Network
[net,tr] = train(net,inputs,targets,inputStates,layerStates);
% Test the Network
outputs = net(inputs,inputStates,layerStates);
errors = gsubtract(targets,outputs);
performance = perform(net,targets,outputs)
Then I would like to predict the next 100 points of my initial time series X. What should I do?
Thanks in advance for your answers.

Accepted Answer

Greg Heath on 22 Oct 2014
0. Incorrect use of the word 'lag'.
1. It is rare that the default input parameters (ID, FD, H) are sufficient. They can be improved by using a subset of significant lags determined from the auto- and cross-correlation functions and then searching over a range of H values. The smallest acceptable value of H should be used.
2. The default 'dividerand' should be overridden (e.g., with 'divideblock') to preserve the effectiveness of the significant correlation lags found in 1.
3. Train using the syntax
[net tr Ys Es Xf Af] = train(net,Xs,Ts,Xi,Ai);
so that Xf and Af can be used as initial conditions for the continuation data.
4. After closing the loop, test the closed-loop (CL) net on the original data. If its performance is not good compared to the open-loop (OL) performance, train the CL net starting from the weights obtained from the OL training.
5. Since you only have one series, you should have used NARNET. To continue beyond the original data (see the sketch below):
Xnew = net(NaN(1,100),Xf,Af);
Hope this helps
Greg
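For readers looking for the concrete syntax behind steps 2-5, here is a minimal NARNET sketch that follows the pattern documented for narnet/closeloop; the feedback delays 1:2, the hidden layer size 10, and the cell-array series X are illustrative assumptions, not values prescribed above:
% X is the original series as a 1-by-N cell array, e.g. X = num2cell(x);
FD = 1:2;                        % feedback delays (ideally a subset of significant autocorrelation lags, step 1)
H  = 10;                         % hidden layer size (use the smallest acceptable value, step 1)
net = narnet(FD,H);
net.divideFcn = 'divideblock';   % override 'dividerand' as in step 2
[Xs,Xi,Ai,Ts] = preparets(net,{},{},X);          % NARNET has no external input
[net,tr,Ys,Es,Xf,Af] = train(net,Xs,Ts,Xi,Ai);   % keep the final states Xf, Af (step 3)
[netc,Xic,Aic] = closeloop(net,Xf,Af);           % close the loop, converting the final states (steps 4-5)
Xnew = netc(cell(0,100),Xic,Aic);                % 100 predictions beyond the original data
Before trusting Xnew, it is worth re-running preparets with netc on the original series and comparing the closed-loop output with the open-loop one, as step 4 recommends.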
  1 comment
Eric on 18 Mar 2016
Hello, Greg! Regarding your answer above...
Could you please write the syntax for preparets as you did for train in (3)?
I want to use NARNET to predict the USD price beyond the original data.
I'd like to try your method: Xnew = net(NaN(1,100),Xf,Af);
Thanks!


More Answers (6)

Mark Hudson Beale on 14 Jun 2011
You can convert the NARXNET from open-loop to closed-loop form to predict ahead any number of timesteps for which you have data for your input time series.
If you want to predict N steps ahead then:
% define N+2 timesteps of inputs
Xtra_predict = { ... };
% define N+2 feedback values, but ONLY the first 2 need to be
% defined; the rest can be NaN (i.e. unknown) values. The N unknown
% values will be the N predictions the network will make. You could
% use the last 2 values of Ytra if you want to make the prediction directly
% after your training data series.
Ttra_predict = { ... };
netCL = closeloop(net);
[X,Xi,Ai,T] = preparets(netCL,Xtra_predict,{},Ttra_predict);
y = sim(netCL,X,Xi,Ai)
After this code is run, y will contain the N predictions based on the input series Xtra_predict and the initial two steps of Ttra_predict.
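To make the two placeholder cell arrays above concrete, here is a hedged sketch assuming the delays 1:2 from the question and reusing the last two training targets from Ytra as the two known feedback values; Xfuture is a random stand-in for the N+2 future input values (in practice you would supply the actual known future inputs):
N = 100;                                          % number of steps to predict
Xfuture = randn(1,N+2);                           % stand-in for N+2 known future input values
Xtra_predict = num2cell(Xfuture);                 % 1-by-(N+2) cell array of inputs
Ttra_predict = [num2cell(Ytra(end-1:end)') num2cell(NaN(1,N))];  % 2 known feedback values + N unknowns
netCL = closeloop(net);
[Xp,Xi,Ai] = preparets(netCL,Xtra_predict,{},Ttra_predict);
y = sim(netCL,Xp,Xi,Ai);                          % y holds the N predictions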

Jack on 4 Sep 2011
Hi all,
I have the same problem and have looked everywhere to find an answer... :-((
How would you define the N+2 timesteps of inputs here? What would the code look like?
Xtra_predict = ?????????
Ttra_predict = ??????????
Thank you very much!

Justin on 2 Nov 2011
It doesn't seem that this can be done. I've been looking into it for a few weeks now, and although there are a few interesting responses, in practice they always run into errors pertaining to the number of inputs to the network.

Cristhiano Moreno on 5 Nov 2011
How can I do it with a NAR network? I have a time series with 99 values and I want to predict the 100th value, and finally plot the two series in the same figure.
Can somebody help me?

Vito on 5 Nov 2011
Let's play a game. We throw a coin and try to guess which side it has fallen on. Of course we don't know in advance, and the mathematical expectation of guessing correctly is 0.5.
But after 10 throws we can compute the actual expectation for this game. Over time, we get used to this coin and can learn to guess (predict).
A network works on the same principle. Its principal advantage is that it can remember all the games, so we can obtain the dependence of the expected value on the actual value over some time interval.
Let's create the simplest linear network.
delays = [0 1 2 3 4 5];
pi = {1 2 3 4 5};
net = newlin ([0 1], 1, delays);
A random variable:
GamVar = randsample ([0,1], 1, true, [.7,.3]); % one random draw
Now we will play 100 rounds of 10 throws each.
delays = [0 1 2 3 4 5 ];
pi ={1 2 3 4 5 };
net = newlin([0 1],1,delays);
S=[];
s=1;
t=1;
Me=[];
time=1;
step =1;
GamVar = randsample([0,1],1,true,[.7,.3]);% one random variable
T = con2seq([GamVar]);
P = T;
TM = con2seq(median(cat(2,T{:})));
%
Prize=0;
for mm=1:100
%---------
hold on
c=0;
y=[];
if mm ==1
T={mean(cat(2,T{:}))};
Me=T;
P=T;
TM=T;
PM=TM;
else % accumulate the mean
T={mean([cat(2,T{:}) Me{:}])};
Me=T;
P=T;
TM = T;
PM=TM;
end
GamVar = (randsample([0,1],10,true,[.7,.3]));% 10 random throws for this round
for i = 1:10
GamVarT{i}=GamVar(i);
T{i} = GamVar(i);
P = T;
if i>1
TM ={T{1:i-1} mean(cat(2,TM{1:i-1}))};
else
TM{i}=mean(cat(2,TM{:}));
end
PM=TM;
y= sim(net,PM,pi);
% correction
if any([(round(cat(2,y{1:i})))~=cat(2,GamVarT{1:i})])
TM = T;
% input: the running mean; output: the actual value
[net,y,e]=adapt(net,PM,TM,pi);
plot(i,round(double(y{i})),'or','LineWidth',2,'MarkerSize',10);
c=c+1;
end
time=time+1;
plot(1:size(PM,2),round(double(cat(2,y{:}))),'o--',1:size(P,2),cat(2,T{:}),'-*');
xlabel('Time');
axis([0 i -4 4]);
drawnow
if ~strcmp(get(get(0,'CurrentFigure'),'SelectionType'),'normal')
disp('stop')
break
end
end
disp('Prediction error count')
c
disp('Prediction error rate, % of all samples')
(c/i)*100
S(s)=(c/i)*100;
s=s+1;
mean(S)
if ~strcmp(get(get(0,'CurrentFigure'),'SelectionType'),'normal')
disp('stop')
break
end
close all
disp('Prize')
Prize= Prize+(10-c)-c
step=step+10;
end
We came out ahead! If we keep playing, the score will increase.
This very simple example shows that it is possible to predict over very wide time intervals. Other networks and other settings allow much better results to be achieved. Good luck. The code is for version 7.2.
  2 comments
Cristhiano Moreno on 7 Nov 2011
Great.
But how would I do this with a NAR NN?
Muhammad Hasnat on 15 Jan 2014
I fail to see how this game relates to what was asked.



Vito on 7 Nov 2011
Thanks. :) I only showed the general principle, or one of the possible variants. Building neural systems is still more of an art than a science and relies on a creative approach; a precise formalization is still far off. Perhaps you can offer a solution.
That is why such topics, as an exchange of ideas, are always interesting and extremely useful.
Best regards.
