What are NARX function inputs "X" and "Xi"? What is an example of both?
8 views (last 30 days)
Allow me to preface by saying I am somewhat new to MATLAB and neural networks. Despite this, I have created a NARX function to predict multiple steps ahead; it has been trained and I am happy with the output.
1. Why is the output shorter than the input? I understand there is a shifting timeframe, so does this mean that once the delay is removed I am getting the prediction 1 step ahead? ...hence the difference in length?
2. What form do the inputs to the function take? I can see that X is a cell array of 'input' and 'target', but what are "Xi" and the third argument "~"?
What are 'input delay states', and why a 2x2 cell?
% Xi = 2x2 cell 2, initial 2 input delay states.
% Each Xi{1,ts} = 1xQ matrix, initial states for input #1.
% Each Xi{2,ts} = 1xQ matrix, initial states for input #2.
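For reference, here is my current understanding of what the two arguments might look like for this network (2 inputs, 2 input delays), using made-up numbers and a single series (Q = 1); please correct me if this is wrong:

% X: 2xTS cell, one column per simulated timestep (here TS = 3)
X  = {0.5, 0.6, 0.7 ;    % X{1,ts}: input #1 at timesteps 1..3 (each entry is 1xQ)
      0.1, 0.2, 0.3};    % X{2,ts}: input #2 at timesteps 1..3
% Xi: 2x2 cell, the two timesteps that come *before* X{:,1}
% (column 1 = older delay, column 2 = most recent delay)
Xi = {0.3, 0.4 ;         % input #1 history
      0.2, 0.1};         % input #2 history
[Y,Xf,Af] = myNeuralNetworkFunction(X,Xi,{});   % third argument is ignored (the ~)

Row 1 would be the 'input' series and row 2 the 'target' series, and with Q parallel series each cell entry becomes a 1xQ row vector. The full generated function is pasted below.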
function [Y,Xf,Af] = myNeuralNetworkFunction(X,Xi,~)
%MYNEURALNETWORKFUNCTION neural network simulation function.
%
% Generated by Neural Network Toolbox function genFunction, 11-Aug-2017 08:47:44.
%
% [Y,Xf,Af] = myNeuralNetworkFunction(X,Xi,~) takes these arguments:
%
%   X = 2xTS cell, 2 inputs over TS timesteps
%   Each X{1,ts} = 1xQ matrix, input #1 at timestep ts.
%   Each X{2,ts} = 1xQ matrix, input #2 at timestep ts.
%
%   Xi = 2x2 cell 2, initial 2 input delay states.
%   Each Xi{1,ts} = 1xQ matrix, initial states for input #1.
%   Each Xi{2,ts} = 1xQ matrix, initial states for input #2.
%
%   Ai = 2x0 cell 2, initial 2 layer delay states.
%   Each Ai{1,ts} = 10xQ matrix, initial states for layer #1.
%   Each Ai{2,ts} = 1xQ matrix, initial states for layer #2.
%
% and returns:
%   Y = 1xTS cell of 2 outputs over TS timesteps.
%   Each Y{1,ts} = 1xQ matrix, output #1 at timestep ts.
%
%   Xf = 2x2 cell 2, final 2 input delay states.
%   Each Xf{1,ts} = 1xQ matrix, final states for input #1.
%   Each Xf{2,ts} = 1xQ matrix, final states for input #2.
%
%   Af = 2x0 cell 2, final 0 layer delay states.
%   Each Af{1,ts} = 10xQ matrix, final states for layer #1.
%   Each Af{2,ts} = 1xQ matrix, final states for layer #2.
%
% where Q is number of samples (or series) and TS is the number of timesteps.
%#ok<*RPMT0>
% ===== NEURAL NETWORK CONSTANTS =====
% Input 1
x1_step1.xoffset = 1.6758;
x1_step1.gain = 2.56574727389352;
x1_step1.ymin = -1;
% Input 2
x2_step1.xoffset = 1.6758;
x2_step1.gain = 2.56574727389352;
x2_step1.ymin = -1;
% Layer 1
b1 = [2.5713526297760567196;-2.0096071730491433804;1.219887714138954804;1.5780200416537291108;0.25816093584503307934;0.18074593510612815828;0.70718796247587500936;-0.64421851569067067889;-1.7185815490127793748;2.6543854248661524764];
IW1_1 = [-0.16869542244623322857 -1.1712370140185244249;1.7602248106014910523 1.7561838897145030103;-1.4532469319368439553 -0.20396299675279816466;-1.2259478848587126443 1.442053061998331609;1.0740243238755720068 -1.496438098993799537;0.11179379980948041251 -1.7099632172394532148;-0.013198512334931664786 0.38902590336003639582;-0.33160470518089074643 -0.87688059602713563923;-2.2476574460726266302 -1.0059096087535042141;0.38300707211150541998 -0.2317876417318727178];
IW1_2 = [1.7933907521546201824 1.0452899104755479787;0.45472529234613878746 1.9311747950676203534;-1.2824630670549146405 -0.65160240846466466191;-1.1203828086453961888 -0.67877992281461829727;0.019576524613524378532 -1.5007152873252320724;0.9449462904564566168 0.98542655430178127673;-0.58499269871995085435 2.0323103368093251575;1.4512761139696179757 1.5427372349806167673;1.1522033707836289995 -0.36687974450126964454;1.3760674937190295886 -1.9352496628660595945];
% Layer 2
b2 = -0.27118710073177765274;
LW2_1 = [0.66967935811621492892 0.12797614436784843228 0.43538693605978834311 0.046209366565424604689 0.15444805117195750666 0.53450744499729629933 0.11465064833947560818 1.0837510139972506007 -0.15469602738558474453 -0.41093430476120473838];
% Output 1
y1_step1.ymin = -1;
y1_step1.gain = 2.56574727389352;
y1_step1.xoffset = 1.6758;
% ===== SIMULATION ========
% Format Input Arguments
isCellX = iscell(X);
if ~isCellX
    X = {X};
end
if (nargin < 2), error('Initial input states Xi argument needed.'); end
% Dimensions
TS = size(X,2); % timesteps
if ~isempty(X)
    Q = size(X{1},2); % samples/series
elseif ~isempty(Xi)
    Q = size(Xi{1},2);
else
    Q = 0;
end
% Input 1 Delay States
Xd1 = cell(1,3);
for ts=1:2
    Xd1{ts} = mapminmax_apply(Xi{1,ts},x1_step1);
end
% Input 2 Delay States
Xd2 = cell(1,3);
for ts=1:2
    Xd2{ts} = mapminmax_apply(Xi{2,ts},x2_step1);
end
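% (At this point slots 1 and 2 of Xd1/Xd2 hold the normalized Xi history;
%  slot 3 is still empty and receives the first current input inside the loop.)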
% Allocate Outputs
Y = cell(1,TS);
% Time loop
for ts=1:TS
% Rotating delay state position
xdts = mod(ts+1,3)+1;
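% (xdts cycles 3,1,2,3,... so each new normalized input overwrites the
%  oldest of the three buffered timesteps.)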
% Input 1
Xd1{xdts} = mapminmax_apply(X{1,ts},x1_step1);
% Input 2
Xd2{xdts} = mapminmax_apply(X{2,ts},x2_step1);
% Layer 1
tapdelay1 = cat(1,Xd1{mod(xdts-[1 2]-1,3)+1});
tapdelay2 = cat(1,Xd2{mod(xdts-[1 2]-1,3)+1});
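% (The mod-indexing picks the buffer slots holding delays 1 and 2, i.e. the
%  inputs from the previous two timesteps, and stacks them into a 2xQ matrix
%  so the 10x2 weight matrices IW1_1/IW1_2 can multiply them; the current
%  timestep's input is buffered but not used until later steps.)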
a1 = tansig_apply(repmat(b1,1,Q) + IW1_1*tapdelay1 + IW1_2*tapdelay2);
% Layer 2
a2 = repmat(b2,1,Q) + LW2_1*a1;
% Output 1
Y{1,ts} = mapminmax_reverse(a2,y1_step1);
end
% Final Delay States
finalxts = TS+(1:2);
xits = finalxts(finalxts<=2);
xts = finalxts(finalxts>2)-2;
Xf = [Xi(:,xits) X(:,xts)];
Af = cell(2,0);
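% (Xf simply keeps the columns for the last two timesteps, taken from X, or
%  from Xi when fewer than two new timesteps were supplied, so that a
%  follow-up call can continue the simulation without a gap.)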
% Format Output Arguments
if ~isCellX
    Y = cell2mat(Y);
end
end
% ===== MODULE FUNCTIONS ========
% Map Minimum and Maximum Input Processing Function
function y = mapminmax_apply(x,settings)
y = bsxfun(@minus,x,settings.xoffset);
y = bsxfun(@times,y,settings.gain);
y = bsxfun(@plus,y,settings.ymin);
end
% Sigmoid Symmetric Transfer Function
function a = tansig_apply(n,~)
a = 2 ./ (1 + exp(-2*n)) - 1;
end
% Map Minimum and Maximum Output Reverse-Processing Function
function x = mapminmax_reverse(y,settings)
x = bsxfun(@minus,y,settings.ymin);
x = bsxfun(@rdivide,x,settings.gain);
x = bsxfun(@plus,x,settings.xoffset);
end
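To make question 1 concrete, here is a sketch with dummy data of how I think the round trip works (this assumes the network came from narxnet with 1:2 delays and that the data are arranged the way preparets would arrange them; Xnew below is just a placeholder for a later batch of data):

% Dummy series with N = 10 timesteps and Q = 1
N = 10;
x = num2cell(rand(1,N));   % exogenous 'input' series, 1xN cell
t = num2cell(rand(1,N));   % 'target' series, 1xN cell
% preparets(net,x,{},t) would normally build Xs/Xi/Ai; by hand for 2 delays:
X  = [x(3:N); t(3:N)];     % 2x(N-2) cell: timesteps 3..N are simulated
Xi = [x(1:2); t(1:2)];     % 2x2 cell: timesteps 1..2 become the delay states
[Y,Xf,Af] = myNeuralNetworkFunction(X,Xi,{});
size(Y)                    % 1x(N-2), i.e. two steps shorter than the raw series
% Xf holds the last two input columns, so a later call with new data can
% carry on where this one stopped:
%   [Y2,Xf2,Af2] = myNeuralNetworkFunction(Xnew,Xf,Af);

As far as I can tell from the time loop, each Y{1,ts} is computed only from the inputs at ts-1 and ts-2, so the shorter output and the "one step ahead" reading in question 1 seem to be two sides of the same thing.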
0 comments
Answers (1)
Faiz Gouri
on 18 Aug 2017
The following documents will be helpful for you:
1 comment
Greg Heath
on 19 Aug 2017
You might find some of my posts in both the NEWSGROUP and ANSWERS helpful. Search on:
greg narxnet
Hope this helps.
Greg