
Need help! I'm trying to build a neural network, but it does not fit the training data well!

I checked my feedforward and backpropagation steps but can't find the error. Please help! Thank you so much.
Here is my code:
%% Prepare data
% train data:
I=1; % number of independent variables
F=1; % number of dependent variables
Ib=[-pi:0.1:(4*pi)].'; % input matrix
Yb= sin(Ib); % (actual) output matrix
P=size(Ib,1); % number of observations
% Normalize:
Input(:,1)=(Ib(:,1)-min(Ib(:,1)))/(max(Ib(:,1))-min(Ib(:,1)));
Y(:,1)=(Yb(:,1)-min(Yb(:,1)))/(max(Yb(:,1))-min(Yb(:,1)));
%% Neural Network:
% hyperparameter for NN:
Gen=10^8; % maximum number of training iterations
Epsi= 10^(-7); % convergence tolerance
SSEold=999; % previous sum of squared errors (initialized large)
m=0; % iteration counter
A=1; % numerator of the learning-rate schedule
B=10000; % denominator offset of the learning-rate schedule
mu=A/(B+m); % learning rate, decays as training proceeds
% layer:
N=1; % number of hidden layers
H=15; % number of hidden nodes
g=0.01; % scale factor for the random initial weights
% Initial parameter:
W1=rand(I+1,H); % input-to-hidden weights (extra row for the appended bias column)
X=g.*rand(H,F); % hidden-to-output weights
bias=g*rand(P,1); % bias node
bias1=g*rand(1,1); % bias weight to output layer
% Add bias node to Input layer:
Input=cat(2,Input,bias);
Err=10;
while (Err >Epsi && m < Gen)
%% Forward propagation:
Vp=Input*W1;
Vp=1./(1+exp(-Vp));
Yhat=bias1+Vp*X;
SSEnew=sum(abs(Y-Yhat).^2,1);
%% Backward propagation:
Yerr=Y-Yhat;
bias1=bias1+mu*sum(Yerr);
W1=W1+mu*(Input.')*(Yerr*(X.').*Vp.*(1-Vp));
X=X+mu*(Yerr'*Vp).';
%% Update:
m=m+1;
mu=A/(B+m);
%% Convergence check (change in SSE):
Err=abs(SSEnew-SSEold);
SSEold=SSEnew;
end
%% plot
figure1=figure;
grid on
plot(Y)
hold on
plot(Yhat)
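
For reference, the loop above implements plain batch gradient descent on the sum of squared errors. Written out with the variable names from the code (where $\sigma$ is the logistic sigmoid and $\circ$ is elementwise multiplication):

$$V_p = \sigma(\mathrm{Input}\,W_1), \qquad \hat{Y} = b_1 + V_p X, \qquad \mathrm{SSE} = \textstyle\sum_p (Y - \hat{Y})^2$$

$$X \leftarrow X + \mu\, V_p^{\top}(Y-\hat{Y}), \qquad W_1 \leftarrow W_1 + \mu\, \mathrm{Input}^{\top}\!\big[(Y-\hat{Y})X^{\top} \circ V_p \circ (1-V_p)\big], \qquad b_1 \leftarrow b_1 + \mu \textstyle\sum_p (Y-\hat{Y})$$

The factor of 2 from differentiating the squared error is absorbed into the learning rate $\mu$, which is a common convention.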

Accepted Answer

Greg Heath
Greg Heath on 26 Nov 2018
>> Vp = Input * W1;
Error using *
>> size(Input), size(W1)
ans =
   158     4
ans =
     2    15
Hope this helps
Thank you for formally accepting my answer
Greg
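
A plausible reading of the size check above (an editor's interpretation, not spelled out in the answer): the script appends the bias column to Input with cat but never clears the workspace, so every re-run of the script grows Input by one more column. After three runs Input is 158-by-4 while W1 is still 2-by-15, and the product fails. A minimal sketch of the drift, assuming the script was re-run in a stale workspace:

Input = rand(158,1);                % fresh workspace: one normalized input column
Input = cat(2, Input, rand(158,1)); % first run appends the bias column -> 158x2, Input*W1 is fine
Input = cat(2, Input, rand(158,1)); % second run, no clear -> 158x3, Input*W1 errors
Input = cat(2, Input, rand(158,1)); % third run -> 158x4, matching the sizes printed above

Adding clear all at the top of the script, as done in the comment below, avoids the stale variable.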
  1 Comment
Quang Cung
Quang Cung on 26 Nov 2018
Hi Greg Heath, that was a good catch, thanks. But I still have a problem with the code: the result from the NN does not fit the training data set well. Here is the code after my modifications; please help me fix it. Thanks.
clear all
clc
%% Prepare data
% =========train data===========
Ib=[-pi:0.1:(4*pi)].'; % input matrix
Yb= sin(Ib); % (actual) output matrix
% ==========Normalize============
Input(:,1)=(Ib(:,1)-min(Ib(:,1)))/(max(Ib(:,1))-min(Ib(:,1)));
Y(:,1)=(Yb(:,1)-min(Yb(:,1)))/(max(Yb(:,1))-min(Yb(:,1)));
%% Neural Network:
P=size(Input,1); % number of observations
I=size(Input,2); % number of independent variables
F=size(Y,2); % number of dependent variables
% =========operational parameter for NN===========
Gen= 10^(6); % maximum number of training iterations
Epsi= 10^(-7); % convergence tolerance
m=0; % iteration counter
A=1; % numerator of the learning-rate schedule
B=10000; % denominator offset of the learning-rate schedule
mu=A/(B+m); % learning rate, decays as training proceeds
SSEold=999; % previous sum of squared errors (initialized large)
% ============layer================================
N=1; % number of hidden layers
H=15; % number of hidden nodes
g=0.5; % scale factor for the random initial weights
% ============Initial parameter====================
W1=rand(I+1,H); % input-to-hidden weights (extra row for the appended bias column)
X=g.*rand(H,F); % hidden-to-output weights
bias=g*rand(P,1); % bias node
bias1=g*rand(1,1); % bias weight to output layer
% =============Add bias node to Input layer========
Input=cat(2,Input,bias);
Err=10;
while (Err >Epsi && m < Gen)
%% Forward propagation:
Vp=Input*W1;
Vp=1./(1+exp(-Vp));
Yhat=bias1+Vp*X;
SSEnew=sum(abs(Y-Yhat).^2,1);
%% Backward propagation:
Yerr=Y-Yhat;
bias1=bias1+mu*sum(Yerr);
W1=W1+mu*(Input.')*(Yerr*(X.').*Vp.*(1-Vp));
X=X+mu*(Yerr.'*Vp).';
%% Update:
m=m+1;
mu=A/(B+m);
%% Convergence check (change in SSE):
Err=abs(SSEnew-SSEold);
SSEold=SSEnew;
end
%% plot
figure1=figure;
grid on
plot(Y)
hold on
plot(Yhat)
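
The remaining fitting problem is left unresolved in the thread. Two details in the posted code that a reader might experiment with; both are assumptions about the cause, not confirmed by anyone in the thread. First, the appended bias "node" is a column of random values, whereas a bias input is conventionally a constant column of ones. Second, mu = A/(B+m) starts at 10^(-4) and only decays from there, which makes learning very slow. A minimal sketch of the two hypothetical changes:

bias = ones(P,1); % hypothetical change: constant bias input instead of g*rand(P,1)
B = 100;          % hypothetical change: mu = A/(B+m) then starts near 0.01 instead of 0.0001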


