How can I optimise the weights of a BPN using a genetic algorithm?

I am working on rainfall prediction using a BPN (back-propagation network). I have read that, to get better convergence, an optimisation method such as a genetic algorithm can be applied. How do I use the toolbox, especially when the weights are to be optimised?
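For concreteness, here is a rough sketch of one way this could be wired up, assuming the Neural Network Toolbox (feedforwardnet, configure, getwb, setwb, sim) and the Global Optimization Toolbox (ga, gaoptimset) are available; the hidden-layer size, GA options, and the variable names x and t are illustrative only:

% x: m-by-N input matrix, t: 1-by-N target row (prepared beforehand)
net   = feedforwardnet(10);             % fixed topology (e.g. 5-10-1 once configured)
net   = configure(net, x, t);           % fix input/output sizes so getwb/setwb work
nvars = numel(getwb(net));              % total number of weights and biases

% fitness = MSE of the network evaluated with a candidate weight vector wb
fitfcn = @(wb) mean((t - sim(setwb(net, wb), x)).^2);

opts   = gaoptimset('PopulationSize', 50, 'Generations', 200);
wbBest = ga(fitfcn, nvars, [], [], [], [], [], [], [], opts);

net = setwb(net, wbBest);               % load the evolved weights back into the net
yGA = sim(net, x);                      % predictions with the GA-optimised weights

Note that ga typically needs a very large number of fitness evaluations, which is essentially the objection Greg raises in his answer below.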

1 Comment

Kelvin on 21 Mar 2013
Edited: Kelvin on 21 Mar 2013
m = 5;   % number of nodes in the input layer
n = 10;  % number of nodes in the hidden layer
o = 1;   % number of nodes in the output layer
xy = load('2010-2011-2012.txt');   % load the rainfall series
mx = max(xy);                      % normalise the data to [0, 1]
x  = xy/mx;
a = 0;
b = 1;
w1 = a + (b-a)*rand(m,n);          % weights from input to hidden layer
w2 = a + (b-a)*rand(n,o);          % weights from hidden to output layer
w1ini = w1;
w2ini = w2;
theta1 = a + (b-a)*rand(1,n);      % hidden-layer biases
theta2 = a + (b-a)*rand(1,o);      % output-layer biases
theta1ini = theta1;
theta2ini = theta2;
deltheta1 = zeros(1,n);            % bias updates
deltheta2 = zeros(1,o);
yh = zeros(1,n);                   % hidden-layer outputs
yo = zeros(1,o);                   % output-layer outputs
deltah = zeros(1,n);               % hidden-layer error terms
deltao = zeros(1,o);               % output-layer error terms
delw1 = zeros(m,n);                % weight updates (previous step reused for momentum)
delw2 = zeros(n,o);
alpha = 0.7;   % learning rate
etta  = 0.95;  % momentum factor
z  = 0;
t  = [];
er = [];
while z < 1500                             % epochs
    c = 0;
    while c < 1091                         % slide a window of m values over the series
        % forward pass: hidden layer
        for j = 1:n
            s = 0;
            for i = 1:m
                s = s + x(i+c)*w1(i,j);
            end
            yh(j) = logsig(s + theta1(j)); % output at hidden layer
        end
        % forward pass: output layer
        for k = 1:o
            s = 0;
            for j = 1:n
                s = s + yh(j)*w2(j,k);
            end
            yo(k) = logsig(s + theta2(k)); % output at output layer
        end
        err    = x(c+m+1) - yo(o);         % error against the next value in the series
        t(c+1) = err;
        % backward pass: gradient descent with momentum
        deltao(o) = err*(1 - yo(o))*yo(o);
        for k = 1:o
            for j = 1:n
                delw2(j,k) = alpha*yh(j)*deltao(k) + etta*delw2(j,k);
            end
            deltheta2(k) = alpha*deltao(k);
        end
        for j = 1:n
            s = 0;
            for k = 1:o
                s = s + deltao(k)*w2(j,k);
            end
            deltah(j) = yh(j)*(1 - yh(j))*s;
        end
        for j = 1:n
            for i = 1:m
                delw1(i,j) = alpha*deltah(j)*x(i+c) + etta*delw1(i,j);
            end
            deltheta1(j) = alpha*deltah(j);
        end
        % apply the updates
        w1 = w1 + delw1;
        w2 = w2 + delw2;
        theta1 = theta1 + deltheta1;
        theta2 = theta2 + deltheta2;
        c = c + 1;
    end
    mse = mean(t(1:1091).^2)               % training MSE for this epoch (displayed)
    er(z+1) = mse;
    z = z + 1;
end


Accepted Answer

The value MSE = 0.0054 means absolutely nothing to me because I don't know the scale of the target data.
On the other hand, the normalized values NMSEtrn, NMSEval and NMSEtst instantly tell me whether or not I think the design is acceptable.
ytrn00 = mean(ttrn,2) is the output of the NAIVE CONSTANT OUTPUT MODEL. Using that output for an arbitrary input from the same probability distribution yields the reference values
MSEtrn00 = mse(ttrn-ytrn00) = mean(var(ttrn',1))
MSEval00 = mse(tval-ytrn00)
MSEtst00 = mse(ttst-ytrn00)
The normalized values and the corresponding R-squared (R^2, coefficient of determination; see Wikipedia) values are
NMSEtrn = MSEtrn/MSEtrn00
NMSEval = MSEval/MSEval00
NMSEtst = MSEtst/MSEtst00
and
R2trn = 1 - NMSEtrn
R2val = 1 - NMSEval
R2tst = 1 - NMSEtst
R^2 can be interpreted as the fraction of target variance that is modeled (AKA "explained") by the net.
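As a compact sketch, assuming single-output (1-by-N) target rows ttrn, tval, ttst and the corresponding network outputs ytrn, yval, ytst, the quantities above can be computed as:

ytrn00   = mean(ttrn, 2);                      % naive constant-output model
MSEtrn00 = mean(var(ttrn', 1));                % reference MSEs
MSEval00 = mean((tval - ytrn00).^2);
MSEtst00 = mean((ttst - ytrn00).^2);
NMSEtrn  = mean((ttrn - ytrn).^2) / MSEtrn00;  % normalised errors
NMSEval  = mean((tval - yval).^2) / MSEval00;
NMSEtst  = mean((ttst - ytst).^2) / MSEtst00;
R2trn    = 1 - NMSEtrn;  R2val = 1 - NMSEval;  R2tst = 1 - NMSEtst;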
Exactly what gradient-descent disadvantages do you claim to have overcome? Did you obtain any gradient-descent designs for comparison (especially training times)?
Hope this helps
Greg

3 Comments

Sorry for being so negative.
Two important things that are missing from the NNTBX are
1. Cross-validation
2. Genetic training
Greg

The value ranges from 0 to 143 and I have normalised it between 0 and 1. While training I get an MSE of 0.0054 and, while testing, 0.0153.

The value of what? Normalized it how?
What are R2trn, R2val and R2tst?


More Answers (1)

Greg Heath on 21 Mar 2013
I have read that ... Where? ... The internet?
Please clarify: is your prediction problem static (simultaneous input and output) or dynamic (input and/or feedback delays are involved)?
A genetic algorithm is most useful for determining an optimum nonstandard network node topology, for example nonuniform (sporadic?) input, feedback, skip-layer and output connections. However, it is too slow for estimating weights given a fixed topology like those represented by the various MATLAB functions.
What MATLAB design functions have you tried for comparison?
Greg
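As a point of comparison, a minimal gradient-based baseline with a standard toolbox design function might look like the sketch below, assuming the Neural Network Toolbox; x is an m-by-N input matrix and t a 1-by-N target row, prepared as in the question's code (names are illustrative):

net = fitnet(10);                        % one hidden layer with 10 nodes
[net, tr] = train(net, x, t);            % Levenberg-Marquardt (trainlm) by default
y = net(x);                              % network outputs
ttst    = t(tr.testInd);                 % held-out test targets
ytst    = y(tr.testInd);
NMSEtst = mean((ttst - ytst).^2) / mean(var(ttst', 1));
R2tst   = 1 - NMSEtst                    % fraction of test-set variance explained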

1 Comment

http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6160092&tag=1 In the linked paper it is said that a genetic algorithm and simulated annealing can be used to overcome the disadvantages of BPN. The prediction that I am using is static. I was able to train the network with the past data and also test it, obtaining an MSE of 0.0054. I want to bring down the error and also increase the convergence rate.


Asked on 20 Mar 2013
