Self Organizing Map training question

Hi,
I have a difficult question about using MATLAB's Neural Network Toolbox. I would like to train a SOM (self-organizing map) network on a data set; however, my data set is quite large. Because of this, I need to split the data into sections and train on each section individually. Here's my code now:
%% Combination method
%IN THIS EXAMPLE IT'S POSSIBLE BECAUSE IT'S A SMALL DATASET. IT IS NOT POSSIBLE FOR MY ACTUAL DATA
%Load and combine the data
data1 = [1:10:400;1:20:800]';
data2 = [400:1:440;800:1:840]';
combined = [data1;data2]';
% Create a Self-Organizing Map
dimension1 = 5;
dimension2 = 5;
net = selforgmap([dimension1 dimension2]);
% Train the Network
[net,tr] = train(net,combined);
%Plot combined results
plotsomhits(net,combined);
plotsomhits(net,data1');
plotsomhits(net,data2');
%% Iterative method
%This is what I actually want to use to train the network
% Create a Self-Organizing Map
dimension1 = 5;
dimension2 = 5;
net = selforgmap([dimension1 dimension2]);
% Train the Network
data1 = [1:10:400;1:20:800]';
[net,tr] = train(net,data1');
data2 = [400:1:440;800:1:840]';
[net,tr] = train(net,data2');
% View the Network
combined = [data1;data2]';
plotsomhits(net,combined);
plotsomhits(net,data1');
plotsomhits(net,data2');
As you can tell, the results are skewed significantly because the network is trained twice. Is there any way to limit the bias introduced by the second round of training?

5 comments

Simon Nunn
Simon Nunn on 13 Jul 2017
Edited: Simon Nunn on 13 Jul 2017
I am also having a similar issue to this.
First, I can see that redefining the network with a second call to selforgmap() would completely discard any training performed by the first call to train().
However, removing that redefinition is not sufficient to solve the issue.
Having taken the time to read the manual, I also experimented with adapt() instead of train(). adapt() is supposed to perform one step of the training process, for use when you are not batch-processing data; it works fine with NARX and other feedforward networks, but with a SOM it seems to reset the network in the same way that train() does.
Digging even deeper, I started experimenting with calling the learning function learnsomb() directly, with the intent of manually applying the delta-weight matrix it returns, but I've struggled to find a suitable input for A.
After some reverse engineering of the MATLAB code I've finally run out of steam, and I've come here to find answers.
Here's the code I have so far:
% lifted from debugging MATLAB code
% feval(learnFcn,net.IW{i,j}, ...
% PD{i,j,ts},
% IWZ{i,j},
% N{i},
% Ac{i,ts+numLayerDelays},
% t,
% e,
% gIW{i,j},
% gA{i},
% net.layers{i}.distances,
% net.inputWeights{i,j}.learnParam,
% IWLS{i,j});
w = net.IW{1,1};                       % input weight matrix (one row per neuron)
d = net.layers{1}.distances;           % neuron-to-neuron grid distances
a = net(P);                            % layer output for input P -- this is not right :(
LP = net.inputWeights{1,1}.learnParam; % learning parameters for the SOM
[dW,ls] = learnsomb(w,P,[],[],a,[],[],[],[],d,LP,[]);
negar BAIBORDI
negar BAIBORDI on 29 Jun 2023
Moved: DGM on 29 Jun 2023

Hello, I'm having difficulty understanding the sample hits plot. I want to know which of the data points that I imported into the self-organizing map each neuron corresponds to. Please help me.

negar BAIBORDI
negar BAIBORDI on 29 Jun 2023
Moved: DGM on 29 Jun 2023

I couldn't analyze this plot. Please guide me.

DGM
DGM on 29 Jun 2023
What plot?
Everybody else in this thread has been inactive for years. If you want to ask a question, ask a clear and specific question. Don't hide a tangent in a random dead thread somewhere and expect people to find it and guess what you want.
negar BAIBORDI
negar BAIBORDI on 30 Jun 2023
Hello, I'm having difficulty understanding the sample hits plot. I want to know which of the data points that I imported into the self-organizing map toolbox in MATLAB each neuron corresponds to. Please help me. I attached the picture to make it clear.
Best regards

Sign in to comment.

Accepted Answer

Greg Heath
Greg Heath on 13 Jul 2013

0 votes

This is a well known NN training phenomenon simply referred to as forgetting. See comp.ai.neural-nets posts and FAQ.
The only way to mitigate forgetting is to make sure that the salient characteristics of the first training set are reinforced during the later learning. Typically, those characteristics are represented by a subset of first-set samples or by cluster centers.
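A minimal sketch of this rehearsal idea, using the toy data from the question (the subset size and the use of the trained prototypes are illustrative choices, not a documented toolbox feature):

```matlab
% Sketch: mitigate forgetting by "rehearsing" part of the first batch
% when training on the second batch.
net = selforgmap([5 5]);

data1 = [1:10:400; 1:20:800]';   % 40 samples, 2 features
data2 = [400:1:440; 800:1:840]'; % 41 samples, 2 features

% Train on the first batch as usual.
[net,tr] = train(net, data1');

% Option A: mix a random subset of the first batch into the second pass.
keep = data1(randperm(size(data1,1), 10), :);  % retain ~25% of batch 1
[net,tr] = train(net, [keep; data2]');

% Option B: if the first batch can no longer be stored, use the trained
% prototypes (weight vectors, one row per neuron) as a stand-in for it.
% prototypes = net.IW{1,1};
% [net,tr] = train(net, [prototypes; data2]');

plotsomhits(net, [data1; data2]');
```

Option A keeps real samples, so the hit counts remain interpretable; Option B trades that for constant memory, since the number of prototypes is fixed by the map size.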
Hope this helps.
Thank you for formally accepting my answer
Greg

1 comment

Darin McCoy
Darin McCoy on 15 Jul 2013
Thanks Greg,
For others who struggle with this issue - I recommend reading this white paper...it's pretty good!

Sign in to comment.

More Answers (0)


Asked: 12 Jul 2013
Commented: 30 Jun 2023
