A single-hidden-layer NN where the weight matrices are diagonal (with a bias) -- how to design that?
Shashi Kant
on 30 Apr 2020
Commented: Shashi Kant
on 29 May 2020
I am wondering if I can create a single-hidden-layer NN in which the weight matrices are diagonal matrices (with biases). How would I design that?
---------------------------------------------------------
Background
Let's say that a single-hidden-layer model (with m neurons) is given as
$y = W_2 \, \sigma(W_1 x + b_1) + b_2$, where
- $x$ is the input vector,
- $W_1$ is the first weight matrix, connecting the input with the hidden layer of m nodes,
- $b_1$ is a bias,
- $\sigma$ is an activation function which operates component-wise, say $\tanh$,
- $W_2$ is the second weight matrix, connecting the hidden layer with the output,
- $b_2$ is a bias, and
- $y$ is the output vector.
My model requires the weight matrices $W_1$ and $W_2$ to be diagonal only. How can I do that?
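For concreteness, here is a minimal MATLAB sketch of the forward pass described above, with both weight matrices constrained to be diagonal (all variable names and values are illustrative):
m  = 4;                            % number of hidden neurons
x  = rand(m,1);                    % input vector (m-dimensional so that W1 can be square and diagonal)
W1 = diag(rand(m,1));              % first weight matrix, diagonal
b1 = rand(m,1);                    % first bias
W2 = diag(rand(m,1));              % second weight matrix, diagonal
b2 = rand(m,1);                    % second bias
y  = W2*tanh(W1*x + b1) + b2       % forward pass with component-wise tanh activation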
0 comments
Accepted Answer
Srivardhan Gadila
on 3 May 2020
You can access the layer weights as follows:
net.LW{i,j}
You can set any values for those weights and then set net.layerWeights{i,j}.learn to 0 so that the weights are not altered during training and adaption. Note that this property applies to the entire set of connections between layers i and j, so it is not possible to control the learning of an individual connection weight this way.
net.layerWeights{i,j}.learn = 0
net.LW{i,j} = ones(size(net.LW{i,j})) % any weights of size(net.LW{i,j})
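For example, here is a hedged sketch of that workflow for a shallow network created with feedforwardnet (the hidden layer size m, the random data, and the diagonal values are illustrative assumptions, not from the answer):
m = 5;                                  % hidden layer size (example)
X = rand(m,20); T = rand(m,20);         % m-dimensional inputs/targets so the weight matrices are square
net = feedforwardnet(m);                % one hidden layer with tansig (tanh-like) activation
net = configure(net, X, T);             % fix the sizes so IW{1,1} and LW{2,1} are m-by-m
net.inputWeights{1,1}.learn = 0;        % freeze the input weights
net.IW{1,1} = diag(rand(m,1));          % and fix them to a diagonal matrix
net.layerWeights{2,1}.learn = 0;        % freeze the layer weights
net.LW{2,1} = diag(rand(m,1));          % and fix them to a diagonal matrix
net = train(net, X, T);                 % the biases are still learned; the frozen diagonal weights are not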
If your network architecture is already defined and trained:
Then you can set the weight of the connection between node k of layer i and node l of layer j as follows:
net.LW{i,j}(k,l) = 1
and then use the network.
The above can be done for the input weights (net.IW and net.inputWeights) too.
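As an illustration of this already-trained case, a small sketch that forces the whole trained weight matrices to be diagonal rather than setting a single entry (assuming the feedforwardnet layout above, where the layer weights are net.LW{2,1} and the input weights are net.IW{1,1}):
W = net.LW{2,1};                        % trained layer weights
net.LW{2,1} = diag(diag(W));            % keep only the diagonal entries, zero the rest
net.IW{1,1} = diag(diag(net.IW{1,1})); % same idea for the input weights
y = net(rand(m,1));                     % then use the network on an m-dimensional input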
4 comments
Srivardhan Gadila
on 29 May 2020
Edited: Srivardhan Gadila
on 29 May 2020
- I listed the possible things you can do with respect to the weights of the layers of shallow neural networks in the answer.
- The property net.layerWeights{i,j}.learn applies to the entire set of connections between layers i and j, hence you cannot set only the diagonal weights to learn and the off-diagonal weights not to learn.
- You can instead define a custom deep learning layer to achieve this; a sketch follows this list. Refer to Define Custom Deep Learning Layers & Define Custom Deep Learning Layer with Learnable Parameters.
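Along the lines of that documentation, here is a hedged sketch of a custom layer whose only learnable weights are the diagonal entries plus a bias (the class name diagonalLayer and its initialization are assumptions for illustration; it relies on automatic differentiation support for custom layers, so no backward method is written):
classdef diagonalLayer < nnet.layer.Layer
    properties (Learnable)
        D   % learnable diagonal of the weight matrix (column vector)
        b   % learnable bias
    end
    methods
        function layer = diagonalLayer(numUnits, name)
            % Construct a layer with numUnits diagonal weights and biases.
            layer.Name = name;
            layer.Description = "Diagonal weight layer with " + numUnits + " units";
            layer.D = 0.01*randn(numUnits,1);
            layer.b = zeros(numUnits,1);
        end
        function Z = predict(layer, X)
            % Multiplying element-wise by D is the same as multiplying by diag(D),
            % so only the diagonal weights (plus the bias) are used and learned.
            Z = layer.D .* X + layer.b;
        end
    end
end
Two such layers with a tanhLayer in between would reproduce the diagonal model from the question, with only the diagonal entries and the biases being learned.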
More Answers (0)