Custom transfer function much slower than standard transfer functions

Hi,
I needed to define a custom transfer function for a custom neural network I am building, specifically a negative exponential transfer function, a = exp(n).
I followed the instructions in the documentation, and it does work in principle. The problem is that it is A LOT slower than the standard transfer functions.
I started investigating this and put breakpoints in the code of other transfer functions (in /usr/local/MATLAB/R2012b/toolbox/nnet/nntransfer). The debugger never breaks at these points, so it seems the Neural Network Toolbox is using compiled versions of the transfer functions.
How can I do the same with my custom transfer function?
The speed at which it is currently working makes it unusable for me.
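For reference, the core math of the function described above is trivial; a minimal sketch (the function name `expnet` is hypothetical, and the real toolbox template copied from an existing transfer function needs additional packaging code around it):

```matlab
% Illustrative sketch only: the element-wise math of the custom
% transfer function described above. The surrounding toolbox template
% (copied from a shipped function such as tansig or radbas) is not
% shown; the function name "expnet" is a hypothetical example.
function a = expnet(n)
    % Element-wise exponential transfer: a = exp(n).
    % Note its derivative is da/dn = exp(n) = a, so a derivative
    % subfunction in the template can simply reuse a.
    a = exp(n);
end
```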
Thanks for your help, it is very much appreciated! Rico

Answers (1)

Greg Heath
Greg Heath on 19 Dec 2013
Edited: Greg Heath on 19 Dec 2013
Just modify the code for radbas.
type radbas
Thank you for formally accepting my answer.
Greg
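In practice this suggestion amounts to printing the shipped implementation and copying it as an editable starting point; a minimal sketch (the target file name `myexp.m` is hypothetical):

```matlab
% View the shipped implementation of radbas, then copy it into the
% working directory under a new name to use as a template for the
% custom transfer function. The name "myexp.m" is a hypothetical example.
type radbas                             % print the source to the console
copyfile(which('radbas'), 'myexp.m');   % copy it as an editable template
% After renaming the function inside myexp.m, assign it to a layer:
% net.layers{1}.transferFcn = 'myexp';
```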

4 comments

Have you even read my question?
Greg Heath
Greg Heath on 19 Dec 2013
Edited: Greg Heath on 19 Dec 2013
Yes. However, it is not clear whether you have ever modified working code rather than writing your own.
Okay, yes: I have modified working code, renamed it, and, as I wrote, it does seem to work, just very slowly. What I don't understand is why it is that slow.
What I find especially odd is this: when I put a breakpoint in the standard implementation of one of the default transfer functions (like tansig) and train a neural net using that function, the debugger does not stop there, as if the code were never executed. However, if I use my modified transfer function, it does stop at the breakpoint.
Why is that?
Sorry for bringing up such an old thread again, but an actual answer would be really helpful.
I have two neural networks that do roughly the same thing (same input-output structure, solving a regression task). One uses built-in transfer functions and has about 250 layers (network 1). The other uses some custom transfer functions and has about 60 layers (network 2). I created the custom functions by copying the framework around an existing transfer function into the working directory and adapting the names and functions.
Network 2 fits the data much better, so that is the network I want to use. However, a call to sim(net, ...) takes about 7 (!) times longer on network 2 than on network 1. Why is this, and how can I make network 2 faster?
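One way to quantify the slowdown described above is to time sim on two otherwise-identical small networks, one using a built-in transfer function and one using the custom one. A sketch, assuming a custom transfer function named `myexp` (a hypothetical name) is on the MATLAB path:

```matlab
% Timing sketch: compare sim() with a built-in vs. a custom transfer
% function on otherwise identical networks. 'myexp' is a hypothetical
% custom transfer function assumed to be on the MATLAB path.
x = rand(10, 5000);                 % dummy inputs
t = rand(1, 5000);                  % dummy targets (needed by configure)

net1 = feedforwardnet(20);
net1 = configure(net1, x, t);       % built-in tansig hidden layer

net2 = feedforwardnet(20);
net2.layers{1}.transferFcn = 'myexp';
net2 = configure(net2, x, t);

tic; y1 = sim(net1, x); t1 = toc;
tic; y2 = sim(net2, x); t2 = toc;
fprintf('built-in: %.3fs   custom: %.3fs (%.1fx slower)\n', t1, t2, t2/t1);
% A large ratio here would be consistent with the toolbox falling back
% to a pure-MATLAB calculation path when a layer uses a custom function,
% instead of the compiled code paths it uses for built-in functions.
```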



Asked on 17 Dec 2013
Last commented on 17 Feb 2021
