I am writing a custom regression layer, and my forward and backward loss computations need to access some external parameters. How do I track the indexing of minibatches?

6 views (last 30 days)
Hi,
I am writing my own custom output regression layer for my deep neural network. I have written the forward and backward loss functions, which need to access parameters from my workspace (not part of the training data) for their computation. This is similar to a question posted here previously.
I understand the two options given in the MATLAB staff answer, i.e.:
  1. either use trainNetwork with my custom layer, or
  2. create a dlnetwork with a custom training loop, define my loss function, and let dlgradient compute the derivatives automatically.
I cannot use the first option because it would require me to keep track of the indices of the minibatches, and I don't know how to do that in trainNetwork. However, I can track the indices by designing a custom training loop, i.e., by taking option 2 above. In taking that option, though, I would need to design my own dlgradient function, because again I would need to access those additional parameters that are not part of the training data when calculating the backward loss function (the gradient).
So my questions are:
  1. Can I redefine my own dlgradient function?
  2. If I take the option of designing my own custom regression layer with a custom forward and backward function, is there a way of keeping track of minibatch indices?
I'll really appreciate your guidance.
Cheers,
Sahar

Answers (1)

Avadhoot on 16 Jan 2024
Hi Sahar,
I understand that you are facing a problem with how to calculate minibatch indices in the "trainNetwork" function and how to structure a custom training loop that would incorporate your custom loss function. The solutions to both of your questions are as follows:
Firstly, regarding your question about writing your own "dlgradient" function: it is not possible to redefine "dlgradient". However, you can write a custom loss function that accepts all the additional parameters as extra input arguments and calls "dlgradient" inside it to compute the gradients. Note that "dlgradient" must be called inside a function that is evaluated with "dlfeval"; this is what enables the automatic-differentiation tracing. There are some limitations; for instance, higher-order derivatives are not supported for "dlnetwork" objects that contain custom layers with a custom backward function.
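As an illustration, here is a minimal sketch of such a loss function, assuming a "dlnetwork" object named "net", minibatch inputs "X" and targets "T" as formatted "dlarray" objects, and a vector "w" of per-observation workspace parameters; the name "modelLoss" and the weighted-MSE form of the loss are placeholder assumptions, not your actual loss:

function [loss, gradients] = modelLoss(net, X, T, w)
    % Forward pass through the network
    Y = forward(net, X);
    % Placeholder forward loss: a per-observation weighted MSE, where "w"
    % holds the extra workspace parameters (substitute your own loss here)
    w = w(:).';                                  % ensure a 1-by-batch row
    loss = sum(w .* mean((Y - T).^2, 1), "all") / numel(w);
    % Automatic differentiation with respect to the learnable parameters
    gradients = dlgradient(loss, net.Learnables);
end

You then evaluate it with "dlfeval", which provides the differentiation trace that "dlgradient" needs:

[loss, gradients] = dlfeval(@modelLoss, net, X, T, w);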
Secondly, you can keep track of your minibatch indices if you use your custom regression layer within a custom training loop, as it gives you full control over how the data is batched. The "trainNetwork" function handles batching internally, so the indices are not directly controllable by the user; you can access other quantities such as the current minibatch training loss and the validation frequency, but MATLAB hides the minibatch indices. In a custom training loop, you compute the minibatch indices yourself, so you always know which observations are in the current batch; a minimal sketch follows.
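Here is a minimal sketch of such a loop, reusing the "modelLoss" function above and assuming image-like training data "XTrain" (H-by-W-by-C-by-N), targets "TTrain" (numResponses-by-N), a length-N vector "obsParams" of per-observation workspace parameters, and Adam updates via "adamupdate"; all names and hyperparameter values here are assumptions, not a definitive recipe:

numObservations = size(XTrain, 4);
miniBatchSize = 128;
numEpochs = 10;
averageGrad = [];
averageSqGrad = [];
iteration = 0;

for epoch = 1:numEpochs
    % Shuffle once per epoch and keep the permutation
    shuffledIdx = randperm(numObservations);
    for i = 1:floor(numObservations / miniBatchSize)
        iteration = iteration + 1;
        % Indices of the observations in the current minibatch
        batchIdx = shuffledIdx((i-1)*miniBatchSize + 1 : i*miniBatchSize);
        X = dlarray(single(XTrain(:,:,:,batchIdx)), "SSCB");
        T = dlarray(single(TTrain(:,batchIdx)), "CB");
        % Select the workspace parameters belonging to this minibatch
        batchW = obsParams(batchIdx);
        % Evaluate loss and gradients inside dlfeval
        [loss, gradients] = dlfeval(@modelLoss, net, X, T, batchW);
        % Update the learnable parameters (Adam, as an example)
        [net, averageGrad, averageSqGrad] = adamupdate(net, gradients, ...
            averageGrad, averageSqGrad, iteration);
    end
end

Because "batchIdx" is computed explicitly, you always know which observations, and therefore which of your external parameters, the current minibatch contains.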
For more information, refer to the following documentation:
  1. trainingOptions: https://www.mathworks.com/help/deeplearning/ref/trainingoptions.html?s_tid=doc_ta#d126e216218
  2. OutputFcn: https://www.mathworks.com/help/deeplearning/ug/customize-output-during-deep-learning-training.html
  3. "dlfeval" and "dlgradient": https://www.mathworks.com/help/deeplearning/ug/include-automatic-differentiation.html
I hope this helps.
