Load the dlnetwork object and class names from the MAT file dlnetDigits.mat.
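A minimal sketch of this step, assuming the MAT file stores the network and the class names in variables named net and classNames (the variable names are assumptions):

s = load("dlnetDigits.mat");
net = s.net;
classNames = s.classNames;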
Accelerate the model loss function modelLoss, listed at the end of the example, using the dlaccelerate function.
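For example, assuming the loss function is defined as modelLoss (shown at the end of the example):

accfun = dlaccelerate(@modelLoss);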
Clear any previously cached traces of the accelerated function using the clearCache function.
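A minimal call, assuming the accelerated function is stored in accfun:

clearCache(accfun)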
View the properties of the accelerated function. Because the cache is empty, the Occupancy property is 0.
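Displaying the object (stored here in the assumed variable accfun) at the command line shows its properties:

accfun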
accfun = 
  AcceleratedFunction with properties:

          Function: @modelLoss
           Enabled: 1
         CacheSize: 50
           HitRate: 0
         Occupancy: 0
         CheckMode: 'none'
    CheckTolerance: 1.0000e-04
The returned AcceleratedFunction object stores the traces of underlying function calls and reuses the cached result when the same input pattern reoccurs. To use the accelerated function in a custom training loop, replace calls to the model loss function with calls to the accelerated function. You can invoke the accelerated function as you would invoke the underlying function. Note that the accelerated function is not a function handle.
Evaluate the accelerated model loss function with random data using the dlfeval function.
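A sketch of this step, assuming the network accepts 28-by-28 grayscale images, classNames contains 10 classes, and a mini-batch size of 128 (these sizes are assumptions):

% Random input images in "SSCB" (spatial, spatial, channel, batch) format.
X = dlarray(rand(28,28,1,128),"SSCB");

% Random one-hot encoded targets in "CB" (channel, batch) format.
T = categorical(classNames(randi(10,[128 1])));
T = onehotencode(T,2)';
T = dlarray(T,"CB");

% Evaluate the accelerated function using dlfeval.
[loss,gradients,state] = dlfeval(accfun,net,X,T);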
View the Occupancy property of the accelerated function. Because the function has been evaluated, the cache is nonempty.
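For example, assuming the accelerated function is stored in accfun:

accfun.Occupancy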
Clear the cache using the clearCache function.
View the Occupancy property of the accelerated function. Because the cache has been cleared, the cache is empty.
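A sketch of these two steps, using the same assumed variable name accfun:

% Clear the cached traces.
clearCache(accfun)

% Check the cache occupancy again. After clearing, Occupancy is 0.
accfun.Occupancy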
Model Loss Function
The modelLoss function takes a dlnetwork object net and a mini-batch of input data X with corresponding target labels T, and returns the loss, the gradients of the loss with respect to the learnable parameters in net, and the network state. To compute the gradients, use the dlgradient function.
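A sketch of the function consistent with this description, assuming the network produces probabilities suitable for cross-entropy loss (the choice of loss is an assumption):

function [loss,gradients,state] = modelLoss(net,X,T)

% Forward the input data through the network and return the updated state.
[Y,state] = forward(net,X);

% Compute the cross-entropy loss between the predictions and the targets.
loss = crossentropy(Y,T);

% Compute gradients of the loss with respect to the learnable parameters.
gradients = dlgradient(loss,net.Learnables);

end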