Samuel Somuyiwa

Active since 2021

Followers: 0   Following: 0

Statistics

  • Knowledgeable Level 2
  • First Answer

Feeds

Answered
Weight Tying for Layers in a CNN model
See attached weightTyingAutoEncoder layer example. The layer follows on from the example in the link that Sanjana shared earlier...

6 months ago | 1
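The attached weightTyingAutoEncoder layer is not reproduced in this preview. As a rough sketch of the idea only (sizes and variable names below are assumptions, not taken from the attachment), weight tying means the decoder reuses the transposed encoder weight matrix instead of learning a separate one:

inputSize = 784; hiddenSize = 64;          % assumed sizes
W = 0.01*randn(hiddenSize, inputSize);     % encoder weights
bEnc = zeros(hiddenSize, 1);               % encoder bias
bDec = zeros(inputSize, 1);                % decoder bias
encode = @(x) tanh(W*x + bEnc);            % encoder step
decode = @(h) W.'*h + bDec;                % decoder reuses W transposed (tied weights)
xHat = decode(encode(rand(inputSize, 1))); % round trip through the tied autoencoder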

Answered
How can I reassemble 'patch embedded' data back into original data structure in Vision Transformer on DeepNetworkDesigner?
Assuming you are using the Vision Transformer model as a backbone/encoder, you can obtain the output embedding from the last blo...

more than 1 year ago | 0 | accepted
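The full answer is truncated above. As a hedged sketch only, with assumed sizes (a 14-by-14 patch grid and a 768-dimensional embedding, neither taken from the original answer), the patch tokens coming out of the encoder can be reshaped back onto the spatial grid once the class token is removed:

numPatches = 14;  embedDim = 768;                     % assumed ViT-B/16 sizes
E = rand(embedDim, numPatches*numPatches);            % patch tokens, class token already removed
featureMap = reshape(E, embedDim, numPatches, numPatches);
featureMap = permute(featureMap, [2 3 1]);            % numPatches x numPatches x embedDim grid
% Whether rows or columns come first depends on the patch ordering the model uses.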

Answered
How to create an attention layer for deep learning networks?
You can create an attention layer as a custom layer, similar to spatialDropoutLayer in the example you are using in your current...

more than 2 years ago | 0 | accepted
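The custom layer itself is not shown in this preview. As a hedged sketch of the computation such a layer's predict method might implement (plain scaled dot-product attention, with all sizes assumed rather than taken from the answer):

numFeatures = 64;  numSteps = 10;          % assumed sizes
Q = rand(numFeatures, numSteps);           % queries
K = rand(numFeatures, numSteps);           % keys
V = rand(numFeatures, numSteps);           % values
scores = (Q.'*K) ./ sqrt(numFeatures);     % scaled dot-product scores
expS = exp(scores - max(scores, [], 2));   % numerically stable softmax...
A = expS ./ sum(expS, 2);                  % ...over the keys, row by row
context = V * A.';                         % attended output, numFeatures x numSteps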

Answered
Why the results are different by using trainNetwork and custom training loop?
The RMSE in the training plot of trainNetwork does not include the factor of half, whereas in the custom training loop you used ...

more than 2 years ago | 1 | accepted
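As a small numeric illustration of the half factor mentioned above (Y and T are placeholder predictions and targets, not data from the original question):

Y = rand(1, 100);  T = rand(1, 100);   % placeholder predictions and targets
halfMSE = 0.5 * mean((Y - T).^2);      % half-mean-squared-error loss used during training
rmsePlot = sqrt(mean((Y - T).^2));     % RMSE as displayed in the trainNetwork training plot
sqrt(2*halfMSE) - rmsePlot             % zero: dropping the half factor recovers the plotted RMSE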

Answered
LSTM Example for Multi input and Multi outputs
You can train a multi-output LSTM network using a custom training loop. Here is an example of how to train a network with multip...

more than 3 years ago | 0
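The full example is truncated above. As a hedged sketch of the general pattern only (layer names, sizes, and the two-output split are assumptions, not the code from the answer), a two-output dlnetwork can be trained by summing one loss term per output inside the model-loss function:

numFeatures = 12;  numResponses1 = 3;  numResponses2 = 1;   % assumed sizes
lgraph = layerGraph([ ...
    sequenceInputLayer(numFeatures, 'Name', 'in')
    lstmLayer(128, 'OutputMode', 'last', 'Name', 'lstm')
    fullyConnectedLayer(numResponses1, 'Name', 'fc1')]);
lgraph = addLayers(lgraph, fullyConnectedLayer(numResponses2, 'Name', 'fc2'));
lgraph = connectLayers(lgraph, 'lstm', 'fc2');
net = dlnetwork(lgraph);

% Inside the custom training loop (X, T1, T2 are formatted dlarray mini-batches):
%   [loss, gradients] = dlfeval(@modelLoss, net, X, T1, T2);
%   [net, avg, avgSq] = adamupdate(net, gradients, avg, avgSq, iteration);

function [loss, gradients] = modelLoss(net, X, T1, T2)
    [Y1, Y2] = forward(net, X, 'Outputs', ["fc1" "fc2"]);
    loss = mse(Y1, T1) + mse(Y2, T2);        % one loss term per output, summed
    gradients = dlgradient(loss, net.Learnables);
end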