Why doesn't my CNN fit into the memory of the GPU?

1 view (last 30 days)
Juuso Korhonen
Juuso Korhonen on 13 Mar 2021
Commented: Joss Knight on 15 Mar 2021
Hi,
I tried to inflate VGG16 to 3-D using this function: https://se.mathworks.com/matlabcentral/fileexchange/87594-3d-convolutional-neural-network?s_tid=srchtitle . It seemed to work (at least analyzeNetwork() did not report any errors), but when I try to train the network as a batch job on the GPU, it fails with:
Error using parallel.Job/fetchOutputs (line 1264)
An error occurred during execution of Task with ID 1.
Caused by:
Error using trainNetwork (line 183)
Maximum variable size allowed on the device is exceeded.
Error using nnet.internal.cnngpu.convolveForwardND
Maximum variable size allowed on the device is exceeded.
However, the GPU is a Tesla V100 with 32 GB of RAM, and if I use whos lgraph to check the size of the network, it only reports:
whos lgraph
  Name        Size        Bytes        Class                   Attributes
  lgraph      1x1         176553068    nnet.cnn.LayerGraph
But how come the 2-D VGG16 is said to be around 500 MB already, while my 3-D VGG16 sits at only about 180 MB? How do I calculate the actual memory footprint of the network and what is needed on the GPU? My images (volumes) have the following size:
whos v
  Name      Size             Bytes       Class     Attributes
  v         224x224x224      44957696    single
And my mini-batch size is 10.
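For reference, here is how I tried to estimate the memory taken by the inputs alone (just a rough sketch in single precision, 4 bytes per element; it ignores activations, gradients and the network parameters):

% rough input-memory estimate for one mini-batch (assumes single precision, 4 bytes/element)
bytesPerVolume = prod([224 224 224]) * 4    % 44957696 bytes, matching whos v
miniBatchSize  = 10;
batchBytes     = bytesPerVolume * miniBatchSize;
batchBytes / 1e6                            % roughly 450 MB for the raw inputs only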

Accepted Answer

Joss Knight
Joss Knight on 14 Mar 2021
Edited: Joss Knight on 14 Mar 2021
VGG16 is a 1 GB model; if you inflate it to 3-D you're going to have very serious memory pressure. More to the point, the error you are getting says that the number of elements in some array is greater than the maximum allowed by gpuArray, which is 2147483647. The output of the first convolutional layer of VGG16 for a batch size of 10 is 224x224x64x10. If you trivially extend that to 3-D, the output becomes 224x224x224x64x10, which is 7,193,231,360 elements, more than 3x the largest allowed variable size. Even a batch size of 3 is still just over the limit; at a batch size of 2 that activation would fit, but it would still be roughly 5.5 GB in single precision, and you would run out of memory pretty quickly.
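As a rough check of those numbers (just illustrative arithmetic on the layer shape quoted above, not a toolbox API):

% element-count check against the gpuArray limit
maxElements = double(intmax('int32'));     % 2147483647
conv1of3D   = [224 224 224 64];            % first conv output, trivially inflated to 3-D
prod(conv1of3D) * 10 / maxElements         % ~3.35, over the limit at batch size 10
prod(conv1of3D) * 3  > maxElements         % true, still just over the limit at batch size 3
prod(conv1of3D) * 2  * 4 / 1e9             % ~5.75 GB for that one activation at batch size 2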
  2 comments
Juuso Korhonen
Juuso Korhonen on 15 Mar 2021
Thanks for the answer. Yeah, of course, I should've thought of that. So VGG16 isn't really suitable for 3-D without a heavy reduction in input size (or even further modifications to the architecture). ResNet will probably be my second choice, but I liked the high-resolution feature maps at the end of VGG16 because I'm going to use Grad-CAM. Do you have any suggestions for a suitable network? Maybe a shallow resnet18?
Joss Knight
Joss Knight on 15 Mar 2021
Sorry, this isn't my area of expertise. We have an example of 3-D semantic segmentation in our documentation that uses a U-Net architecture. Otherwise, you may have to read around in the deep learning community, or ask that specific question on MATLAB Answers.


More Answers (0)

Categories

Find more on Parallel and Cloud in Help Center and File Exchange.

Version

R2020b

