GPU Out of memory on device.

70 views (last 30 days)
caesar
caesar on 16 Mar 2018
Commented: Thyagharajan K K on 28 Nov 2021
I am using the Neural Network Toolbox for deep learning, and I keep hitting this chronic problem when doing classification. My DNN model is already trained, and I receive the same error at classification time even though I am using an HPC cluster with an NVIDIA GeForce 1080 as well as my own machine with a GeForce 1080 Ti. The error is:
Error using nnet.internal.cnngpu.convolveForward2D Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If the problem persists, reset the GPU by calling 'gpuDevice(1)'.
Error in nnet.internal.cnn.layer.util.Convolution2DGPUStrategy/forward (line 14)
Error in nnet.internal.cnn.layer.Convolution2D/doForward (line 332)
Error in nnet.internal.cnn.layer.Convolution2D/forwardNormal (line 278)
Error in nnet.internal.cnn.layer.Convolution2D/predict (line 124)
Error in nnet.internal.cnn.DAGNetwork/forwardPropagationWithPredict (line 236)
Error in nnet.internal.cnn.DAGNetwork/predict (line 317)
Error in DAGNetwork/predict (line 426)
Error in DAGNetwork/classify (line 490)
Error in Guisti_test_script (line 56)
parallel:gpu:array:OOM
Has anyone faced the same problem before?
PS: my test set contains 15,000 images.
  1 comment
Thyagharajan K K
Thyagharajan K K on 28 Nov 2021
I had a similar problem. The main cause is a large number of learnable parameters. You can reduce the number of nodes in the fully connected layer, reduce the size of the activation map just before the fully connected layer by increasing the stride, or do both.
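As a rough illustration of that suggestion (the layer sizes below are made up, not taken from any network in this thread), increasing a convolution's stride shrinks the activation map that reaches the fully connected layer, which in turn shrinks its weight matrix:

```matlab
% Hypothetical sketch: a stride of 2 halves each spatial dimension, so the
% activation map reaching fullyConnectedLayer is ~4x smaller, and that
% layer's weight matrix shrinks accordingly.
layers = [
    imageInputLayer([224 224 3])
    convolution2dLayer(3, 32, 'Stride', 2, 'Padding', 'same')
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    fullyConnectedLayer(64)   % fewer nodes here also cuts parameters
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];
```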


Accepted Answer

Joss Knight
Joss Knight on 17 Mar 2018
Reduce the 'MiniBatchSize' option to classify.
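A minimal sketch of what that looks like ('net' and 'testImds' are placeholder names for the trained network and an imageDatastore of test images). Note that 'MiniBatchSize' is an option of classify itself, so the already-trained model can be used as-is:

```matlab
% 'MiniBatchSize' controls how many images go to the GPU per batch at
% inference time; lowering it reduces peak GPU memory. No retraining needed.
YPred = classify(net, testImds, 'MiniBatchSize', 16);  % default is 128
```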
  2 comments
caesar
caesar on 17 Mar 2018
Well, the model I am trying to use is already trained, so how can I reduce the 'MiniBatchSize'? Should I retrain the model with a reduced 'MiniBatchSize' in order to be able to run classification?


More Answers (3)

Khalid Labib
Khalid Labib on 19 Feb 2020
Edited: Khalid Labib on 13 May 2020
In the "Single Image Super-Resolution Using Deep Learning" MATLAB demonstration:
I tried clearing my GPU memory (gpuDevice(1)) after each iteration and changed 'MiniBatchSize' to 1 in the "superResolutionMetrics" helper function, as shown in the following line, but neither fixed the GPU out-of-memory error:
residualImage = activations(net, Iy, 41, 'MiniBatchSize', 1);
1) To work around this problem you can use the CPU instead:
residualImage = activations(net, Iy, 41, 'ExecutionEnvironment', 'cpu');
I think the problem is caused by the high resolution of the test images, e.g. the second image "car2.jpg", which is 3504 x 2336.
2) A better solution is to use the GPU for low-resolution images and the CPU for high-resolution images by replacing "residualImage = activations(net, Iy, 41)" with:
sx = size(I);
if sx(1) > 1000 || sx(2) > 1000  % try lower thresholds if it still fails, e.g. 500
    residualImage = activations(net, Iy, 41, 'ExecutionEnvironment', 'cpu');
else
    residualImage = activations(net, Iy, 41);
end
3) The most efficient solution is to divide the image into smaller non-overlapping blocks (tiles), such that each block is at most about 1024 pixels in either dimension, depending on your GPU. You can then run the CNN on each block on the GPU without errors, and finally combine the blocks to reconstruct the full-size result.
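A minimal sketch of that tiling idea, assuming (as in the super-resolution example) that layer 41's activations have the same spatial size as the grayscale input Iy. In practice, overlapping tiles may be needed to avoid visible seams at block borders:

```matlab
% Hypothetical sketch: run the network tile by tile on the GPU, then
% stitch the per-tile residuals back into a full-size image.
tile = 1024;                       % adjust to what your GPU can hold
[h, w] = size(Iy);
residualImage = zeros(h, w, 'single');
for r = 1:tile:h
    for c = 1:tile:w
        rows = r:min(r + tile - 1, h);
        cols = c:min(c + tile - 1, w);
        residualImage(rows, cols) = activations(net, Iy(rows, cols), 41);
    end
end
```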
  1 comment
Rui Ma
Rui Ma on 22 Apr 2020
Edited: Rui Ma on 22 Apr 2020
Thanks! It works, although it is a little slow.



marie chevalier
marie chevalier on 4 Jun 2019
Edited: marie chevalier on 4 Jun 2019
Hi,
I have a similar issue here, and the link Joss gave doesn't really help me understand how to fix it.
I am working on the "Single Image Super-Resolution Using Deep Learning" MATLAB demonstration.
I would like to use the pretrained network on my own images.
I get a similar error message when arriving at the line:
Iresidual = activations(net,Iy_bicubic,41);
I tried running gpuDevice(1) and it didn't help.
I also tried changing 'MiniBatchSize' to 32 instead of the default 128 and got the same error.
Does anyone understand how to fix this problem?
  3 comments
marie chevalier
marie chevalier on 26 Jun 2019
It still doesn't work. I'm afraid this is due to something else.
I'm out of ideas at the moment, I did a little cleanup around my computer just to be safe but it didn't change much.
I'll try re-downloading the example again, maybe I changed something in it without noticing.
Akash Tadwai
Akash Tadwai on 17 Dec 2019
@Joss Knight, it still doesn't work in my case. I was training AlexNet with a mini-batch size of 1, but MATLAB still gives the same error.



Alvaro Lopez Anaya
Alvaro Lopez Anaya on 7 Nov 2019
In my case I had a similar problem, even though I have a GTX 1080 Ti.
As Joss said, reducing the 'MiniBatchSize' solved my problem. It's all about the batch-size options.
