Out of memory error when using validation while training a DAGNetwork

9 views (last 30 days)
Hi,
I'm training a resnet18 network for semantic segmentation on my own data, based on this tutorial: https://nl.mathworks.com/help/vision/examples/semantic-segmentation-using-deep-learning.html .
The training is done on a Threadripper 1950X with 64 GB of RAM and a GTX 1050 Ti with 4 GB of VRAM (I use a mini-batch size of 4 so it fits in GPU memory). MATLAB R2020a is used.
The dataset consists of images of size [284 481 3], with 10k images in the training set and ~3k images in the validation set. Both are stored in pixelLabelImageDatastore objects. When I train the network without validation, memory usage in Task Manager shows around 6 GB used out of 64 GB, so no problems there.
However, when I attempt to train the network with validation, memory usage skyrockets to 64 GB, with approximately 30 minutes of swapping to disk every time the validation step happens. After a couple of epochs, MATLAB then throws an 'out of memory' error. I even increased the size of the Windows swap file to 500 GB, with the same result; it only takes a couple of extra epochs before crashing.
What is the reason this is happening only with the validation data, and what can be done to counteract it? I thought that using a datastore meant only the data required at a given moment is read into memory, rather than the entire set? The total file size of all the labeled and input images in my dataset is only ~300 MB.
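For reference, the setup boils down to roughly the following (a minimal sketch; variable names and every option other than the mini-batch size are assumptions, not my exact code):

% classNames, labelIDs and lgraph come from the tutorial and are omitted here
imdsTrain = imageDatastore('trainImages');                            % assumed folder layout
pxdsTrain = pixelLabelDatastore('trainLabels', classNames, labelIDs);
imdsVal   = imageDatastore('valImages');
pxdsVal   = pixelLabelDatastore('valLabels', classNames, labelIDs);
dsTrain   = pixelLabelImageDatastore(imdsTrain, pxdsTrain);
dsVal     = pixelLabelImageDatastore(imdsVal, pxdsVal);
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 4, ...              % fits in the 4 GB of GPU memory
    'ValidationData', dsVal, ...         % removing this line avoids the memory blow-up
    'Shuffle', 'every-epoch');
net = trainNetwork(dsTrain, lgraph, options);   % lgraph: the resnet18-based segmentation layer graph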
Thanks for any feedback!

Accepted Answer

Anthony Schenck
Anthony Schenck on 13 May 2022
This issue appears to be fixed in MATLAB R2022a.

More Answers (1)

Harsha Priya Daggubati
Harsha Priya Daggubati on 31 Jul 2020
Hi,
This is the result of a bug that causes the GPU to run out of memory as the validation set size grows. The issue is not caused by an increased training set size.
One workaround is to split the training set into smaller groups (and thereby break up the validation set as well), loop through the groups, and have training on each successive group pick up where the previous one left off, as sketched below.
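A rough sketch of that loop, with assumed names, using partitionByIndex to take a subset of the pixelLabelImageDatastore and layerGraph to continue from the previously trained network:

numChunks = 5;                                 % assumption: pick to fit memory
lgraph    = lgraphInitial;                     % the untrained segmentation layer graph
for k = 1:numChunks
    idx     = k:numChunks:dsTrain.NumObservations;   % every numChunks-th training image
    dsChunk = partitionByIndex(dsTrain, idx);        % the validation datastore can be split the same way
    net     = trainNetwork(dsChunk, lgraph, options);
    lgraph  = layerGraph(net);                 % next chunk continues from these weights
end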
Another workaround is to reduce the size of the validation set until the GPU no longer runs out of memory, while continuing to train on the full training set.
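For example (again a sketch with assumed names), validating on a random subset of the validation images:

numValKeep = 500;                              % assumption: tune until memory stays stable
idxKeep    = randperm(dsVal.NumObservations, numValKeep);
dsValSmall = partitionByIndex(dsVal, idxKeep);
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 4, ...
    'ValidationData', dsValSmall);             % other options omitted for brevity
net = trainNetwork(dsTrain, lgraph, options);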
  7 comments
Anthony Schenck
Anthony Schenck on 8 Nov 2021
Again, the issue I'm having is NOT related to GPU memory.
It's the system memory (RAM) that is being filled to the max, to the point that 64 GB of system memory is insufficient to train a resnet18 network with validation enabled.
Jari Manni
Jari Manni on 23 Feb 2022
I have run into the same issue as well when doing semantic segmentation experiments. I have a validation dataset of 43,034 images, and MATLAB attempts to request ~2 TB of RAM, which obviously fails.
MATLAB: R2021a
Caused by:
Error using zeros
Requested 320x4096x10x43034 (2101.3GB) array exceeds maximum array size preference (251.6GB). This might cause MATLAB to become unresponsive.
From the dimensions of the requested array, the last of which matches the number of validation images, I gather that this RAM buffer is probably used to store per-image data so that some mean statistic can be computed across the entire validation set (maybe mean accuracy or loss? not sure); a quick check of the arithmetic is sketched below.
Anyway, this is a very big issue and obviously needs some action from the crew. Reducing the validation set size is a compromise, NOT A FIX!
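As a sanity check on those numbers (assuming 4 bytes per single-precision element), the requested size and the per-image cost work out as:

sz         = [320 4096 10 43034];              % dimensions from the error message
totalGB    = prod(sz) * 4 / 2^30;              % ~2101.3, matching the reported 2101.3GB
perImageMB = prod(sz(1:3)) * 4 / 2^20;         % ~50 MB of buffer per validation image
fprintf('total %.1f GB, %.1f MB per validation image\n', totalGB, perImageMB);
% so the requested memory grows linearly with the number of validation images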
