Freeing GPU memory associated with CUDA kernels

5 views (last 30 days)
Jeremy Dillon on 3 Jan 2013
I have some MATLAB code that consists of 5 CUDA kernels followed by further processing using MATLAB functions (FFT, etc.). Kernels 3 and 4 are executed on the order of 10 times inside a MATLAB for loop (the algorithm is inherently sequential). The CUDA kernels produce a 100 MB gpuArray. On a GTX 560 Ti with 1 GB of memory, I was getting out-of-memory errors after CUDA kernel execution despite clearing every gpuArray except the one needed for further processing. The "solution" was to also clear the parallel.gpu.CUDAKernel variables. This freed hundreds of MB on the GPU and permitted further processing on the GTX 560 Ti. I have to re-create the CUDAKernel objects for each iteration, but this doesn't seem to take much time.
Is there any other way to release GPU memory associated with parallel.gpu.CUDAKernel objects?
P.S. The problem has already been decomposed into smaller pieces to limit memory consumption. Roughly 1 GB of raw data is passed through the GPU in chunks of 100 MB.
P.P.S. The code ran fine on a GTX 660M with 2 GB of memory.
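For reference, here is a minimal sketch of the workaround described above. The kernel file names, argument lists, block sizes, and the stand-in data are placeholders, not the actual code:

chunk = gpuArray.rand(1e6, 1, 'single');   % stand-in for one 100 MB data chunk
for iter = 1:10
    % Re-create the kernel objects on every iteration of the sequential loop ...
    kern3 = parallel.gpu.CUDAKernel('kernel3.ptx', 'kernel3.cu');
    kern4 = parallel.gpu.CUDAKernel('kernel4.ptx', 'kernel4.cu');
    kern3.ThreadBlockSize = [256 1 1];
    kern4.ThreadBlockSize = [256 1 1];

    % ... run them on the current chunk ...
    chunk = feval(kern3, chunk, numel(chunk));
    chunk = feval(kern4, chunk, numel(chunk));

    % ... then clear the kernel objects so the GPU memory associated with them is released.
    clear kern3 kern4
end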

Answers (3)

Jason Ross on 3 Jan 2013
  1 comment
Jeremy Dillon on 3 Jan 2013
Thanks for the suggestion, but reset clears the gpuArray that I'm trying to process. I suppose I could gather the data on the CPU, reset the GPU, and transfer the data back to the GPU, but that seems inefficient.
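For what it's worth, the gather/reset/re-upload alternative mentioned above would look roughly like this (result is a placeholder name for the gpuArray being kept):

hostCopy = gather(result);      % copy the gpuArray to keep back to host memory
reset(gpuDevice);               % frees everything on the device, including CUDAKernel objects
result   = gpuArray(hostCopy);  % transfer the data back to the GPU

It avoids re-creating the kernels, but it costs two full host-device transfers of the 100 MB array.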



Narfi on 29 Jan 2013
Jeremy,
How large is the PTX for the kernels you are loading via the CUDAKernel interface?
The CUDAKernel object in MATLAB doesn't allocate any extra GPU memory beyond what is strictly required to store the kernel assembly, so this would indicate that you have very large kernels. If that's the case, the workaround you have found is the only one I can think of.
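If it helps, a quick way to report the on-disk PTX sizes (assuming the .ptx files sit in the current folder) is:

d = dir('*.ptx');
for k = 1:numel(d)
    fprintf('%-20s %6.1f kB\n', d(k).name, d(k).bytes/1024);
end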
Best,
Narfi
  1 comment
Jeremy Dillon on 29 Jan 2013
Hi Narfi,
There are five CUDA kernels with PTX files ranging in size from 6 to 54 kB. These do not seem very large to me; do you agree? It's a relatively small amount of code, but the kernels do process a large quantity of data.
Regards,
Jeremy



Narfi on 29 Jan 2013
Jeremy,
In that case, I am puzzled as to what is going on. I would recommend that you contact technical support and give them steps to reproduce the problem. At a minimum, they will need instrumentation of the memory usage to figure out where the memory is going.
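One simple way to instrument the memory usage is to read the GPU device's FreeMemory property around each kernel creation and call; the file and variable names below are placeholders:

g = gpuDevice;                              % currently selected GPU
in = gpuArray.rand(1e6, 1, 'single');       % stand-in for the real input chunk
before = g.FreeMemory;

kern = parallel.gpu.CUDAKernel('kernel3.ptx', 'kernel3.cu');   % placeholder file names
out  = feval(kern, in, numel(in));

after = g.FreeMemory;
fprintf('Kernel creation and call consumed %.1f MB of GPU memory\n', (before - after)/2^20);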
Best,
Narfi
