GPU parallel computing error with Quadro RTX 5000
Xinbing Liu
on 31 Mar 2019
Commented: Walter Roberson on 5 Apr 2019
Hello,
I've been using a Tesla GPU card with 6GB RAM under R2012a for a number of years, and it has worked well. But now I need to work with larger matrices (e.g., an 8192x8192 fft2) that the good old Tesla card can't handle anymore. So recently we bought a new Quadro RTX 5000 with 16GB RAM. The 10.1 NVidia driver installed successfully.
Then I ran the following test code (gputest.m):
M = 4095;
a = rand(M); %Create M*M matrix of random numbers
b = gpuArray(a); %Send the array to GPU memory
c = 2*b; %Perform some operation
It runs successfully. But when I change M to 4096:
M = 4096;
a = rand(M); %Create M*M matrix of random numbers
b = gpuArray(a); %Send the array to GPU memory
c = 2*b; %Perform some operation
>> gputest
Error using parallel.gpu.GPUArray/mtimes
An unexpected error occurred trying to launch a kernel. The CUDA error
was: CUDA_ERROR_INVALID_VALUE.
Error in gputest (line 8)
c = 2*b; %Perform some operation
It seems the largest matrix the new Quadro RTX 5000 can handle is 4095x4095. I can still transfer larger matrices to the GPU (b = gpuArray(a) with M > 4095 gives no errors); I just can't perform any operations on them. Any ideas? Are there some settings for the GPU driver I need to change?
Thank you.
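For reference, here is how I query the device from MATLAB. This is just a quick sketch; the exact set of properties reported by gpuDevice can differ between releases, so some fields shown here may not appear in R2012a:

```matlab
g = gpuDevice;             % select and query the current GPU
disp(g.Name)               % device name, e.g. 'Quadro RTX 5000'
disp(g.ComputeCapability)  % architecture generation; Turing cards report '7.5'
disp(g.ToolkitVersion)     % CUDA toolkit version MATLAB was built against
disp(g.FreeMemory)         % free device memory in bytes
```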
4 comments
Accepted Answer
Joss Knight
on 4 Apr 2019
I'm amazed this works at all, frankly, since R2012a is about five GPU architecture generations old. Anyway, there are known issues with Turing cards and the PTX JIT compilation path. I'm afraid you're going to have to upgrade MATLAB for this to work.
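You can check whether your release actually supports the card's architecture before chasing driver settings. A minimal sketch, assuming the DeviceSupported and ComputeCapability properties are present in your release (consult the gpuDevice documentation for your version):

```matlab
g = gpuDevice;  % query the currently selected GPU
fprintf('Device: %s (compute capability %s)\n', g.Name, g.ComputeCapability);
fprintf('Driver version %.1f, toolkit version %.1f\n', ...
    g.DriverVersion, g.ToolkitVersion);
% DeviceSupported is false when the architecture is newer than the
% CUDA toolkit this MATLAB release was built against.
if ~g.DeviceSupported
    warning('This GPU is not supported by this MATLAB release.');
end
```

When the device is unsupported, memory transfers may still succeed (as you saw with gpuArray), but kernel launches fail because the PTX JIT cannot compile for the newer architecture.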
4 comments
Joss Knight
on 5 Apr 2019
You can't even get a 'Tesla' card any more, not from NVIDIA. They were discontinued years ago. And NVIDIA don't support them or the generation after (Fermi).
Your best bet is to buy a Kepler card, e.g. a Tesla K40 (they started naming the entire series Tesla after the first lot of cards that supported double precision arithmetic). That should still work with R2012a. Probably.
Walter Roberson
on 5 Apr 2019
Joss is distinguishing above between the "Tesla" architecture and the "Tesla" brand. NVIDIA markets their non-graphics cards under the "Tesla" brand and their workstation cards under the "Quadro" brand, but these days the architecture of a Tesla-branded card might even be Volta.