Use multiple GPUs for functions

Mantas Vaitonis on 2 Oct 2018
Edited: Matt J on 3 Oct 2018
Dear All, my previous question was not clear, so I have tried to simplify it. I have two GPU devices at my disposal (a GeForce GTX 1070 Ti and a GeForce GTX 1060 6GB) and would like to parallelize my calculations across both. Let's say I have a 3D gpuArray and want to pass the data in chunks to both GPUs (the function in my real code is more complicated). This is an example of what I am trying to achieve, and yes, it does not work.
clear;
delete(gcp('nocreate'));
nGPUs = gpuDeviceCount();
parpool('local', nGPUs);
d1 = rand(10,10,10);
d = gpuArray(d1);
parfor i = 1:nGPUs
    c1 = zeros(10,10,10);
    c = gpuArray(c1);
    for j = 1:10
        c(:,:,j) = d(:,:,j)*2;
    end
end
der = c;
It gives a temporary-variable error.

Accepted Answer

Matt J on 2 Oct 2018
Edited: Matt J on 2 Oct 2018
Is the question, then, why you get the temporary-variable error? The reason is that the variable c is created inside the parfor loop. It is therefore a temporary variable, meaning it has no life after the parfor loop. It is both forbidden and illogical to use a temporary variable after the loop, as you have done at the line
der=c;
This is because the parfor loop maintains several parallel versions of c: every worker has its own version, which might end up holding a different value at the end of the loop depending on the operations performed on it. So which of these versions should be assigned to der?
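As a minimal sketch (not part of the original answer), the usual workaround is to store each worker's result in a sliced output variable, such as a cell array, which does remain valid after the loop:
results = cell(1, nGPUs);          % sliced output: valid after the parfor loop
parfor i = 1:nGPUs
    c = 2*gpuArray.rand(10,10,10); % c is still temporary and dies with the loop
    results{i} = gather(c);        % sliced assignment, one cell per iteration
end
der = results{1};                  % explicitly pick which worker's result to use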
  7 comments
Mantas Vaitonis on 3 Oct 2018
Yes, I do understand that these calculations run in parallel and that the same for j=1:10 loop is processed on both GPUs. But what would be the way if my variable were d1 = rand(1e8,1e8,1e8) and the loop were for j = 1:1e8, split between the two GPUs so that one runs j = 1:5e7 and the other j = 5e7+1:1e8? Or is this not suitable for GPUs? I am able to pass all the data to one GPU, but passing it to two GPUs should speed up the processing.
Matt J on 3 Oct 2018
Edited: Matt J on 3 Oct 2018
One way,
c1 = cell(1, nGPUs);                          % preallocate the host-side output cell
d1Cell = {d1(:,:,1:5e7), d1(:,:,5e7+1:end)};  % split the data along the third dimension
parfor i = 1:nGPUs
    gpuDevice(i);                             % bind this worker to GPU i
    d = gpuArray(d1Cell{i});                  % move this chunk onto GPU i
    c = zeros(size(d), 'gpuArray');           % allocate the result on the GPU
    for j = 1:size(d,3)
        c(:,:,j) = d(:,:,j) + i - j;          % a fake i-dependent operation
    end
    c1{i} = gather(c);                        % bring the result back to host memory
end

