Direct GPU-to-GPU Communication with Parallel Computing Toolbox / SPMD
I am using spmd to enable parallel computing with multiple GPUs on one workstation. Basically, the GPUs do some calculation, broadcast their results, update their parameters, and iterate. The problem is that using labSend (actually gplus, in my case) to aggregate and broadcast the results is pretty slow: it first pulls the results off the GPUs into system memory, sends them to the other workers, and then uploads them to the other GPUs.
I understand that CUDA now has peer-to-peer memory access capability, so multiple GPUs can directly access each other's memory: http://www.nvidia.com/docs/IO/116711/sc11-multi-gpu.pdf This is accomplished with a function like cudaMemcpyPeerAsync().
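For reference, this is roughly what that looks like in plain CUDA C, outside MATLAB; the device indices and buffer size here are just placeholders:

#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    const size_t bytes = 1 << 20;          /* 1 MiB payload, arbitrary */
    float *d0 = NULL, *d1 = NULL;
    int can01 = 0, can10 = 0;

    /* Check that devices 0 and 1 can access each other's memory */
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    if (!can01 || !can10) {
        printf("Peer access not supported between devices 0 and 1\n");
        return 1;
    }

    cudaSetDevice(0);
    cudaMalloc((void**)&d0, bytes);
    cudaDeviceEnablePeerAccess(1, 0);      /* device 0 may access device 1 */

    cudaSetDevice(1);
    cudaMalloc((void**)&d1, bytes);
    cudaDeviceEnablePeerAccess(0, 0);      /* device 1 may access device 0 */

    /* Direct device-to-device copy; no staging through system memory */
    cudaMemcpyPeerAsync(d1, 1, d0, 0, bytes, 0);
    cudaDeviceSynchronize();

    cudaFree(d1);
    cudaSetDevice(0);
    cudaFree(d0);
    return 0;
}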
Thus, I would like to have a gplus() or labSend() that copies a gpuArray directly to the memory of another GPU on another worker.
Is this possible today? If not, is it something you are working on?
Thanks, Jon
Answers (1)
Edric Ellis on 27 Apr 2015
Edited: Edric Ellis on 27 Apr 2015
Unfortunately, as you observe, Parallel Computing Toolbox currently has no built-in means of achieving this. I believe peer-to-peer memory copying does work across multiple processes within a single node, which means you could use the GPU MEX interface to copy the data yourself.
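As a rough sketch of what that MEX route might look like (this is only an illustration, not a supported workflow: the file name, the use of CUDA IPC handles for cross-process sharing, and whether cudaIpcGetMemHandle accepts a pointer managed by MATLAB's GPU memory allocator are all assumptions), the exporting side could be a MEX function built against the GPU MEX API:

/* get_ipc_handle.cu -- hypothetical sketch: export a CUDA IPC handle for an
 * existing gpuArray so another worker process on the same node can map the
 * buffer and copy from it directly, without staging through host memory. */
#include "mex.h"
#include "gpu/mxGPUArray.h"
#include <cuda_runtime.h>
#include <string.h>

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (mxInitGPU() != MX_GPU_SUCCESS)
        mexErrMsgIdAndTxt("p2p:init", "Could not initialize the GPU MEX API.");

    /* Wrap the incoming gpuArray and get its raw device pointer. */
    const mxGPUArray *A = mxGPUCreateFromMxArray(prhs[0]);
    const void *devPtr  = mxGPUGetDataReadOnly(A);

    /* Ask the CUDA runtime for an IPC handle describing this allocation.
     * NOTE: this is assumed to work on the gpuArray's underlying buffer;
     * if it does not, copy into a fresh cudaMalloc buffer first. */
    cudaIpcMemHandle_t handle;
    cudaError_t err = cudaIpcGetMemHandle(&handle, (void*)devPtr);
    if (err != cudaSuccess) {
        mxGPUDestroyGPUArray(A);
        mexErrMsgIdAndTxt("p2p:ipc", "cudaIpcGetMemHandle failed: %s",
                          cudaGetErrorString(err));
    }

    /* Return the handle as a uint8 vector so MATLAB can pass it around. */
    plhs[0] = mxCreateNumericMatrix(1, sizeof(handle), mxUINT8_CLASS, mxREAL);
    memcpy(mxGetData(plhs[0]), &handle, sizeof(handle));

    mxGPUDestroyGPUArray(A);
}

The returned uint8 vector could then be shipped to another worker with labSend; that worker would open it in a second MEX file with cudaIpcOpenMemHandle and copy into its own GPU's memory with cudaMemcpyPeer or cudaMemcpyPeerAsync. Again, this is a sketch of the general technique rather than something Parallel Computing Toolbox provides.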