How to distribute computation on GPU vector-wise?
Hi,
I am trying to accelerate a specific function by assigning each row of a matrix to one GPU core, having that core process the row and return a new matrix. Let's say my input matrix is n by m: I want the computation to be distributed over n cores, with each of the n cores returning a matrix of size k by m. The computation applied to each row is quite complicated, but it only requires functions supported by the GPU.
As I understand it, arrayfun can only be used for single-element operations, not arrays. The individual elements in one row of the input matrix, however, cannot be computed independently. I think pagefun and bsxfun also won't work, because they do not support user-written functions. Is there any way to proceed like this in MATLAB without having to implement the entire code in CUDA?
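To make this concrete, here is a serial sketch of the pattern I want to parallelize (processRow is a placeholder for my actual row computation, not real code):

```matlab
% Serial reference of the desired pattern: each row of the n-by-m
% input A produces a k-by-m block of the output. The goal is to run
% the n iterations of this loop in parallel on the GPU.
A = rand(n, m, 'gpuArray');
out = zeros(n*k, m, 'gpuArray');
for i = 1:n
    % processRow is a placeholder for the complicated per-row function
    out((i-1)*k+1 : i*k, :) = processRow(A(i, :));  % k-by-m per row
end
```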
Thanks!
Answers (2)
Joss Knight
on 20 Apr 2017
You can loop over and read multiple entries in an input array (as an up-value variable) inside arrayfun, but you can't loop over and assign to elements of an output array. There is no general way to do this in MATLAB code.
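As a sketch of that up-value pattern (illustrative only; the row-sum computation stands in for whatever per-element reduction your function needs):

```matlab
function s = gpuRowSums(A)
% Sum each row of gpuArray A, launching one arrayfun thread per row.
% A is captured as an up-value inside the nested function: each thread
% may READ arbitrary elements A(i,j) in a loop, but it can only return
% scalar outputs -- it cannot assign into elements of an output array.
    [n, m] = size(A);
    s = arrayfun(@rowSum, gpuArray.colon(1, n)');
    function v = rowSum(i)
        v = 0;
        for j = 1:m
            v = v + A(i, j);   % indexed read into the up-value A
        end
    end
end
```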
Your best bet is to tell us what you're trying to do, and we can show how a combination of vectorized MATLAB functions and possibly pagefun can give you what you want without you having to write custom CUDA.
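For instance, if each row's k-by-m result can be phrased as a matrix product, pagefun can batch it across all rows (a hedged sketch; the k-by-1 operator B is an assumption standing in for your actual per-row computation):

```matlab
n = 500; m = 64; k = 8;
A = rand(n, m, 'gpuArray');
B = rand(k, 1, 'gpuArray');         % assumed per-row linear operator

% View each row of A as its own 1-by-m page, giving n pages in total.
pages = reshape(A.', 1, m, n);

% pagefun broadcasts the single-page B against all n pages:
% (k-by-1) * (1-by-m) = k-by-m result per row, i.e. k-by-m-by-n overall.
out = pagefun(@mtimes, B, pages);
```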
Hans-Martin Schwab
on 20 Apr 2017
Edited: Hans-Martin Schwab on 20 Apr 2017