Using the GPU through the Parallel Computing Toolbox to speed up matrix inversion
Hello,
In my code I compute:
A=B\c;
B is an invertible matrix (14400x14400) and c is a vector.
Since B is large, this takes a very long time. I am running Windows 7 (64-bit) with 4 GB of RAM and 2 cores.
Would I be able to shorten the run time by using the GPU through the Parallel Computing Toolbox?
Would I need to write my own version of parallel matrix inversion, or has this problem been faced before so that there is a posted solution?
Thank you.
Accepted Answer
Jill Reese
2 Oct 2012
This functionality has been available on the GPU in the Parallel Computing Toolbox since MATLAB R2010b. To use it, at least one of the variables B and c must be transferred to the GPU before calling mldivide (\).
In the latest release of MATLAB (R2012b), you can use this code to solve your problem:
B = gpuArray(B);  % transfer B to the GPU
c = gpuArray(c);  % transfer c to the GPU
A = B \ c;        % solve the linear system on the GPU and store A on the GPU;
                  % this line of your existing code doesn't change at all
% You can continue to perform work on the GPU using A, B, and c, or transfer the result back:
A = gather(A);    % transfer A back to the MATLAB workspace
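For context, here is a minimal self-contained sketch of the same workflow with a rough CPU-versus-GPU timing comparison. The random test system, the smaller matrix size, and the tic/toc timing are illustrative assumptions only; in your case B and c come from your own code, and the actual speedup depends on your GPU and on available memory.

% Minimal sketch, assuming a random dense test system (not the original B and c).
n = 4000;                        % smaller than the 14400x14400 in the question,
                                 % just to keep the demo light on memory

B = rand(n) + n*eye(n);          % diagonally dominant, so the system is well conditioned
c = rand(n, 1);

tic;
A_cpu = B \ c;                   % solve on the CPU
tCPU = toc;

Bg = gpuArray(B);                % transfer B to the GPU
cg = gpuArray(c);                % transfer c to the GPU
tic;
Ag = Bg \ cg;                    % solve on the GPU
A_gpu = gather(Ag);              % gather forces the GPU work to finish and returns the result
tGPU = toc;

fprintf('CPU: %.2f s   GPU (incl. gather): %.2f s   max abs diff: %g\n', ...
        tCPU, tGPU, max(abs(A_cpu - A_gpu)));

Note that the GPU timing above includes the final gather; the transfer of B and c to the GPU is also a real cost, so the GPU pays off most when the matrix is large or when you keep working on the GPU after the solve.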