fsolve and GPU Computation

13 views (last 30 days)
Sven on 16 Sep 2017
Edited: Matt J on 17 Sep 2017
Can fsolve be used with GPU computation, or can it profit internally from GPU computation?
If not, is it known whether fsolve will be made compatible with GPU computation in the future?
Does it use the GPU automatically under certain conditions?
Or is there a way I could modify fsolve so that it runs on a GPU?
I am asking because I have multidimensional equations on large grids that should be highly parallelizable. I could try to parallelize within the function being solved, but I suspect it would be more efficient if GPU usage could be established at the fsolve level.
Thank you in advance for any advice.

Answers (1)

Matt J on 16 Sep 2017
Edited: Matt J on 16 Sep 2017
I could try to parallelize within the function being solved, but I suspect it would be more efficient if GPU usage could be established at the fsolve level.
No. The greatest benefit will come from GPU-optimizing your objective function and Jacobian calculations. The heavy internal computations done by FSOLVE are mainly linear equation solving and other matrix algebra operations. You can best accelerate those computations by supplying your Jacobian in sparse form, if applicable.
If you are using the trust region algorithm, then you can also use the 'JacobianMultiplyFcn' option appropriately. You can implement that with your own gpuArray operations, but I think sparsity, where it can be applied, will have more of an impact.
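To illustrate the sparse-Jacobian suggestion, here is a minimal sketch adapted from the standard fsolve documentation pattern for a tridiagonal system. The system itself (`nlsf`) is just an example, not Sven's actual equations; the point is that the objective function returns the Jacobian as a sparse matrix, so fsolve's trust-region algorithm can use sparse linear algebra internally.

```matlab
function sparse_jacobian_sketch
    % Solve an example tridiagonal nonlinear system with a sparse Jacobian.
    n  = 1000;
    x0 = -ones(n,1);
    opts = optimoptions('fsolve', ...
        'Algorithm','trust-region', ...
        'SpecifyObjectiveGradient',true);  % objective also returns the Jacobian
    x = fsolve(@nlsf, x0, opts);
end

function [F,J] = nlsf(x)
    % Example system: F_i = (3 - 2*x_i)*x_i - x_{i-1} - 2*x_{i+1} + 1
    n = length(x);
    F = zeros(n,1);
    i = 2:(n-1);
    F(i) = (3-2*x(i)).*x(i) - x(i-1) - 2*x(i+1) + 1;
    F(1) = (3-2*x(1)).*x(1) - 2*x(2) + 1;
    F(n) = (3-2*x(n)).*x(n) - x(n-1) + 1;
    if nargout > 1
        % Assemble the tridiagonal Jacobian directly in sparse form
        d = -4*x + 3*ones(n,1);
        D = sparse(1:n,   1:n,   d,             n, n);  % main diagonal
        C = sparse(1:n-1, 2:n,   -2*ones(n-1,1), n, n); % superdiagonal
        E = sparse(2:n,   1:n-1, -ones(n-1,1),  n, n);  % subdiagonal
        J = C + D + E;
    end
end
```

For very large n, the sparse Jacobian keeps the per-iteration linear solves cheap, which is usually where most of fsolve's internal time goes.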
3 comments

Sven on 17 Sep 2017
Thank you for this advice; I have given sparsity too little thought. Due to the high dimensionality, I think any improvement is valuable, so I will see where I can apply GPU usage as well.
I will also have a look at the trust-region algorithm and check whether I can implement JacobianMultiplyFcn accordingly.
I just thought that if the GPU could be used at the fsolve level, the overhead costs of passing the gpuArray back and forth would be reduced. I have heard these overhead costs can be an issue in general.
Matt J on 17 Sep 2017
Edited: Matt J on 17 Sep 2017
It would be better if fsolve allowed you to return gpuArrays from the objective function code; that way there would be no need for any CPU-GPU transfers. On the other hand, you would need a huge number of equations for the transfer of the objective function vector to slow you down significantly.
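Since fsolve itself works with ordinary CPU arrays, the practical pattern today is to do the heavy element-wise work on the GPU inside the objective function and gather the result before returning. A minimal sketch (the residual formula is illustrative, and Parallel Computing Toolbox is assumed):

```matlab
function F = gpuObjective(x)
    % Move the iterate to the GPU, evaluate the residual there,
    % then bring it back for fsolve.
    xg = gpuArray(x);            % CPU -> GPU transfer
    Fg = (3 - 2*xg).*xg + 1;     % element-wise work runs on the GPU
    F  = gather(Fg);             % GPU -> CPU transfer back for fsolve
end
```

The two transfers per evaluation are exactly the overhead discussed above; for element-wise residuals they only pay off when the per-evaluation work is large relative to the vector being moved.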

