GPU Recommendation for Parallel Computing

2 views (last 30 days)
Josh Coval on 14 Feb 2019
Commented: Walter Roberson on 14 Feb 2019
Hi. I am trying to speed up a boosted tree learning algorithm via parfor. I have been able to get it running on AWS, but this hasn't proven to be an ideal solution for development work: AWS charges a lot for keeping the cluster online and takes a fair amount of time to bring it from offline to online. So I am interested in exploring the possibility of doing some of the development work on a local GPU cluster instead of AWS. Can you recommend a decent GPU (at ~$1000) for a problem that requires 100-500 iterations, each of which takes around 3 minutes to run in serial on a decent laptop, and that relies on around 200 MB of data being passed to and processed by each worker? Or is this not a sensible route to pursue given my problem and budget? I just don't have a good sense of how far such a problem could be parallelized using a single GPU (and whether the memory or the processing capacity of the individual GPU workers would be the binding constraint).
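For reference, the loop is structured roughly like this (a minimal sketch; growForestp and data are placeholder names standing in for the actual learner and dataset, and parallel.pool.Constant is one way to copy the ~200 MB dataset to each worker once rather than on every iteration):

pool = parpool('local');               % or an AWS cluster profile
dataC = parallel.pool.Constant(data);  % send the ~200 MB dataset to each worker once

nIter = 500;                           % 100-500 iterations in practice
models = cell(nIter, 1);
parfor k = 1:nIter
    % each iteration takes ~3 minutes in serial
    models{k} = growForestp(dataC.Value, k);
end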
  4 comments
Matt J on 14 Feb 2019
It's a good start, but we need to see the slow part of the code, presumably growForestp, if we're to recommend ways to optimize it.
Josh Coval on 14 Feb 2019
I'm afraid I may get into trouble if I post much of growForestp (and it also has a large number of lines). That said, I'm not really looking to optimize the growForestp code so much as to identify a good hardware setup that will let it run in parallel locally instead of on AWS. But I totally understand that this may not give you enough information to provide any additional guidance -- and I do appreciate your pointing out that a single local GPU will be a poor substitute.


Accepted Answer

Matt J on 14 Feb 2019
Edited: Matt J on 14 Feb 2019
Well, the one general thing I can say is that if you convert all of the data1...data5 variables to gpuArray objects, the manipulations done by growForestp will likely be considerably faster, assuming they consist of a lot of matrix arithmetic. In other words, you can use the GPU to gain speed in ways other than just deploying parallel instances of growForestp.
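A minimal sketch of that idea (the data1...data5 names follow the thread; the arithmetic below is just a stand-in for whatever growForestp actually does):

% Move the inputs to the GPU once; subsequent matrix arithmetic on them
% runs on the device with no further code changes.
data1 = gpuArray(data1);
data2 = gpuArray(data2);
% ... likewise for data3, data4, data5

% Matrix operations on gpuArray inputs now execute on the GPU, e.g.:
scores = data1 * data2.' + data3;   % placeholder computation

% Bring results back to host memory when needed:
scores = gather(scores);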
I don't know what kind of GPU resources AWS offers. Maybe each cluster node has its own GPU? If you want to implement this on your own local cluster sharing a single GPU, I would probably go with the GeForce GTX Titan X (which has 12 GB RAM) or the GeForce GTX 1080 Ti (which has 11 GB RAM). That should easily accommodate jobs from at least 20 parallel workers. Of course, I am not sure what the communication overhead would be from 20 workers trying to share/access a single GPU card...
  2 comments
Josh Coval on 14 Feb 2019
super helpful. thanks!
Walter Roberson on 14 Feb 2019
MathWorks recommends against sharing a GPU between parallel workers. The communication overhead of synchronization is one of the most expensive GPU operations.
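If more than one local GPU is available, the usual alternative is to dedicate one device per worker rather than sharing. A sketch of that pattern (gpuDevice, gpuDeviceCount, and labindex are standard Parallel Computing Toolbox calls):

% Give each worker its own GPU so that no device is shared between workers.
% Assumes the pool has no more workers than there are GPUs.
spmd
    gpuDevice(1 + mod(labindex - 1, gpuDeviceCount));
end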


More Answers (0)


Version

R2018a

