Parallel Computing in Neural Networks is not using all the workers in 2018b?

8 views (last 30 days)
There was a similar question here, but I'm unable to get the parallel pool to use my CPU cores when using a GPU. My command is:
my_net = train(my_net,Xs,Ts,Xi,Ai,'useParallel','yes','useGPU','yes','showResources','yes');
Yet when starting the pool the response is:
NOTICE: Jacobian training not supported on GPU. Training function set to TRAINSCG.
Computing Resources:
Parallel Workers:
Worker 1 on w541, GPU device #1, Quadro K1100M
Worker 2 on w541, Unused
Worker 3 on w541, Unused
Worker 4 on w541, Unused
Worker 5 on w541, Unused
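In case it helps, this is a rough check I can run on the pool (just a sketch using spmd and gpuDevice) to see which GPU each worker would pick up; since they are all on the same machine, they all report the same Quadro K1100M:
pool = gcp;                             % current (or newly started) parallel pool
spmd
    if gpuDeviceCount > 0
        g = gpuDevice;                  % GPU this worker selects (same physical card for every worker here)
        fprintf('Worker %d: %s\n', labindex, g.Name);
    else
        fprintf('Worker %d: no GPU visible\n', labindex);
    end
end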

Answers (2)

Joss Knight on 9 Jan 2019
I believe this is the designed behaviour. If multiple workers were to share the same GPU, you would get a performance reduction, not an improvement.
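If you just want the GPU and don't need the pool at all, something like this should work (a sketch using the documented 'only' value for 'useGPU', with the same arguments from your question):
my_net = train(my_net,Xs,Ts,Xi,Ai,'useGPU','only','showResources','yes');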
4 comments
Joss Knight on 12 Jan 2019
I am not familiar with the implementation for shallow networks, but for deep learning, even if you filled the GPU memory and gave each CPU the minimum amount of work, the GPU would end up waiting for the CPUs to finish to synchronize each iteration, so the CPUs would just slow things down.
Walter Roberson on 13 Jan 2019
I notice that there is no second GPU being allocated, which leads me to suspect that the Quadro K1100M is the only GPU in the system. I wonder if it is driving a display? If so, it would be in WDDM mode and would need short work timeouts, forcing it to synchronize with the CPUs often relative to the likely total training time. If it is not driving a display and is in TCC mode, that factor is reduced... but of course the time it spends dedicated to processing work from one CPU is time it is not processing work from a different CPU.
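A quick way to check (a sketch; the KernelExecutionTimeout property of the gpuDevice object is typically true when the card is driving a display under WDDM):
g = gpuDevice;                        % currently selected GPU
disp(g.Name)
disp(g.KernelExecutionTimeout)        % true usually means display/WDDM mode; false suggests TCC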

D Hanish on 17 Jan 2020
You should set 'useGPU' to 'no'. It has been some time, so you have probably figured that out already. MATLAB uses one GPU per core, and since there is only one GPU, it will also only use one core (pretty much as Walter Roberson said).
On my system (8-core Xeon + 1 GPU), it turns out to be much slower to use one core plus the GPU with 1 worker than to use 'useParallel' alone, which gives me 8 workers on 8 real cores. For you, 'useParallel' without 'useGPU' will let you use 5 CPU workers and Jacobian training. Be careful: MATLAB (this may be fixed in R2019b) must be restarted before it will use all the cores again.
[net,tr] = train(net,X,T,'useParallel','yes','useGPU','no','showResources','yes','CheckpointFile','MyCheckpoint','CheckpointDelay',600);
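And a sketch of how I make sure the pool actually has all the physical cores before calling train (assuming the default 'local' profile; feature('numcores') is undocumented but widely used):
pool = gcp('nocreate');                        % current pool, if any (does not start one)
if ~isempty(pool) && pool.NumWorkers < feature('numcores')
    delete(pool);                              % drop an undersized pool
    pool = [];
end
if isempty(pool)
    parpool('local', feature('numcores'));     % one worker per physical core
end
% ...then call train as above with 'useParallel','yes'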
