Hardware resources for training a deep neural network

Hello,
I would like to train using multiple CPUs for each call of the function trainNetwork().
Following the documentation, I have set ExecutionEnvironment = 'parallel' in trainingOptions to use the local parallel pool.
However, during training the output shows "Training on single CPU", which I suppose indicates that only one CPU is being used.
Is multiple-CPU training supported? If so, do you have any suggestion on how to modify the parameters to enable it?
Is there any conflict with the parfor instruction?
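For context, the setup described above looks roughly like this (a sketch; the solver, data, and layer names are placeholders, not the actual training configuration):

```matlab
% Sketch of the training setup in question. XTrain, YTrain, and layers
% are placeholder names for the training data and network architecture.
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'parallel', ...  % use the local parallel pool
    'Verbose', true);

net = trainNetwork(XTrain, YTrain, layers, options);
```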
Thanks in advance for your help.
All the best.

Accepted Answer

Srivardhan Gadila on 13 Feb 2020
Please refer to the following link.

3 comments

Andrea Bonfante on 13 Feb 2020 (edited 13 Feb 2020)
Thank you. With the environment variable "CUDA_VISIBLE_DEVICES" set to -1, I can now train across multiple CPUs, as confirmed by the message "Training across multiple CPUs." However, I had to stop using parfor because of this error: "A parallel pool cannot be started from a worker, only from a client MATLAB."
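For anyone else who hits this, the GPU-hiding step can be done from within MATLAB before the pool is started (a minimal sketch of the workaround described above):

```matlab
% Hide all GPUs from the parallel workers so that
% 'ExecutionEnvironment','parallel' falls back to CPUs and the log
% shows "Training across multiple CPUs."
setenv('CUDA_VISIBLE_DEVICES', '-1');

% Note: with 'parallel', trainNetwork uses the parallel pool itself,
% so it cannot be called from inside a parfor loop, hence the error
% "A parallel pool cannot be started from a worker, only from a
% client MATLAB."
```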
In case I have to repeat the training with different random weight initializations, do you think it is faster to launch parallel single-CPU trainings with parfor, or sequential trainings each using multiple CPUs?
I would suggest setting ExecutionEnvironment = 'parallel' in trainingOptions and training the network that way, instead of using parfor.
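The suggestion above can be sketched as a sequential loop over initializations, where each run uses the pool across CPUs (numRuns, XTrain, YTrain, and layers are placeholder names):

```matlab
% Sketch: repeat training with different random weight initializations
% sequentially; each trainNetwork call parallelizes across CPUs.
numRuns = 5;
nets = cell(numRuns, 1);
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'parallel');

for k = 1:numRuns
    rng(k);  % different seed => different initial weights
    nets{k} = trainNetwork(XTrain, YTrain, layers, options);
end
```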
Hello,
I need to use multiple CPUs instead of a single CPU in order to speed up training, so do you have any suggestion for me?
Thanks.


More Answers (0)

Categories

More about Deep Learning Toolbox in Help Center and File Exchange.

Version: R2019b
Asked: 10 Feb 2020
Last commented: 5 Oct 2020
