Parallel CPU computing for recurrent neural networks (LSTMs)

16 views (last 30 days)
ThomasP on 3 Feb 2022
Answered: Joss Knight on 7 Feb 2022
Hello,
The documentation states that parallel CPU computing for LSTMs is possible using the trainNetwork function and choosing the execution environment as 'parallel' in trainingOptions. It also states that Parallel Computing Toolbox is required.
I do have Parallel Computing Toolbox installed; running pool = parpool reports 23 workers (the number of cores my CPU has).
I also added 'ExecutionEnvironment','parallel' to my trainingOptions() call; however, I get the error "Parallel training of recurrent networks is not supported. 'ExecutionEnvironment' value in trainingOptions function must be 'auto', 'gpu' or 'cpu'."
...why?
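For reference, a minimal sketch of the setup described above (the layer sizes, solver, and the XTrain/YTrain variables are placeholders, not my actual code):

layers = [ ...
    sequenceInputLayer(10)                   % placeholder input size
    lstmLayer(100,'OutputMode','last')
    fullyConnectedLayer(5)                   % placeholder number of classes
    softmaxLayer
    classificationLayer];

options = trainingOptions('adam', ...
    'ExecutionEnvironment','parallel');      % the setting that triggers the error in R2021b

net = trainNetwork(XTrain,YTrain,layers,options);   % errors: parallel training of recurrent networks is not supported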

Answers (2)

Raymond Norris on 4 Feb 2022
I'm assuming you're only running this on your local machine (with 23 cores)? And I'm assuming you don't have a GPU? If so, set ExecutionEnvironment to "cpu" (or even "auto", which defaults to the GPU if one exists and to the CPU otherwise).
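For example (the solver choice is just a placeholder):

options = trainingOptions('adam', ...
    'ExecutionEnvironment','cpu');    % or 'auto' to pick a GPU automatically if one is present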
  2 comments
ThomasP on 4 Feb 2022
Thanks for your answer. Yes, I'm running it on my local machine with 23 cores and I don't have a GPU. However, if I set ExecutionEnvironment to "cpu", it only runs on a single core.
Raymond Norris on 4 Feb 2022
Right, fair point. One option is to download the R2022a prerelease to see if that resolves your issue.
Keep in mind, "parallel" will default to (any) GPU MATLAB finds. Therefore, you'll want MATLAB to ignore it by first calling
setenv CUDA_VISIBLE_DEVICES -1
and then train your model.
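Roughly, the sequence would look like this (functional form of setenv; the pool and solver settings are placeholders):

setenv('CUDA_VISIBLE_DEVICES','-1');     % hide any GPUs from MATLAB
parpool;                                 % start a pool of local workers
options = trainingOptions('adam', ...
    'ExecutionEnvironment','parallel');  % train across the pool (requires a release that supports it for recurrent networks)
% ... then call trainNetwork with these options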



Joss Knight on 7 Feb 2022
That doc page is about shallow networks (using train) rather than deep networks (using trainNetwork). Parallel training in trainNetwork for sequence networks is supported from the next release.
How are you confirming that ExecutionEnvironment 'cpu' is only using a single core? It should be using all your cores.
Parallel training on the CPU is only really useful when you have a multi-node cluster of machines. Generally speaking, all CPU deep learning code is multithreaded and makes full use of your hardware, so on a single machine there is no advantage to parallel training or inference; in fact, it should make it slower.
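As a quick check, maxNumCompThreads reports how many computational threads MATLAB is currently allowed to use (a general check, not specific to trainNetwork):

nThreads = maxNumCompThreads    % typically equals the number of physical cores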

Categories

More information about Parallel and Cloud in Help Center and File Exchange.

Release

R2021b
