How can I optimize GPU usage while training multiple RL PPO agents using multiple GPUs?
MathWorks Support Team
on 6 Mar 2024
Answered: MathWorks Support Team
on 18 Mar 2024
I wish to train multiple PPO agents asynchronously using multiple GPUs. What is the best way to optimize GPU and CPU resources to achieve this?
Accepted Answer
MathWorks Support Team
on 6 Mar 2024
If the network is small, the best approach is to train on the CPU in a parallel pool with an appropriate number of workers rather than on a GPU. This is often the most effective option: PPO tends to benefit from larger amounts of training data, and a small network may not be large enough for GPU training to yield a significant speedup.
If training on GPUs, restrict the parallel pool worker count to the number of available GPUs, so that each worker has exclusive access to one GPU for training. For more information on training using multiple GPUs, please refer to the following page:
With reference to the information in the above link, please keep the following additional points in mind:
- In your "rlTrainingOptions" object, if "UseParallel" is set to true and the actor and critic are set to use the GPU, MATLAB automatically uses multiple GPUs for training. In this case, calling "train" inside a "parfor" or "spmd" block is not supported.
- If "UseParallel" is set to false in the "rlTrainingOptions" object and the actor and critic are set to use the GPU, you may call "train" inside a "parfor" loop.
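As a minimal sketch of the multi-GPU path described above (UseParallel set to true with a GPU-based actor and critic), something along the following lines could be used. This is an illustrative, untested outline: the CartPole environment, episode count, and default-agent construction are placeholder choices, not part of the original answer.

```matlab
% Illustrative sketch: PPO training with UseParallel=true and GPU-based
% actor/critic. Environment and option values are placeholders.

env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% Size the pool to the GPU count so each worker can claim a unique GPU.
numGPUs = gpuDeviceCount("available");
if isempty(gcp("nocreate"))
    parpool(numGPUs);
end

% Create a default PPO agent, then move its actor and critic to the GPU.
agent = rlPPOAgent(obsInfo, actInfo);
critic = getCritic(agent);
critic.UseDevice = "gpu";
agent = setCritic(agent, critic);
actor = getActor(agent);
actor.UseDevice = "gpu";
agent = setActor(agent, actor);

% With UseParallel=true and a GPU-based actor/critic, MATLAB uses the
% available GPUs automatically. Per the note above, do NOT wrap this
% call to "train" in a parfor or spmd block.
trainOpts = rlTrainingOptions(UseParallel=true, MaxEpisodes=500);
trainingStats = train(agent, env, trainOpts);
```

For the alternative in the second bullet (UseParallel set to false), the same GPU-enabled agents could instead each be trained independently inside a "parfor" loop, with each worker selecting its own GPU via "gpuDevice".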