Simulink interruption during RL training
Hey everyone,
Anyone who has used reinforcement learning (RL) to train on physical models in Simulink knows that during the initial training phase, random exploration often triggers assertions or other instabilities that can cause Simulink to crash or the simulation to diverge. This makes it very difficult to use the official train function provided by MathWorks, because once Simulink crashes, all accumulated RL experience (the replay buffer) is lost, essentially forcing you to restart training from scratch each time.
So far, the only workaround I’ve found is to wrap the training process in an external try-catch block. When a failure occurs, I save the current agent parameters and load them again at the start of the next training run. But as many of you know, because the replay buffer is discarded and rebuilt after every crash, this slows down training by 100x or more.
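For reference, here is a minimal sketch of the try-catch workaround described above. It assumes `agent` and `env` have already been created with Reinforcement Learning Toolbox; the checkpoint file name is illustrative:

```matlab
% Sketch of the crash-recovery loop: wrap train() in try/catch,
% checkpoint the agent parameters on failure, and resume from them.
% Assumes agent and env already exist; agentFile is a hypothetical path.
agentFile = "agent_checkpoint.mat";

trainOpts = rlTrainingOptions( ...
    "MaxEpisodes", 5000, ...
    "StopTrainingCriteria", "AverageReward", ...
    "StopTrainingValue", 500);

if isfile(agentFile)
    load(agentFile, "agent");   % resume from the last saved agent
end

done = false;
while ~done
    try
        trainStats = train(agent, env, trainOpts);
        done = true;            % training finished normally
    catch err
        % Simulink crashed or an assertion fired: save the agent
        % parameters and restart. Note that the replay buffer is
        % still lost, which is what makes this approach so slow.
        warning("Training interrupted: %s. Saving agent and retrying.", ...
            err.message);
        save(agentFile, "agent");
    end
end
```

The loop preserves the learned network weights across crashes, but each restart still begins with an empty experience buffer, so the effective sample efficiency remains poor.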
Alternatively, one could pre-train on a simpler case and then fine-tune on the full model, but that’s not always feasible.
Has anyone discovered a better way to handle this?