DDPG current controller for RL load shows steady-state offset in id/iq after training (adapted from "Train TD3 Agent for PMSM Control")
30 views (last 30 days)
  
    
    Syed Mohammad Maaz
on 8 Oct 2025 at 17:27
Edited: Syed Mohammad Maaz on 8 Oct 2025 at 17:28
I adapted the official example "Train TD3 Agent for PMSM Control" (can be found here) to a simple current controller for an RL load and trained a very similar DDPG agent. Training looks stable (the reward converges, though not to zero), but when I run the model after training I see a steady-state offset in both id and iq.
The official TD3 PMSM example also shows a small steady-state current offset and states that it is within about 2%. My DDPG variant exhibits the same behavior, but with a larger offset. I'd like guidance on eliminating the offset (or best practice for doing so) rather than accepting it.
I have also uploaded the three modified files for this simple current controller.
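For context, the agent setup follows the same general pattern as the agent creation in the TD3 example, but with a DDPG agent. The model name, block path, observation/action dimensions, sample time, and training thresholds below are placeholders for illustration only, not the exact values from my uploaded files:

% Sketch of the DDPG agent setup (placeholder names and values; the actual
% observation/action specs, model, and options are in the uploaded files).
obsInfo = rlNumericSpec([4 1]);                  % e.g. id/iq errors plus references (assumed)
actInfo = rlNumericSpec([2 1], ...
    'LowerLimit', -1, 'UpperLimit', 1);          % normalized vd/vq actions (assumed)

mdl = 'current_control_RL';                      % placeholder Simulink model name
agentBlk = [mdl '/Current Controller/RL Agent']; % placeholder agent block path
env = rlSimulinkEnv(mdl, agentBlk, obsInfo, actInfo);

agentOpts = rlDDPGAgentOptions( ...
    'SampleTime', 2e-4, ...                      % placeholder controller sample time
    'DiscountFactor', 0.99, ...
    'MiniBatchSize', 128, ...
    'ExperienceBufferLength', 1e6);

% Default actor/critic networks; the official example builds custom networks instead.
agent = rlDDPGAgent(obsInfo, actInfo, agentOpts);

trainOpts = rlTrainingOptions( ...
    'MaxEpisodes', 1000, ...
    'MaxStepsPerEpisode', 500, ...
    'StopTrainingCriteria', 'AverageReward', ...
    'StopTrainingValue', -50);                   % placeholder stopping threshold

% trainingStats = train(agent, env, trainOpts);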
0 comments
Answers (0)
