Can the agent learn a policy through the external action port of the RL Agent block, so that it mimics the reference signal?

4 views (last 30 days)
I created a DDPG agent that I want to pre-train from the output of an existing controller before training it further. So I fed the reference signal into the external action port and set the "use external action" input to 1 during training. While training, the agent's output matches the reference signal. But after training, when I set "use external action" to 0 for verification, the agent's output is not the same as the reference signal, and the difference is fairly large. Does the external action port support this idea? What should I do to make it work?
The figure below shows the result with the external action set to 0: the trained agent's output is the red curve, and the reference signal is the green curve.
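For reference, the workflow described above can be sketched in MATLAB as follows. This is a minimal sketch, assuming a Simulink model named `mySys` containing an RL Agent block with the external action ports enabled, and scalar observation/action signals; the model and block names are placeholders, not from the original post.

```matlab
% Sketch: pre-training a DDPG agent from an external (demonstration) action.
% Assumes a Simulink model 'mySys' whose RL Agent block has the
% "external action" and "use external action" ports enabled.
obsInfo = rlNumericSpec([1 1]);              % one observed measurement u
actInfo = rlNumericSpec([1 1]);              % scalar control action
agent   = rlDDPGAgent(obsInfo, actInfo);     % default actor/critic networks

env  = rlSimulinkEnv('mySys', 'mySys/RL Agent', obsInfo, actInfo);
opts = rlTrainingOptions(MaxEpisodes=500, ScoreAveragingWindowLength=20);

% With "use external action" = 1 in the model, the block applies the
% external (e.g. PID or reference) action to the plant but still records
% it as the agent's experience, so DDPG learns off-policy from the
% demonstrated actions rather than from its own exploration.
trainStats = train(agent, env, opts);
```

After this pre-training phase, "use external action" is switched back to 0 so the agent's own (learned) policy drives the plant; how closely it matches the demonstrations depends on how long it trained and on the reward.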

Answers (1)

Emmanouil Tzorakoleftherakis on 25 Sep 2023
It seems the agent started learning how to imitate the existing controller but needs more time. What does the Episode Manager look like? What is your reward signal?
  2 comments
凡 on 26 Feb 2024
This is the Episode Manager. My reward signal is -4*u^2 - du/dt, where u is an observed measurement; my control goal is to drive u to 0. My project replaces a PID controller with an agent. In the PID control loop, u is the input quantity, so I want the agent to mimic the output of the PID at the beginning of training.
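The reward described above could be computed in a MATLAB Function block along these lines. This is only a sketch of the stated formula; the function name, the previous-sample input `uPrev`, and the sample time `Ts` are assumptions (in Simulink, `uPrev` would typically come from a Unit Delay block).

```matlab
% Sketch of the reward r = -4*u^2 - du/dt, using a finite-difference
% estimate of the derivative over one sample period Ts.
function r = computeReward(u, uPrev, Ts)
    du = (u - uPrev) / Ts;   % approximate du/dt
    r  = -4*u^2 - du;        % quadratic penalty on u plus derivative term
end
```

Note that the quadratic term penalizes any deviation of u from 0, while the -du/dt term as written penalizes increases in u and rewards decreases, which may be worth double-checking against the intended behavior.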


Version

R2023a
