RL DDPG agent does not seem to learn, aircraft control problem

3 views (last 30 days)
Leonardo Molino on 2 Aug 2024
Commented: Leonardo Molino on 6 Aug 2024
Hello everyone,
I’m back with some updates on my mixed Reinforcement Learning (RL) and Supervised Learning training. A few days ago, I posted a question here on MATLAB Answers about the working principle of “external actions” in the RL Agent block. Based on the suggestions I received, I have started a hybrid training approach.
I begin by injecting external actions from the controller for 75 seconds (one quarter of the episode length). After that, the agent takes over until the pitch-rate error reaches 5 degrees per second; when this threshold is hit, the external controller takes control again. The external actions are then cut off once the pitch rate has stayed very close to 0 degrees per second for about 40 seconds, the agent takes control again, and the cycle repeats. Roughly, the hand-off logic looks like the sketch below.
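
A minimal MATLAB Function block sketch of that supervisor, feeding the flag that selects between agent and external actions (the signal names, the dt input, and the 0.1 deg/s settling tolerance are illustrative, not exact values from my model):

function useExternal = supervisor(qErr, t, dt)
% Hand-off logic between external controller and agent.
% useExternal = 1 -> external controller drives the action,
% useExternal = 0 -> the DDPG agent acts on its own.
persistent external settleTimer
if isempty(external)
    external = 1;                          % start under external control
    settleTimer = 0;
end
if t < 75                                  % warm-up: first 75 s are external
    external = 1;
elseif external == 0 && abs(qErr) > 5      % agent drifted -> hand control back
    external = 1;
    settleTimer = 0;
elseif external == 1
    if abs(qErr) < 0.1                     % "very close to 0 deg/s" (assumed tolerance)
        settleTimer = settleTimer + dt;
    else
        settleTimer = 0;
    end
    if settleTimer >= 40                   % settled for ~40 s -> release the agent
        external = 0;
    end
end
useExternal = external;
end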
I have also introduced a maximum number of allowed interventions; if the agent exceeds this threshold, the simulation stops and a penalty is applied. A penalty is also applied every time the external controller has to intervene again, while a bonus is given every time the agent makes progress within the time window in which it is left alone. These bonuses and penalties are added to the standard reward, which accounts for the altitude error, the flight path angle error, and the pitch rate error with weight coefficients of 1, 1, and 10, respectively, because I want to emphasize that the aircraft must maintain level wings. The shaping looks roughly like the sketch below.
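
In code form, the shaped reward is roughly this (only the 1/1/10 weights are the actual values I use; the penalty and bonus magnitudes and the variable names are placeholders):

function r = shapedReward(hErr, gammaErr, qErr, intervened, madeProgress)
% Base reward from weighted tracking errors, plus shaping terms.
w = [1 1 10];                              % altitude, flight path angle, pitch rate
r = -(w(1)*abs(hErr) + w(2)*abs(gammaErr) + w(3)*abs(qErr));
if intervened
    r = r - 100;                           % penalty when the external controller steps in (placeholder)
end
if madeProgress
    r = r + 10;                            % bonus for progress while acting alone (placeholder)
end
end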
The initial conditions are always randomized, and the altitude setpoint is always set 50 meters above the initial altitude.
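
That randomization lives in the environment reset function; a minimal sketch of the setup, where the model, block, and variable names are placeholders for whatever the actual Simulink model uses:

% Simulink environment with a randomized reset
env = rlSimulinkEnv('aircraftModel','aircraftModel/RL Agent',obsInfo,actInfo);
env.ResetFcn = @localReset;

function in = localReset(in)
h0 = 1000 + 500*rand;                      % random initial altitude [m] (assumed range)
in = setVariable(in,'h0',h0);              % randomized initial condition
in = setVariable(in,'hRef',h0 + 50);       % setpoint 50 m above the start
end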
Unfortunately, after the first training session, I haven’t seen any progress. In your opinion, is it worth taking another attempt, or is the whole setup wrong? Thank you.
  6 comments
Umar on 6 Aug 2024
Hi @Leonardo Molino,
My suggestion would be to start by checking the learning rates, the network complexity, and the quality of the training data. Additionally, monitoring the loss curves and rewards during training can give you insight into the model's performance; a sketch of that configuration is below. Hope this helps.
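For example, something along these lines exposes the learning rates and turns on the live training-progress plot (the values are placeholders, not recommendations, and this assumes a recent Reinforcement Learning Toolbox release with rlOptimizerOptions):

agentOpts = rlDDPGAgentOptions( ...
    'ActorOptimizerOptions', rlOptimizerOptions('LearnRate',1e-4,'GradientThreshold',1), ...
    'CriticOptimizerOptions',rlOptimizerOptions('LearnRate',1e-3,'GradientThreshold',1));
agent = rlDDPGAgent(obsInfo,actInfo,agentOpts);        % default networks with these options
trainOpts = rlTrainingOptions('MaxEpisodes',1000,'Plots','training-progress');
trainingStats = train(agent,env,trainOpts);            % per-episode rewards kept in trainingStats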


Answers (0)
