Problems with reward generation in reinforcement learning simulation

10 views (last 30 days)
Hi everyone,
I am currently running a reinforcement learning model integrated with SimEvents blocks in Simulink. I have both a reinforcement learning script and the RL Agent block in the Simulink model. My reward function is implemented in a MATLAB Function block connected to the reward input of the RL Agent block. The problem is that the reward stays constant throughout all episodes of RL training. Any ideas why? I kept the reward function (code below) as simple as possible: it extracts values from SimEvents entities to generate rewards that should differ with each iteration.
function r = w(u1, u2, u3) %#codegen
% Extract entity attribute values passed in from SimEvents
FH = u1 + 1;
Cost = u2 + 1;
Downtime = u3 + 1;
% Reward calculation based on the extracted values
r = (Downtime/Cost) * FH;
end
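For what it's worth, calling the function directly from the command line with a couple of made-up inputs does return different values, so the function itself seems fine and the issue is presumably in the signals feeding the block:

% Standalone sanity check of the reward function with arbitrary inputs:
% if these calls return different values, the constant reward must come
% from u1, u2, u3 never changing during the simulation.
r1 = w(1, 2, 3);    % returns (4/3)*2 = 2.6667
r2 = w(4, 5, 6);    % returns (7/6)*5 = 5.8333
disp([r1, r2])      % different numbers => the function is not the problem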
There also seems to be a problem because this reward area is highlighted red, even though the simulation runs normally.
I uploaded my model and a screenshot of the RL training result for the reward. If you would like to replicate my results, here are the steps:
  1. Run script A.mlx to generate random number set A
  2. Run script B.mlx to generate random number set B
  3. Run MainScript.mlx to run the simulation
Thank you so much in advance! Let me know should you require any further information.
Best,
Aaron.

Answers (1)

Subhajyoti on 13 Sep 2024
It is my understanding that you are trying to train an RL model, but the reward function is not updating as expected.
This is happening because the values of 'FH', 'Cost', and 'Downtime' are not being updated between iterations. In every episode, when the model reads these values, it picks up their initial default values, which produces the constant reward.
To address this issue, you can either save the values to the workspace after each update or add a feedback loop to pass the updated values into the next iteration.
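For example, something along the following lines logs the signals feeding the reward so you can check whether they actually change during an episode. Note that 'myRLModel' and 'rewardLog' below are placeholder names for your own model and a To Workspace block wired to the reward signal:

% Placeholder names: replace 'myRLModel' and 'rewardLog' with your own.
mdl = 'myRLModel';
load_system(mdl);
set_param([mdl '/rewardLog'], 'VariableName', 'rewardHistory', ...
    'SaveFormat', 'Timeseries');        % log the reward as a timeseries
simOut = sim(mdl);                      % run one simulation
rewardHistory = simOut.rewardHistory;   % retrieve the logged signal
plot(rewardHistory)                     % a flat line means the inputs never changed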
Refer to the Simulink documentation for more information on the 'To Workspace' block.
Additionally, you can refer to the MATLAB documentation on 'Reward and Observation Signals in Custom Environments'.
  1 comment
Aaron Bramhasta on 25 Sep 2024
Hi @Subhajyoti, thank you for your reply, and apologies for my late response.
My model already takes the form of a feedback loop, so the updated values are always passed on to the next iteration. I don't quite follow the suggestion to save the values to the workspace: when should I call these values again?
Also, do you have any idea why the reward generated by the MATLAB function and the reward shown in the training manager differ so much? The MATLAB function generates decimals below 1, as it should, but the training manager shows numbers around 70.
Thanks in advance!
