Problem with simulating a trained DRL agent

Hello,
I implemented deep reinforcement learning in Matlab based on a custom template and saved some agents with high rewards. I was plotting signals in the training phase in each episode and can see the desired performance. I saved all state and control effort (Action) in each episode. My action space is as follows:
numAct = 1;
ActionInfo = rlNumericSpec([numAct 1], 'LowerLimit', -0.4189, 'UpperLimit', 0.4189);
I have a problem with the simulation of the trained agent.
The first figure shows one of the training-phase results and part of the variation of its action value.
When I simulate with the command below,
simOptions = rlSimulationOptions('MaxSteps',maxSteps);
experience = sim(env,agent,simOptions);
or, for a saved agent,
experience = sim(env,saved_agent,simOptions);
the result is wrong, as shown in the figure below.
I checked the final agent as well as some of the high-reward agents, but the results are all similar to that figure.
After simulating a trained agent, the action is stuck at the lower or upper limit of the action space (as in the figure) for every agent I tested!
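One way to check whether the saturation comes from the learned policy itself (rather than from exploration noise being present during training but absent during simulation) is to query the greedy action directly at a few representative observations. This is a minimal sketch, assuming a single observation channel; `obsSamples` is a hypothetical placeholder and should be replaced with states logged during training:

```matlab
% Query the trained policy's action at a few sample observations.
obsInfo = getObservationInfo(env);
obsSamples = { rand(obsInfo(1).Dimension)*2 - 1, ...
               rand(obsInfo(1).Dimension)*2 - 1 };  % placeholder states
for k = 1:numel(obsSamples)
    a = getAction(saved_agent, {obsSamples{k}});  % returns a cell array
    fprintf('Sample %d: action = %.4f\n', k, a{1});
end
% If every action is pinned at +/-0.4189, the policy itself has saturated,
% which points to a training issue rather than a simulation-settings issue.
```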
Thank you for any help you can offer.

Answers (1)

Emmanouil Tzorakoleftherakis on 26 Dec 2020

1 vote

Hello,
Please see this post, which goes over a few potential reasons for discrepancies between training and simulation results.
Looking at the actions and plots above, it seems to me that the agent stopped exploring somewhere along the way (in which case you would need to adjust the exploration options in your custom algorithm). Make sure to also keep track of the individual episode rewards to get an idea of which agents lead to higher rewards.
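For reference, if the custom algorithm mirrors the toolbox's DDPG-style Ornstein-Uhlenbeck exploration, the relevant noise settings look something like the sketch below. The option names correspond to `rlDDPGAgentOptions`; the values shown are illustrative assumptions, and a custom implementation may name them differently:

```matlab
% Sketch of DDPG-style exploration-noise settings (illustrative values).
Ts = 0.1;  % placeholder sample time (assumption)
agentOpts = rlDDPGAgentOptions('SampleTime', Ts);
agentOpts.NoiseOptions.Variance = 0.6;            % initial exploration noise
agentOpts.NoiseOptions.VarianceDecayRate = 1e-5;  % slower decay => longer exploration
```

A decay rate that is too large makes the noise vanish early, so the agent can settle on a saturated action before it has seen enough of the state space.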

3 comments

beni hadi on 3 Jan 2021
Hello,
I appreciate your answer.
Given that the action range is between -0.4189 and 0.4189, the variance is selected to satisfy the following rule of thumb:
1% of action range < Variance * sqrt(Ts) < 10% of action range
so I selected Variance = 0.6.
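As a sanity check on that rule of thumb, the bounds can be computed directly. `Ts` here is a placeholder for the actual sample time, which is not given in the thread:

```matlab
% Check Variance against the 1%-10%-of-action-range rule of thumb.
Ts = 0.1;                                  % placeholder sample time (assumption)
actionRange = 0.4189 - (-0.4189);          % = 0.8378
lowerBound = 0.01 * actionRange / sqrt(Ts);
upperBound = 0.10 * actionRange / sqrt(Ts);
fprintf('Variance should lie between %.4f and %.4f\n', lowerBound, upperBound);
% With Ts = 0.1 the upper bound is about 0.265, so depending on the actual
% sample time a Variance of 0.6 may be well outside the recommended band.
```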
The training scenario is a robot guided to reach a goal. When the target is located at a distance of 20 meters, the robot successfully reaches it. But when the target is moved to 100 meters, training is not successful and the robot does not reach the target.
Apart from Variance and VarianceDecayRate, what other factors affect exploration?
Emmanouil Tzorakoleftherakis on 4 Jan 2021
Edited: Emmanouil Tzorakoleftherakis on 4 Jan 2021
If you have these settings right, it may not be an exploration issue. You are saying that if the target is further away, the robot does not reach it. Could it be that the problem is infeasible, i.e. the target is too far away to reach within a single episode? If that's the case, increasing the episode duration or adjusting the action limits (if any) may help.
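If infeasibility is the issue, the episode length can be extended for both training and simulation. A sketch, assuming the standard toolbox training loop (a custom template may set this elsewhere), with an illustrative step count:

```matlab
% Allow more steps per episode so a distant target is reachable in time.
maxSteps = 2000;  % illustrative value; pick based on distance and sample time
trainOpts = rlTrainingOptions('MaxStepsPerEpisode', maxSteps, ...
                              'MaxEpisodes', 5000);
simOptions = rlSimulationOptions('MaxSteps', maxSteps);
```

Keeping the training and simulation step limits consistent also avoids a trained agent being cut off earlier in simulation than it was during training.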
beni hadi on 4 Jan 2021
Thanks.


Asked on 25 Dec 2020
