Why does my RL agent action still exceed the upper and lower limits?
I am using a Policy Gradient agent and I want my action to stay in the range 0 to 100. I already set UpperLimit to 100 and LowerLimit to 0, but as you can see in scope display 3, the action still exceeds the limits. How can I fix that?
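For reference, the action specification described above would usually be created like this (a reconstruction, since the original code is not shown in the question):

actInfo = rlNumericSpec([1 1],'LowerLimit',0,'UpperLimit',100);   % 1-by-1 continuous action in [0, 100]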
2 comments
Emmanouil Tzorakoleftherakis
on 9 Jun 2021
Which one is the action here? What does your actor network look like?
denny
on 7 Dec 2021
I have solved a similar problem of mine.
actInfo = rlNumericSpec([1],'UpperLimit',0.0771,'LowerLimit',-0.0405)
For me, it means the minimum value is -0.0405 and the maximum value is -0.0405 + 0.0771*2.
But your output ranges from -1000 to 1000, so I don't know why that happens either.
Answers (2)
Azmi Yagli
on 5 Sep 2023
Edited: Azmi Yagli on 5 Sep 2023
If you look at rlNumericSpec, you can see this in the LowerLimit and UpperLimit sections:
DDPG, TD3 and SAC agents use this property to enforce lower limits on the action. When using other agents, if you need to enforce constraints on the action, you must do so within the environment.
So if you use other algorithms you can use saturation, although it didn't work for me.
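As an illustration, a minimal sketch of saturating the action before it is applied to the plant (the 0-100 range and the variable names are assumptions, not code from the original question):

rawAction     = 135;                          % example action produced by the agent
clippedAction = min(max(rawAction,0),100);    % saturate to [0, 100] inside the environment step

In a Simulink environment the same effect is usually achieved with a Saturation block between the agent's action output and the plant.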
You can try discretizing your agent's actions so they are bounded to a fixed set of values.
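For example, a sketch of a discrete action specification covering 0 to 100 in steps of 10 (the step size is an arbitrary assumption):

actInfo = rlFiniteSetSpec(0:10:100);   % agent can only pick one of these 11 values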
Or you can give a negative reward when your agent exceeds the action limits.
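For example, a sketch of such a penalty inside the environment's reward computation (the penalty weight and variable names are assumptions):

baseReward = 1;                                        % placeholder for the task reward
action     = 120;                                      % example action returned by the agent
penalty    = 10*(max(0,action-100) + max(0,-action));  % grows with the distance outside [0, 100]
reward     = baseReward - penalty;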
0 comments