DDPG Agent: Noise settings without any visible consequence
Despite the noise settings below, my RL agent's output sticks to the action limits for many consecutive steps (hundreds to thousands).
My understanding of the sequence order is:
- The actor receives the observation as input.
- The actor's tanh output layer produces values in the range [-1, 1].
- Noise is added to the actor output.
- The RL agent outputs the actor output plus the additive noise.
(This understanding is based on: https://de.mathworks.com/matlabcentral/answers/515602-incorrect-tanhlayer-output-in-rl-agent#answer_425717)
Did I get this wrong? What am I missing?
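The sequence above can be sketched numerically (Python for illustration; `agent_action` is a hypothetical helper, not MATLAB RL Toolbox API). The key point is that the noisy action is clipped to the action-space limits, so noise comparable in size to the range saturates the output:

```python
def agent_action(tanh_out, noise, low=-1.0, high=1.0):
    """Hypothetical sketch of the exploration step described above:
    actor's tanh output, plus additive noise, clipped to the action limits."""
    return [min(max(a + n, low), high) for a, n in zip(tanh_out, noise)]

# With noise comparable to the action range, the sum is almost always clipped:
print(agent_action([0.3, -0.5], [0.9, -0.9]))  # -> [1.0, -1.0] (both saturated)
```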
I'm using:
- DDPG Agent
- actor output layer: tanh --> resulting action space: [-1, 1]
- Agent sample time: Ts = 0.0005;
- agentOptions.NoiseOptions.StandardDeviation = 0.89443;
- actionInfo = rlNumericSpec([2 1], 'LowerLimit', [-1; -1], 'UpperLimit', [1; 1]);
- noise model: Ornstein-Uhlenbeck
Also, if I set rlDDPGAgent with 'UseExplorationPolicy' set to true, does the agent use Gaussian noise instead of Ornstein-Uhlenbeck?
Answers (1)
Emmanouil Tzorakoleftherakis
on 26 Jan 2023
Your standard deviation is very high compared to the action range you have set. As a result, when the noise is added to the tanh output, the sum is almost always clipped to the limits in your action-space definition (which look like [-1, 1]). I would use a smaller standard deviation.
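As a quick closed-form check (Python for illustration; assumes MATLAB's documented OU update and its default MeanAttractionConstant theta = 0.15), the long-run standard deviation of the OU noise is sigma/sqrt(2*theta). Comparing a few candidate values of StandardDeviation against the [-1, 1] range shows why a smaller value helps:

```python
import math

def stationary_std(sigma, theta=0.15):
    """Long-run standard deviation of an OU process, sigma / sqrt(2*theta).
    theta = 0.15 is assumed to be MATLAB's default MeanAttractionConstant."""
    return sigma / math.sqrt(2.0 * theta)

for sigma in (0.89443, 0.3, 0.1):
    print(f"sigma={sigma:.5f} -> long-run noise std ~ {stationary_std(sigma):.2f}")
```

With sigma = 0.89443 the long-run noise std (about 1.63) exceeds the entire action range, while sigma around 0.1 keeps the noise well inside [-1, 1].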