How can I scale the action of a DDPG agent in Reinforcement Learning?
4 views (last 30 days)
Hello everyone,
I have an environment in Simulink whose action should vary between 0 and 1. Although I am using a sigmoidLayer as the final layer of the actor, in some episodes the action exceeds the 0–1 bounds during training.
How can I fix this?
Maybe a scalingLayer would help, but I don't know the range the action will take over the whole training process, so the Scale and Bias values for the scalingLayer are unknown.
Is there any solution?
Thanks for any help.
Answers (2)
Sam Chak
on 1 Aug 2023
Hi @awcii
Sounds like a constraint to me. This example shows how to train an RL agent for Lane Keeping Assist, where the front steering angle (the agent's action) is constrained to the range –15° to 15°.
Hope it helps!
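For a known, fixed action range like that, one common pattern is to squash the actor output with a sigmoidLayer and then stretch it with a scalingLayer. The sketch below is a minimal, hypothetical actor network (the layer sizes and names are assumptions, not from the example), assuming Reinforcement Learning Toolbox and Deep Learning Toolbox:

```matlab
% Sketch: bound a single action channel to [-15, 15] degrees.
% sigmoidLayer outputs values in (0, 1); scalingLayer then maps that
% interval linearly via output = Scale .* input + Bias.
lower = -15;
upper =  15;
actorNet = [
    featureInputLayer(4, 'Name', 'obs')       % 4 observations (assumed)
    fullyConnectedLayer(64, 'Name', 'fc1')
    reluLayer('Name', 'relu1')
    fullyConnectedLayer(1, 'Name', 'fc2')     % one action channel
    sigmoidLayer('Name', 'sig')               % squash to (0, 1)
    scalingLayer('Name', 'scale', ...
        'Scale', upper - lower, ...           % width of the range: 30
        'Bias',  lower)                       % shift: (0,1) -> (-15,15)
    ];
```

Note this bounds only the actor's deterministic output; exploration noise added by the agent can still push the applied action outside the range, which is what the second answer below addresses.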
Emmanouil Tzorakoleftherakis
on 9 Aug 2023
DDPG training works by adding noise on top of the actor output to promote exploration, so the applied action can violate the bounds even when the actor itself is bounded. You can either adjust the noise options in the DDPG agent options (specifically the mean and variance/standard deviation), or handle the violation on the environment side by adding a Saturation block in Simulink.
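A minimal sketch of the noise-tuning route, assuming a recent rlDDPGAgentOptions API (in newer releases the Ornstein-Uhlenbeck noise is parameterized by StandardDeviation; older releases used Variance/VarianceDecayRate instead, and the exact property names should be checked against your release):

```matlab
% Sketch: shrink exploration noise so actions bounded to [0, 1] by the
% actor rarely overshoot, and let the noise decay during training.
opts = rlDDPGAgentOptions;
opts.NoiseOptions.Mean = 0;                      % keep noise centered
opts.NoiseOptions.StandardDeviation = 0.05;      % small relative to [0, 1]
opts.NoiseOptions.StandardDeviationDecayRate = 1e-4;
```

Alternatively, clip on the environment side: in Simulink, route the agent's action through a Saturation block with lower limit 0 and upper limit 1 before it reaches the plant.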