
Bipedal walking robot TD3 training example bad convergence

1 view (last 30 days)
Tech Logg Ding on 6 Apr 2021
Edited: Tech Logg Ding on 6 Apr 2021
Hi all,
I have attempted to run the bipedal walking robot example training myself, and it converged to a suboptimal solution. I trained the TD3 agent and hosted my actor and critic on the GPU.
The final simulation shows that the robot learned to fall right at the start of the episode. Why does my training produce significantly different results from the example? Did hosting the networks on the GPU cause this?
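For reference, the GPU was selected roughly like this (a simplified sketch, not the full example script; criticNetwork, actorNetwork, obsInfo, actInfo and the layer names 'observation'/'action' are placeholders taken from the example and may differ):

rng(0)  % fix the random seed so repeated runs are comparable

% Same representation options for actor and critic, computed on the GPU
repOpts = rlRepresentationOptions('LearnRate',1e-3,'GradientThreshold',1);
repOpts.UseDevice = 'gpu';

critic = rlQValueRepresentation(criticNetwork,obsInfo,actInfo, ...
    'Observation',{'observation'},'Action',{'action'},repOpts);
actor  = rlDeterministicActorRepresentation(actorNetwork,obsInfo,actInfo, ...
    'Observation',{'observation'},'Action',{'action'},repOpts);

% rlTD3Agent also accepts a row vector of two critics, as in the example
agent = rlTD3Agent(actor,critic);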
Here's the training plot. Note that the maximum reward was only 35 compared to the 250 shown in the example.
Thank you :)

Answers (0)

Version

R2021a
