
Error training DDPG agent: rl.util.PolicyInstance.get()

Jorge on 12 May 2020
Answered: katuysha on 10 Apr 2023
Hi all,
I'm trying to train my own DDPG agent for my hexapod robot, starting from the MathWorks biped robot template model (biped robot).
I have already modified the Simulink model to add my hexapod robot from SimMechanics, and I'm trying to make it learn to stand up (the initial position is lying down on the ground), but when I try to train the DDPG agent I get the following error:
Error using rl.env.AbstractEnv/simWithPolicy (line 70)
An error occurred while simulating "rlClheroRobot" with the agent "rl.util.PolicyInstance.get()".
Error in rl.task.dq.ParCommTrainTask/runImpl (line 109)
[varargout{1},varargout{2}] = simWithPolicy(this.Env,this.Agent,simOpts);
Error in rl.task.Task/run (line 21)
[varargout{1:nargout}] = runImpl(this);
Error in rl.task.TaskSpec/internal_run (line 159)
[varargout{1:nargout}] = run(task);
Caused by:
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 689)
Invalid observation type or size.
Error using rl.env.SimulinkEnvWithAgent>localHandleSimoutErrors (line 689)
Input data dimensions must match the dimensions specified in the corresponding observation and action info specifications.
But I don't know how to solve this problem or where it originates.
My project can be downloaded from this GitHub repository; you only need to run the live script "agente_entrenamiento.mlx".

Answers (1)

katuysha on 10 Apr 2023
You need to check that the observation dimensions set in the training script are the same as the number of observation signals in the Simulink model.
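
For reference, a minimal MATLAB sketch of that check, based on the biped robot example this project started from. The model name comes from the error message; the block path 'RL Agent' and the observation/action counts are placeholders, not values taken from the original project:

% Placeholders -- replace with the values from your own model.
mdl = 'rlClheroRobot';
numObs = 31;    % must equal the width of the observation signal in the model
numAct = 18;    % must equal the number of action signals (e.g. joint torques)

obsInfo = rlNumericSpec([numObs 1]);
obsInfo.Name = 'observations';
actInfo = rlNumericSpec([numAct 1],'LowerLimit',-1,'UpperLimit',1);
actInfo.Name = 'actions';

% The specs passed here must match the signals wired to the RL Agent block.
env = rlSimulinkEnv(mdl,[mdl '/RL Agent'],obsInfo,actInfo);

% Cross-check the dimensions the environment actually reports.
obsCheck = getObservationInfo(env);
actCheck = getActionInfo(env);
disp(obsCheck.Dimension)    % expect [numObs 1]
disp(actCheck.Dimension)    % expect [numAct 1]

If the spec dimensions differ from the width of the signal actually feeding the observation port (easy to miss after replacing the biped with a hexapod, which changes the number of joints), simWithPolicy fails with exactly the "Invalid observation type or size" error shown above.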
