How do I solve this error?
Apoorv Pandey
on 24 Mar 2023
Commented: Cris LaPierre on 27 Mar 2023
I am getting this error when I try to train a TD3 RL agent.
Thank you,
Apoorv Pandey
1 comment
Emmanouil Tzorakoleftherakis
on 24 Mar 2023
If you share a reproduction model, it would be easier to debug.
Accepted Answer
Cris LaPierre
on 24 Mar 2023
When defining your rlQValueFunction, include the ActionInputNames and ObservationInputNames name-value pairs.
See this example: https://www.mathworks.com/help/reinforcement-learning/ref/rl.function.rlqvaluefunction.html#mw_da4065e4-5b9a-41c6-b11b-6692d8698a76
% Observation path layers
obsPath = [featureInputLayer( ...
        prod(obsInfo.Dimension), ...
        Name="netObsInput")
    fullyConnectedLayer(16)
    reluLayer
    fullyConnectedLayer(5,Name="obsout")];

% Action path layers
actPath = [featureInputLayer( ...
        prod(actInfo.Dimension), ...
        Name="netActInput")
    fullyConnectedLayer(16)
    reluLayer
    fullyConnectedLayer(5,Name="actout")];

%<snip>

critic = rlQValueFunction(net, ...
    obsInfo,actInfo, ...
    ObservationInputNames="netObsInput", ...
    ActionInputNames="netActInput")
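The %<snip> above elides how the two paths are joined into the network passed to rlQValueFunction. Following the linked documentation example, the observation and action paths are typically merged with a concatenationLayer and assembled into a dlnetwork. A minimal sketch of that step, assuming the layer names used above (the common-path layer sizes are illustrative, not taken from the question):

% Common path: concatenate the two 5-element path outputs along
% dimension 1, then reduce to a single Q-value output.
commonPath = [concatenationLayer(1,2,Name="concat")
    fullyConnectedLayer(8)
    reluLayer
    fullyConnectedLayer(1)];

% Assemble the three paths into one network and wire the connections.
net = layerGraph;
net = addLayers(net,obsPath);
net = addLayers(net,actPath);
net = addLayers(net,commonPath);
net = connectLayers(net,"obsout","concat/in1");
net = connectLayers(net,"actout","concat/in2");
net = dlnetwork(net);

Once the critic is created, you can sanity-check it with getValue(critic,{rand(obsInfo.Dimension)},{rand(actInfo.Dimension)}) before building the TD3 agent.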
2 comments
Cris LaPierre
on 27 Mar 2023
Please share your data and your code. You can attach files using the paperclip icon. If it's easier, save your workspace variables to a MAT file and attach that.
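For reference, a one-liner does that (the file name here is illustrative):

% Save every variable in the current workspace to a MAT file.
save("td3_debug_workspace.mat")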
More Answers (0)