Reinforcement Learning Toolbox: Episode Q0 stopped predicting after a few thousand simulations. DQN Agent.
Q0 values looked reasonable until around episode 2360; after that the estimate is not stuck, it just increases very slowly.
I'm using the default generated DQN agent (with continuous observations and discrete actions) with only a few modifications. I'm not sure what the issue is here, or whether this is the correct behaviour and my agent has simply converged to a somewhat stable result.
I understood from the documentation that Episode Q0 should give a prediction of the "true discounted long-term reward". I assumed this meant the discounted reward for each single episode, regardless of convergence or lack thereof, but maybe I misunderstood something.
Please help clarify. I made several runs and they all display the same behaviour over a few thousand episodes (not always the same number).
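For reference, Episode Q0 is the critic's value estimate for the episode's initial observation, i.e. its prediction of the discounted return before the episode runs. A minimal sketch of the quantity it is trying to predict (the `rewards` trace here is hypothetical, just for illustration):

```matlab
% Sketch: the discounted return that Episode Q0 tries to predict.
rewards = ones(1, 50);   % hypothetical: reward of 1 at every step
gamma   = 0.1;           % the DiscountFactor used in the code below
G0 = sum(gamma.^(0:numel(rewards)-1) .* rewards);
% Geometric series: G0 = (1 - 0.1^50) / (1 - 0.1) ≈ 1.111,
% so with gamma = 0.1 the return is dominated by the first step's reward.
```

With a discount factor this low, Q0 should track little more than the immediate reward from the initial state, which may be part of why the curve flattens out.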
____
The changes I made were only these ones:
% set critic representation options: learning rate, gradient clipping, GPU training
critic.Options = rlRepresentationOptions(...
    'LearnRate',1e-3,...
    'GradientThreshold',1,...
    'UseDevice','gpu');
% extract agent options
agentOpts = agent.AgentOptions;
% modify agent options
agentOpts.EpsilonGreedyExploration.EpsilonDecay = 0.005;
agentOpts.DiscountFactor = 0.1;
% resave agent with new options
agent = rlDQNAgent(critic,agentOpts);
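One thing worth checking with these settings is how quickly exploration shuts off. A sketch of the decay schedule (assuming epsilon is multiplied by `(1 - EpsilonDecay)` at each step until it reaches `EpsilonMin`, with the default `Epsilon = 1` and `EpsilonMin = 0.01`):

```matlab
% Sketch (assumed schedule): epsilon decays multiplicatively per step.
epsilon    = 1;       % assumed default initial Epsilon
decay      = 0.005;   % EpsilonDecay set above
epsilonMin = 0.01;    % assumed default EpsilonMin
steps = 0;
while epsilon > epsilonMin
    epsilon = epsilon * (1 - decay);
    steps = steps + 1;
end
% log(0.01)/log(0.995) ≈ 919, so epsilon reaches EpsilonMin after
% roughly 900 steps -- very early in a run of several thousand episodes.
```

If that assumption about the schedule holds, the agent is acting almost greedily for most of training, which would also make the Q0 curve settle.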
2 comments
Emmanouil Tzorakoleftherakis
9 Jun 2021
Hello,
This behavior is strange. I would suggest creating a technical support case so that we can take a closer look if possible.
Answers (0)