Stopping conditions for DQN training

Zonghao zou on 18 Oct 2020
Answered: Madhav Thakker on 25 Nov 2020
Hello all,
I am currently experimenting with DQN training. I am trying to find a systematic way to stop the training process rather than stopping it manually. However, for my training process I have no idea what the final rewards will be, and I don't have a target point to reach, so I do not know when to stop.
Is there a way for me to stop the DQN agent without that information and still guarantee some type of convergence?
Thanks for helping!

Answers (1)

Madhav Thakker on 25 Nov 2020
Hi Zonghao zou,
One possible signal to use for stopping training is the Q-values. If the Q-values have saturated, the network is no longer learning. You can monitor your Q-values and choose a threshold on how much they change, and stop training early once the change falls below it. You do not need the final reward or a target point to perform early stopping based on Q-values.
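As a minimal sketch of one way to do this, assuming env and agent (an rlDQNAgent) have already been created: the agent is trained in short segments, and the loop stops once the mean of the EpisodeQ0 estimate returned by train changes by less than a tolerance between segments. The segment length, window size, and tolerance below are illustrative values, not recommendations.

segmentEpisodes = 50;    % train in short segments, then check Q0
tol             = 1e-2;  % relative change below which Q0 counts as saturated
maxSegments     = 40;    % hard cap on the total number of segments

trainOpts = rlTrainingOptions( ...
    'MaxEpisodes', segmentEpisodes, ...
    'MaxStepsPerEpisode', 500, ...
    'StopTrainingCriteria', 'EpisodeCount', ...
    'StopTrainingValue', segmentEpisodes, ...
    'Verbose', false, ...
    'Plots', 'none');

prevQ0 = NaN;
for k = 1:maxSegments
    % agent is a handle object, so each call to train continues learning
    stats = train(agent, env, trainOpts);

    % average the Q0 estimate over (up to) the last 20 episodes of the segment
    q0     = stats.EpisodeQ0;
    meanQ0 = mean(q0(max(1, end-19):end));

    % stop once the segment-to-segment change in mean Q0 is small
    if ~isnan(prevQ0) && abs(meanQ0 - prevQ0) <= tol*max(abs(prevQ0), 1)
        fprintf('Q0 saturated after %d segments (mean Q0 = %.3f)\n', k, meanQ0);
        break
    end
    prevQ0 = meanQ0;
end

Alternatively, you can simply watch the Episode Q0 curve on the training progress plot and stop when it flattens, but the segmented loop above removes the manual step you mentioned.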
Hope this helps.
