Different observation matrix in reinforcement learning episode
Hi Everyone,
I want to train an agent in a deep Q-learning reinforcement learning setting, but for every episode I want the agent to observe (read) a different dimension of a large matrix stored in a .mat file. In other words, I want the agent to read a different row or column of the matrix in every new training episode.
Can anyone guide me on how this can be done with the Reinforcement Learning Toolbox in MATLAB? I am also attaching a screenshot of the Simulink environment for reinforcement learning.
Regards
Mohsan.
Answers (1)
Poorna
on 29 Sept 2023
Hi Mohsan,
I understand that you would like the observation at the start of each episode to be a different (random or predefined) row of the observation matrix you have.
To achieve this, you can use the "ResetFcn" callback property of the environment in your model. The reset function sets the environment to an initial state and computes the initial value of the observation.
You can create a custom callback function that contains the logic to select the required observation from the observation matrix and return it. You can then assign this function to the "ResetFcn" callback property of the environment. The "train" function will call it at the beginning of each training episode.
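As a minimal sketch (assuming a Simulink environment created with "rlSimulinkEnv", a .mat file called "obsData.mat" containing a variable "obsMatrix", and a model variable "episodeObs" that your model reads; all of these names are placeholders to adapt to your setup), the reset function could look like this:

% Sketch: select a different row of a stored matrix at the start of
% every training episode via the environment's ResetFcn.
% All file, variable, and block names below are placeholders.

% Load the observation matrix once (assumed saved in "obsData.mat"
% under the variable name "obsMatrix").
data = load('obsData.mat');
obsMatrix = data.obsMatrix;

% Simulink environment (obsInfo/actInfo and the block path depend on
% your own model).
env = rlSimulinkEnv('myModel', 'myModel/RL Agent', obsInfo, actInfo);

% For a Simulink environment, the reset function receives and returns a
% Simulink.SimulationInput object, so the selected row can be pushed into
% the model as a variable named "episodeObs" that the model reads
% (for example through a Constant or From Workspace block).
env.ResetFcn = @(in) localReset(in, obsMatrix);

function in = localReset(in, obsMatrix)
    % Cycle through the rows, one row per episode. Use
    % randi(size(obsMatrix,1)) instead if a random row is preferred.
    persistent rowIdx
    if isempty(rowIdx)
        rowIdx = 0;
    end
    rowIdx = mod(rowIdx, size(obsMatrix, 1)) + 1;
    in = setVariable(in, 'episodeObs', obsMatrix(rowIdx, :));
end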
For more information on how to use the "ResetFcn" callback property, please refer to the following MATLAB documentation: https://www.mathworks.com/help/reinforcement-learning/ref/rl.env.rlfunctionenv.html
Hope this helps!