Transfer a variable at the end of one episode to the next episode when training an agent in RL
Jayalath Achchige Damsara Udan Jayarathne on 25 Jul 2022
Answered: Emmanouil Tzorakoleftherakis on 26 Jan 2023
I am training an agent using Reinforcement Learning Toolbox. I have created a custom environment in Simulink that contains information on the states. I have a variable (theta) that is calculated from the states, and it is defined in such a way that the value of theta computed at the end of one episode is the input for calculating theta in the next episode. (Basically, at the end of an episode I calculate theta, and that value needs to be carried over to the next episode.)
I have tried a couple of methods, but nothing seems to work. The problem is that even though I can read the value of theta, I don't have a way to save it effectively so that the same value can be used in the next episode. I would be grateful if someone could point me in the right direction to solve this problem.
Thanks a lot in advance.
Emmanouil Tzorakoleftherakis on 26 Jan 2023
The way to do this would be to use the reset function mechanism provided in Reinforcement Learning Toolbox. Please take a look at the answer in this post; you can use a similar approach for theta.
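One possible sketch of that approach, assuming the Simulink model logs theta with a "To Workspace" block (here named thetaLog) and reads its initial value each episode from a workspace variable theta0 (both names are hypothetical, as are the model and block paths):

```matlab
% Sketch (untested): carry theta from one episode to the next via the
% environment's ResetFcn. The reset function runs before every episode
% and receives a Simulink.SimulationInput object it can modify.
env = rlSimulinkEnv('myModel', 'myModel/RL Agent', obsInfo, actInfo);
env.ResetFcn = @(in) localResetFcn(in);

function in = localResetFcn(in)
    % Read the last theta logged by the previous episode from the base
    % workspace; fall back to a default on the very first episode.
    if evalin('base', "exist('thetaLog','var')")
        thetaPrev = evalin('base', 'thetaLog(end)');
    else
        thetaPrev = 0;  % assumed initial value for the first episode
    end
    % Pass it into the model as this episode's initial theta
    in = setVariable(in, 'theta0', thetaPrev);
end
```

The key point is that the ResetFcn is the one hook that runs between episodes, so any value persisted outside the model (base workspace, a file, or a persistent variable) can be injected back in with setVariable.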