Transfer a variable at the end of one episode to the next episode when training an agent in RL

Hi all,
I am training an agent using Reinforcement Learning Toolbox. I have created a custom environment in Simulink that contains information on the states. I have a variable (theta) that is calculated from the states. This variable is defined in such a way that the value of theta calculated at the end of one episode is the input for calculating theta in the next episode. (Basically, at the end of an episode I calculate theta, and that value needs to be carried over to the next episode.)
I have tried a couple of methods, and nothing seems to work. Even though I can get the value of theta, I don't have a way to save it so that the same value can be used in the next episode. I would be grateful if someone could point me in the right direction.
Thanks a lot in advance.
Jayalath Achchige Damsara Udan Jayarathne
Yes, I figured out a method. First, I run one episode and update the variable in the base workspace. Then I perform the calculation in the workspace and update the Simulink model. Then I continue the simulation for another episode and repeat. This is not the most efficient way, but it gets the work done. Thanks.
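A minimal sketch of this episode-at-a-time workaround (the names theta0, thetaOut, and numEpisodes are illustrative, not from the toolbox; it assumes the Simulink model reads theta0 from the base workspace and writes the final theta back as thetaOut, e.g. via a To Workspace block, and that the agent object keeps its learned parameters between `train` calls):

```matlab
% Train one episode at a time, carrying theta across episodes manually.
trainOpts = rlTrainingOptions('MaxEpisodes', 1);   % stop after a single episode

theta = 0;                                 % illustrative initial value
for ep = 1:numEpisodes
    assignin('base', 'theta0', theta);     % expose previous theta to the model
    trainingStats = train(agent, env, trainOpts);   % run one episode
    theta = evalin('base', 'thetaOut');    % value the model computed at episode end
end
```

This works, but each `train` call pays the setup/teardown overhead of a full training session, which is why the reset-function approach in the answer below is usually preferable.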


Answers (1)

Emmanouil Tzorakoleftherakis
The way to do this is to use the reset function mechanism provided in Reinforcement Learning Toolbox. Please take a look at the answer in this post. You can use a similar approach for theta.
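As a hedged sketch of that mechanism (localResetFcn, theta0, and thetaOut are illustrative names, not toolbox identifiers): for a Simulink environment, the ResetFcn receives a Simulink.SimulationInput object before each episode, so a theta value that the previous episode left in the base workspace (e.g. via a To Workspace block) can be injected into the next episode with setVariable:

```matlab
% Sketch: carry theta across episodes via the environment reset function.
env = rlSimulinkEnv(mdl, agentBlk, obsInfo, actInfo);
env.ResetFcn = @localResetFcn;

function in = localResetFcn(in)
    % Read the theta left in the base workspace by the previous episode;
    % fall back to an initial value on the very first episode.
    if evalin('base', 'exist(''thetaOut'',''var'')')
        thetaPrev = evalin('base', 'thetaOut(end)');
    else
        thetaPrev = 0;   % illustrative initial value
    end
    % Inject it into the upcoming episode's simulation input.
    in = setVariable(in, 'theta0', thetaPrev);
end
```

The advantage over updating the workspace by hand is that the reset function runs automatically before every episode, so a single call to `train` covers the whole run.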
