MBPO with a Simulink env: will the reward defined in the Simulink model overwrite the rewardFcn handle defined in the .m file?

2 views (last 30 days)
I am currently using MATLAB R2023a. In the MBPO cart-pole example, the reward function and isDone function are defined in .m files. This is the code in the example:

generativeEnv = rlNeuralNetworkEnvironment(obsInfo, actInfo, ...
    [transitionFcn, transitionFcn2, transitionFcn3], ...
    @myRewardFunction, @myIsDoneFunction);

Now I want to use a Simulink model. Will the reward defined in the Simulink model overwrite the rewardFcn handle defined in the .m file?

Answers (1)

Yatharth
Yatharth on 11 Oct 2023
Hi Bin,
I understand that you have custom "Reward" and "IsDone" functions defined in MATLAB, and that you have created an environment using "rlNeuralNetworkEnvironment".
Since you mention that you have also defined a reward function in the Simulink model, I am curious how you achieved that.
However, the reward function defined in the Simulink model will not overwrite the reward function defined in the .m file. In the code you provided, the reward function from the .m file is explicitly passed as an argument to the "rlNeuralNetworkEnvironment" constructor.
The reward is computed by the environment object itself, so the "rlNeuralNetworkEnvironment" object will call the reward function from the .m file whenever it computes a reward during training or simulation. Any reward logic inside the Simulink model only applies when the agent interacts with the Simulink environment, not with the learned neural-network environment.
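To make the separation concrete, here is a minimal sketch of the two environments that coexist in an MBPO setup. The model name, agent block path, and function names are illustrative assumptions, not taken from the original example:

```matlab
% Sketch: two separate environments in an MBPO workflow.
% Assumptions: obsInfo, actInfo, and the three transition function
% approximators already exist, as in the MBPO cart-pole example;
% "myModel" and its agent block path are hypothetical.

% 1) Real environment built from a Simulink model -- its reward comes
%    from the signal wired to the reward port inside the model.
realEnv = rlSimulinkEnv("myModel", "myModel/RL Agent", obsInfo, actInfo);

% 2) Learned (generative) environment -- its reward comes solely from
%    the function handle passed below; the Simulink model is never
%    consulted when this environment computes a reward.
generativeEnv = rlNeuralNetworkEnvironment(obsInfo, actInfo, ...
    [transitionFcn, transitionFcn2, transitionFcn3], ...
    @myRewardFunction, @myIsDoneFunction);
```

Each environment therefore carries its own reward definition, and neither overrides the other.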
You can refer to the relevant documentation page to check your reward function in simulation.
I hope this helps.

Tags
