rlSimulinkEnv

Create a reinforcement learning environment using a dynamic model implemented in Simulink

Syntax

env = rlSimulinkEnv(mdl,agentBlock,obsInfo,actInfo)
env = rlSimulinkEnv(___,'UseFastRestart',fastRestartToggle)

Description

env = rlSimulinkEnv(mdl,agentBlock,obsInfo,actInfo) creates a reinforcement learning environment object env using the Simulink® model name mdl, the path to the agent block agentBlock, observation information obsInfo, and action information actInfo.

env = rlSimulinkEnv(___,'UseFastRestart',fastRestartToggle) creates a reinforcement learning environment object env, with the additional option of enabling or disabling fast restart.

Examples

For this example, consider the rlSimplePendulumModel Simulink model. The model is a simple frictionless pendulum that is initially hanging in a downward position.

Open the model.

mdl = 'rlSimplePendulumModel';
open_system(mdl)

Assign the agent block path information, and create rlNumericSpec and rlFiniteSetSpec objects for the observation and action information. You can use dot notation to assign property values of the rlNumericSpec and rlFiniteSetSpec objects.

agentBlk = [mdl '/RL Agent'];
obsInfo = rlNumericSpec([3 1])
obsInfo = 
  rlNumericSpec with properties:

     LowerLimit: -Inf
     UpperLimit: Inf
           Name: [0×0 string]
    Description: [0×0 string]
      Dimension: [3 1]
       DataType: "double"

actInfo = rlFiniteSetSpec([-2 0 2])
actInfo = 
  rlFiniteSetSpec with properties:

       Elements: [3×1 double]
           Name: [0×0 string]
    Description: [0×0 string]
      Dimension: [1 1]
       DataType: "double"

obsInfo.Name = 'observations';
actInfo.Name = 'torque';

Create the reinforcement learning environment for the Simulink model using information extracted in the previous steps.

env = rlSimulinkEnv(mdl,agentBlk,obsInfo,actInfo)
env = 
  SimulinkEnvWithAgent with properties:

             Model: "rlSimplePendulumModel"
        AgentBlock: "rlSimplePendulumModel/RL Agent"
          ResetFcn: []
    UseFastRestart: 'on'

You can also set a reset function using dot notation. For this example, consider randomly initializing theta0 in the model workspace.

env.ResetFcn = @(in) setVariable(in,'theta0',randn,'Workspace',mdl)
env = 
  SimulinkEnvWithAgent with properties:

             Model: "rlSimplePendulumModel"
        AgentBlock: "rlSimplePendulumModel/RL Agent"
          ResetFcn: @(in)setVariable(in,'theta0',randn,'Workspace',mdl)
    UseFastRestart: 'on'
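
Before training, you can optionally check that the environment is wired up correctly. The following is a minimal sketch using validateEnvironment, which resets the environment and runs a short simulation against it:

% Optional sanity check: resets the environment and performs a short
% simulation to confirm the agent block path and the observation and
% action specifications are consistent with the model.
validateEnvironment(env)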

Input Arguments

mdl

Simulink model name, specified as a string or character vector.

agentBlock

Path to the agent block in the model, specified as a string or character vector.

For more information, see RL Agent.
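
If the model is open, one way to avoid typing the path by hand is to click the RL Agent block and query the current block selection. This is a small sketch using the Simulink gcb function:

% Assumes the RL Agent block is currently selected in the open model;
% gcb returns the full path of the currently selected block.
agentBlk = gcb;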

obsInfo

Observation information, specified as one of the following: an rlNumericSpec object, an rlFiniteSetSpec object, or an array containing a mix of such objects.

For more information, see getObservationInfo.

actInfo

Action information, specified as one of the following: an rlNumericSpec object, an rlFiniteSetSpec object, or an array containing a mix of such objects.

For more information, see getActionInfo.
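
If you have already created an agent, you can reuse its specifications instead of constructing obsInfo and actInfo by hand. A minimal sketch, assuming an agent object named agent exists in the workspace:

% Extract the observation and action specifications from an existing
% agent (agent is assumed to exist) and reuse them for the environment.
obsInfo = getObservationInfo(agent);
actInfo = getActionInfo(agent);
env = rlSimulinkEnv(mdl,agentBlk,obsInfo,actInfo);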

fastRestartToggle

Option to toggle fast restart, specified as either 'on' (default) or 'off'. Fast restart allows you to perform iterative simulations without compiling a model or terminating the simulation each time.

For more information on fast restart, see How Fast Restart Improves Iterative Simulations (Simulink).
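
If your model does not support fast restart, you can disable it when creating the environment. A minimal sketch, reusing mdl, agentBlk, obsInfo, and actInfo from the example above:

% Create the environment with fast restart disabled.
env = rlSimulinkEnv(mdl,agentBlk,obsInfo,actInfo,'UseFastRestart','off');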

Output Arguments

env

Reinforcement learning environment, returned as a SimulinkEnvWithAgent object.

For more information on reinforcement learning environments, see Create Simulink Environments for Reinforcement Learning.
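
A created environment is typically passed to train or sim together with an agent. The following sketch assumes you have already constructed a compatible agent named agent; the option values shown are placeholders, not part of this example:

% Train an existing agent against the Simulink environment.
trainOpts = rlTrainingOptions('MaxEpisodes',500,'MaxStepsPerEpisode',500);
trainStats = train(agent,env,trainOpts);

% Simulate the trained agent for one episode.
simOpts = rlSimulationOptions('MaxSteps',500);
experience = sim(env,agent,simOpts);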

Introduced in R2019a