rlFiniteSetSpec

Create discrete action or observation data specifications for reinforcement learning environments

Description

Use rlFiniteSetSpec to create an rlFiniteSetSpec object that defines a finite set of actions or observations.

Creation

Syntax

spec = rlFiniteSetSpec(elements)

Description

example

spec = rlFiniteSetSpec(elements) creates a data specification with a discrete set of valid values defined by elements.

Input Arguments


elements
Set of valid values, specified as a numeric vector.

This input argument is equivalent to the Elements property.
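For instance, a minimal sketch of creating a specification (the element values here are illustrative, not prescribed by the toolbox):

% Create a specification whose valid values are -1, 0, and 1 (example values).
spec = rlFiniteSetSpec([-1 0 1]);

% The set of valid values is stored in the Elements property.
spec.Elements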

Properties


Elements
Set of valid values, specified as a numeric vector.

Name
Name of the rlFiniteSetSpec object, specified as a string.

Description
Description of the rlFiniteSetSpec object, specified as a string.

Dimension
Size of each element, specified as a vector.

DataType
Information about the type of data, specified as a string.
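The Name and Description properties are writable; as a sketch, you can set them with dot notation after creating the object (the values assigned here are illustrative):

% Create a specification, then label and document it.
spec = rlFiniteSetSpec([1 2 3]);
spec.Name = 'discrete actions';             % label for the specification
spec.Description = 'three example actions'; % free-form documentation string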

Object Functions

rlSimulinkEnv    Create a reinforcement learning environment using a dynamic model implemented in Simulink
rlFunctionEnv    Specify custom reinforcement learning environment dynamics using functions
rlRepresentation    Model representation for reinforcement learning agents

Examples


Create Reinforcement Learning Environment for Simulink Model

For this example, consider the rlSimplePendulumModel Simulink model. The model is a simple frictionless pendulum that initially hangs in a downward position.

Open the model.

mdl = 'rlSimplePendulumModel';
open_system(mdl)

Assign the agent block path information, and create rlNumericSpec and rlFiniteSetSpec objects for the observation and action information. You can use dot notation to assign property values of the rlNumericSpec and rlFiniteSetSpec objects.

agentBlk = [mdl '/RL Agent'];
obsInfo = rlNumericSpec([3 1])
obsInfo = 
  rlNumericSpec with properties:

     LowerLimit: -Inf
     UpperLimit: Inf
           Name: [0×0 string]
    Description: [0×0 string]
      Dimension: [3 1]
       DataType: "double"

actInfo = rlFiniteSetSpec([-2 0 2])
actInfo = 
  rlFiniteSetSpec with properties:

       Elements: [3×1 double]
           Name: [0×0 string]
    Description: [0×0 string]
      Dimension: [1 1]
       DataType: "double"

obsInfo.Name = 'observations';
actInfo.Name = 'torque';

Create the reinforcement learning environment for the Simulink model using information extracted in the previous steps.

env = rlSimulinkEnv(mdl,agentBlk,obsInfo,actInfo)
env = 
  SimulinkEnvWithAgent with properties:

             Model: "rlSimplePendulumModel"
        AgentBlock: "rlSimplePendulumModel/RL Agent"
          ResetFcn: []
    UseFastRestart: 'on'

You can also include a reset function using dot notation. For this example, consider randomly initializing theta0 in the model workspace.

env.ResetFcn = @(in) setVariable(in,'theta0',randn,'Workspace',mdl)
env = 
  SimulinkEnvWithAgent with properties:

             Model: "rlSimplePendulumModel"
        AgentBlock: "rlSimplePendulumModel/RL Agent"
          ResetFcn: @(in)setVariable(in,'theta0',randn,'Workspace',mdl)
    UseFastRestart: 'on'
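As a follow-up sketch, you can read the specification objects back from the environment using the toolbox functions getObservationInfo and getActionInfo:

% Retrieve the data specifications stored in the environment.
obsFromEnv = getObservationInfo(env);  % returns the rlNumericSpec object
actFromEnv = getActionInfo(env);       % returns the rlFiniteSetSpec object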

Introduced in R2019a