rlNumericSpec

Create continuous action or observation data specifications for reinforcement learning environments

Description

Use rlNumericSpec to create an rlNumericSpec object that defines a continuous action or observation data specification.

Creation

Syntax

spec = rlNumericSpec(dimension)
spec = rlNumericSpec(dimension,Name,Value)

Description


spec = rlNumericSpec(dimension) creates a data specification with the shape defined by the vector dimension.

spec = rlNumericSpec(dimension,Name,Value) sets Properties using one or more name-value pair arguments.
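For instance, the following sketch (the dimension and limit values are illustrative, not from this page) creates a bounded 4-by-1 observation specification using name-value pairs:

```matlab
% Create a 4x1 continuous data specification whose scalar limits
% apply to every entry of the data space.
obsInfo = rlNumericSpec([4 1], ...
    'LowerLimit',-10, ...
    'UpperLimit',10);
```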

Input Arguments


Dimension of the data space, specified as a numeric vector.

This input sets the Dimension property.

Properties


Lower limit of the data space, specified as a scalar or matrix of the same size as the data space. When LowerLimit is specified as a scalar, rlNumericSpec applies it to all entries in the data space.

Upper limit of the data space, specified as a scalar or matrix of the same size as the data space. When UpperLimit is specified as a scalar, rlNumericSpec applies it to all entries in the data space.
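As a sketch of the scalar-versus-matrix behavior described above (the specific limit values are hypothetical), limits can be set per entry or uniformly:

```matlab
spec = rlNumericSpec([3 1]);
spec.LowerLimit = [-1; -5; -Inf];   % same size as the data space: per-entry limits
spec.UpperLimit = 1;                % scalar: applied to all entries
```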

Name of the rlNumericSpec object, specified as a string.

Description of the rlNumericSpec object, specified as a string.

Dimension of the data space, specified as a numeric vector.

Information about the type of data, specified as a string.

Object Functions

rlSimulinkEnv - Create a reinforcement learning environment using a dynamic model implemented in Simulink
rlFunctionEnv - Specify custom reinforcement learning environment dynamics using functions
rlRepresentation - Model representation for reinforcement learning agents

Examples


For this example, consider the rlSimplePendulumModel Simulink model. The model is a simple frictionless pendulum that is initially hanging in a downward position.

Open the model.

mdl = 'rlSimplePendulumModel';
open_system(mdl)

Assign the agent block path, and create rlNumericSpec and rlFiniteSetSpec objects for the observation and action information, respectively. You can use dot notation to assign property values of the rlNumericSpec and rlFiniteSetSpec objects.

agentBlk = [mdl '/RL Agent'];
obsInfo = rlNumericSpec([3 1])
obsInfo = 
  rlNumericSpec with properties:

     LowerLimit: -Inf
     UpperLimit: Inf
           Name: [0×0 string]
    Description: [0×0 string]
      Dimension: [3 1]
       DataType: "double"

actInfo = rlFiniteSetSpec([2 1])
actInfo = 
  rlFiniteSetSpec with properties:

       Elements: [2 1]
           Name: [0×0 string]
    Description: [0×0 string]
      Dimension: [1 1]
       DataType: "double"

obsInfo.Name = 'observations';
actInfo.Name = 'torque';

Create the reinforcement learning environment for the Simulink model using information extracted in the previous steps.

env = rlSimulinkEnv(mdl,agentBlk,obsInfo,actInfo)
env = 
  SimulinkEnvWithAgent with properties:

             Model: "rlSimplePendulumModel"
        AgentBlock: "rlSimplePendulumModel/RL Agent"
          ResetFcn: []
    UseFastRestart: 'on'

You can also include a reset function using dot notation. For this example, consider randomly initializing theta0 in the model workspace.

env.ResetFcn = @(in) setVariable(in,'theta0',randn,'Workspace',mdl)
env = 
  SimulinkEnvWithAgent with properties:

             Model: "rlSimplePendulumModel"
        AgentBlock: "rlSimplePendulumModel/RL Agent"
          ResetFcn: @(in)setVariable(in,'theta0',randn,'Workspace',mdl)
    UseFastRestart: 'on'

Introduced in R2019a