rlSARSAAgentOptions

Options for SARSA agent

Description

Use an rlSARSAAgentOptions object to specify options for creating SARSA agents. To create a SARSA agent, use rlSARSAAgent.

For more information on SARSA agents, see SARSA Agent.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

Creation

Description

opt = rlSARSAAgentOptions creates an rlSARSAAgentOptions object with all default settings, for use as an argument when creating a SARSA agent. You can modify the object properties using dot notation.

opt = rlSARSAAgentOptions(Name=Value) creates the options set opt and sets its properties using one or more name-value arguments. For example, rlSARSAAgentOptions(DiscountFactor=0.95) creates an option set with a discount factor of 0.95. You can specify multiple name-value arguments.

Properties

SampleTime

Sample time of the agent, specified as a positive scalar or as -1.

Within a MATLAB® environment, the agent is executed every time the environment advances, so SampleTime does not affect the timing of the agent execution.

Within a Simulink® environment, the RL Agent block that uses the agent object executes every SampleTime seconds of simulation time. If SampleTime is -1, the block inherits the sample time from its input signals. Set SampleTime to -1 when the block is a child of an event-driven subsystem.

Note

Set SampleTime to a positive scalar when the block is not a child of an event-driven subsystem. Doing so ensures that the block executes at appropriate intervals when input signal sample times change due to model variations.

Regardless of the type of environment, the time interval between consecutive elements in the output experience returned by sim or train is always SampleTime.

If SampleTime is -1, then for Simulink environments the time interval between consecutive elements in the returned output experience reflects the timing of the events that trigger the RL Agent block execution. For MATLAB environments, this time interval is considered equal to 1.

This property is shared between the agent and the agent options object within the agent. Therefore, if you change it in the agent options object, it gets changed in the agent, and vice versa.

Example: SampleTime=-1
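As a minimal sketch, you can set the sample time when you create the options object and later switch to inherited timing using dot notation (the values here are illustrative):

opt = rlSARSAAgentOptions(SampleTime=0.1);  % execute every 0.1 s of simulation time
opt.SampleTime = -1;                        % inherit sample time, for example in an event-driven subsystem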

DiscountFactor

Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.

Example: DiscountFactor=0.9

EpsilonGreedyExploration

Options for epsilon-greedy exploration, specified as an EpsilonGreedyExploration object with the following properties.

  • Epsilon: Probability threshold to either randomly select an action or select the action that maximizes the state-action value function. A larger value of Epsilon means that the agent randomly explores the action space at a higher rate. Default value: 1

  • EpsilonMin: Minimum value of Epsilon. Default value: 0.01

  • EpsilonDecay: Decay rate. Default value: 0.0050

At each interaction with the environment (that is, at each training step), if Epsilon is greater than EpsilonMin, then it is updated using the following formula.

Epsilon = Epsilon*(1-EpsilonDecay)

Note that Epsilon is conserved between the end of an episode and the start of the next one. Therefore, it continues to decrease over multiple episodes until it reaches EpsilonMin.

If your agent converges on local optima too quickly, you can promote agent exploration by increasing Epsilon.

To specify exploration options, use dot notation after creating the rlSARSAAgentOptions object opt. For example, set the epsilon value to 0.9.

opt.EpsilonGreedyExploration.Epsilon = 0.9;
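Because the update multiplies Epsilon by a constant factor, the decay is geometric. As an illustrative sketch (not part of the agent API), the following loop computes how many training steps the default settings need before Epsilon reaches EpsilonMin:

epsilon = 1;            % default Epsilon
epsilonMin = 0.01;      % default EpsilonMin
epsilonDecay = 0.0050;  % default EpsilonDecay

steps = 0;
while epsilon > epsilonMin
    epsilon = epsilon*(1 - epsilonDecay);  % decay applied at each training step
    steps = steps + 1;
end
steps  % about 919 steps with the default values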

CriticOptimizerOptions

Critic optimizer options, specified as an rlOptimizerOptions object. Use this object to specify training parameters for the critic approximator, such as the learning rate and gradient threshold, as well as the optimizer algorithm and its parameters. For more information, see rlOptimizerOptions and rlOptimizer.

Example: CriticOptimizerOptions = rlOptimizerOptions(LearnRate=5e-3)
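For example, the following sketch (the parameter values are illustrative) creates an options object whose critic optimizer uses a custom learning rate and gradient threshold:

criticOpts = rlOptimizerOptions( ...
    LearnRate=1e-2, ...
    GradientThreshold=1);  % clip gradients to limit the size of each update

opt = rlSARSAAgentOptions(CriticOptimizerOptions=criticOpts);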

InfoToSave

Options to save additional agent data, specified as a structure containing the following fields.

  • Optimizer

  • PolicyState

You can save an agent object in one of the following ways:

  • Using the save command

  • Specifying saveAgentCriteria and saveAgentValue in an rlTrainingOptions object

  • Specifying an appropriate logging function within a FileLogger object

When you save an agent using any method, the fields in the InfoToSave structure determine whether the corresponding data is saved with the agent. For example, if you set the Optimizer field to true, then the critic optimizer is saved along with the agent.

You can modify the InfoToSave property only after the agent options object is created.

Example: options.InfoToSave.Optimizer=true
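For example, this minimal sketch (the file and variable names are illustrative) enables saving of both optional fields and then saves a previously trained agent with the save command:

opt.InfoToSave.Optimizer = true;    % save the critic optimizer state
opt.InfoToSave.PolicyState = true;  % save the explorative policy state

% Assuming agent is a SARSA agent created with these options and trained:
% save("myAgent.mat","agent")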

InfoToSave.Optimizer

Option to save the critic optimizer, specified as a logical value. If you set the Optimizer field to false, then the critic optimizer (which is a hidden property of the agent and can contain internal states) is not saved along with the agent, thereby saving disk space and memory. However, when the optimizer contains internal states, the state of the saved agent is not identical to the state of the original agent.

Example: true

InfoToSave.PolicyState

Option to save the state of the explorative policy, specified as a logical value. If you set the PolicyState field to false, then the state of the explorative policy (which is a hidden agent property) is not saved along with the agent. In this case, the state of the saved agent is not identical to the state of the original agent.

Example: true

Object Functions

rlSARSAAgent    SARSA reinforcement learning agent

Examples

Create an rlSARSAAgentOptions object that specifies the agent sample time.

opt = rlSARSAAgentOptions(SampleTime=0.5)
opt = 
  rlSARSAAgentOptions with properties:

                  SampleTime: 0.5000
              DiscountFactor: 0.9900
    EpsilonGreedyExploration: [1x1 rl.option.EpsilonGreedyExploration]
      CriticOptimizerOptions: [1x1 rl.option.rlOptimizerOptions]
                  InfoToSave: [1x1 struct]

You can modify options using dot notation. For example, set the agent discount factor to 0.95.

opt.DiscountFactor = 0.95;
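As a further sketch, you can pass the options object to rlSARSAAgent when constructing the agent. The observation and action specifications and the table-based critic below are illustrative assumptions, not part of this page:

% Illustrative discrete observation and action spaces.
obsInfo = rlFiniteSetSpec(1:4);
actInfo = rlFiniteSetSpec(1:2);

% Table-based Q-value critic for the SARSA agent.
qTable = rlTable(obsInfo,actInfo);
critic = rlQValueFunction(qTable,obsInfo,actInfo);

% Create the SARSA agent using the customized options.
agent = rlSARSAAgent(critic,opt);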

Version History

Introduced in R2019a