Create SARSA reinforcement learning agent
Syntax


agent = rlSARSAAgent(critic)
agent = rlSARSAAgent(critic,opt)


Description

agent = rlSARSAAgent(critic) creates a SARSA agent with default options and the specified critic representation. For more information on SARSA agents, see SARSA Agents.


agent = rlSARSAAgent(critic,opt) creates a SARSA agent using the specified agent options to override the agent defaults.


Examples

Create an environment interface.

env = rlPredefinedEnv("BasicGridWorld");

Create a critic value function representation using a Q table derived from the environment observation and action specifications.

qTable = rlTable(getObservationInfo(env),getActionInfo(env));
critic = rlRepresentation(qTable);

Create a SARSA agent using the specified critic value function, setting the epsilon value of the epsilon-greedy exploration to 0.05.

opt = rlSARSAAgentOptions;
opt.EpsilonGreedyExploration.Epsilon = 0.05;
agent = rlSARSAAgent(critic,opt);
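
Once created, the agent can be trained against the environment using train and rlTrainingOptions. A minimal sketch follows; the episode count, step limit, and stopping threshold are illustrative values, not defaults from this page.

```matlab
% Configure training: stop after 200 episodes at most, or earlier if the
% average episode reward reaches 10 (illustrative thresholds).
trainOpts = rlTrainingOptions(...
    'MaxEpisodes',200, ...
    'MaxStepsPerEpisode',50, ...
    'StopTrainingCriteria','AverageReward', ...
    'StopTrainingValue',10);

% Train the SARSA agent against the grid world environment.
trainStats = train(agent,env,trainOpts);
```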

Input Arguments


critic — Critic network representation, specified as an rlTableRepresentation object created using rlRepresentation. For more information on creating critic representations, see Create Policy and Value Function Representations.

opt — Agent options, specified as an rlSARSAAgentOptions object.

Output Arguments


agent — SARSA agent, returned as an rlSARSAAgent object.

Introduced in R2019a