rlQAgent

Create Q-learning reinforcement learning agent

Syntax

agent = rlQAgent(critic)
agent = rlQAgent(critic,opt)

Description

agent = rlQAgent(critic) creates a Q-learning agent with default options and the specified critic representation. For more information on Q-learning agents, see Q-Learning Agents.


agent = rlQAgent(critic,opt) creates a Q-learning agent using the specified agent options to override the agent defaults.
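As background (a standard formulation of Q-learning, not specific to this function), the agent updates the critic's Q-table after each environment step using the rule

$$Q(S,A) \leftarrow Q(S,A) + \alpha\left[R + \gamma \max_{a} Q(S',a) - Q(S,A)\right]$$

where $\alpha$ is the critic learning rate, $\gamma$ is the discount factor, $R$ is the reward, and $S'$ is the next observation. Both $\alpha$ and $\gamma$ are configurable through the agent and critic options.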

Examples


Create an environment interface.

env = rlPredefinedEnv("BasicGridWorld");

Create a critic value function representation using a Q table derived from the environment observation and action specifications.

qTable = rlTable(getObservationInfo(env),getActionInfo(env));
critic = rlRepresentation(qTable);

Create a Q-learning agent using the specified critic value function and an agent options object that sets the epsilon-greedy exploration probability to 0.05.

opt = rlQAgentOptions;
opt.EpsilonGreedyExploration.Epsilon = 0.05;
agent = rlQAgent(critic,opt);
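To sanity-check that the agent is wired to the environment, you can query it for an action given a sample observation. This is an illustrative sketch: it assumes `getAction` (the standard agent query function in Reinforcement Learning Toolbox) and uses observation state `1`, one of the discrete states of the basic grid world.

```matlab
% Query the action the agent selects for grid-world state 1.
% The observation is passed as a cell array, matching the
% environment's observation specification.
act = getAction(agent,{1});
```

Because the agent uses epsilon-greedy exploration, repeated calls may occasionally return a random action rather than the greedy one.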

Input Arguments


critic — Critic network representation, specified as an rlTableRepresentation object created using rlRepresentation. For more information on creating critic representations, see Create Policy and Value Function Representations.

opt — Agent options, specified as an rlQAgentOptions object.

Output Arguments


agent — Q-learning agent, returned as an rlQAgent object.

Introduced in R2019a