Deep Deterministic Policy Gradient Agents

The deep deterministic policy gradient (DDPG) algorithm is a model-free, online, off-policy reinforcement learning method [1]. A DDPG agent is an actor-critic reinforcement learning agent that computes an optimal policy that maximizes the long-term reward.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

DDPG agents can be trained in environments with the following observation and action spaces.

Observation Space          Action Space
----------------------     ------------
Continuous or discrete     Continuous
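
For example, in MATLAB you can describe these spaces with specification objects. The following minimal sketch assumes the Reinforcement Learning Toolbox functions rlNumericSpec and rlFiniteSetSpec (not shown elsewhere in this topic) and uses placeholder dimensions and limits.

  % Sketch: observation and action specifications for a DDPG agent.
  % The dimensions and limits are placeholders for your environment.
  obsInfo = rlNumericSpec([4 1]);                 % continuous, 4-element observation
  actInfo = rlNumericSpec([1 1], ...
      'LowerLimit',-1,'UpperLimit',1);            % continuous, bounded scalar action

  % A discrete observation space would instead use rlFiniteSetSpec, but the
  % action space of a DDPG agent must be continuous.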

During training, a DDPG agent:

  • Updates the actor and critic properties at each time step during learning.

  • Stores past experience using a circular experience buffer. The agent updates the actor and critic using a mini-batch of experiences randomly sampled from the buffer (see the sketch after this list).

  • Perturbs the action chosen by the policy using a stochastic noise model at each training step.
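
The following sketch illustrates the noisy action selection, the circular experience buffer, and the mini-batch sampling in plain MATLAB. It is an illustration only, not the toolbox's internal implementation; the policy, noise model, and environment here are toy placeholders.

  % Illustrative sketch of noisy action selection, the circular experience
  % buffer, and mini-batch sampling. All names below are toy placeholders.
  obsDim = 4; actDim = 1; numSteps = 200;
  Wa         = randn(actDim, obsDim);
  policy     = @(S) tanh(Wa*S);                       % placeholder for mu(S)
  noiseModel = @() 0.1*randn(actDim, 1);              % placeholder exploration noise
  envStep    = @(A) deal(randn(obsDim,1), -norm(A));  % placeholder environment

  capacity  = 10000;                % experience buffer length
  buffer    = cell(capacity, 4);    % columns: S, A, R, S'
  writeIdx  = 0;
  numStored = 0;
  M = 64;                           % mini-batch size (MiniBatchSize)

  S = randn(obsDim, 1);
  for t = 1:numSteps
      A = policy(S) + noiseModel();             % perturb the policy action
      [Snext, R] = envStep(A);                  % apply the action

      % Store (S,A,R,S'), overwriting the oldest entry when the buffer is full
      writeIdx = mod(writeIdx, capacity) + 1;
      buffer(writeIdx, :) = {S, A, R, Snext};
      numStored = min(numStored + 1, capacity);

      % Sample a random mini-batch (with replacement, for simplicity)
      if numStored >= M
          batchIdx  = randi(numStored, [M 1]);
          miniBatch = buffer(batchIdx, :);      % used for the actor/critic update
      end
      S = Snext;
  end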

Actor and Critic Functions

To estimate the policy and value function, a DDPG agent maintains four function approximators:

  • Actor μ(S) — The actor takes observation S and outputs the corresponding action that maximizes the long-term reward.

  • Target actor μ'(S) — To improve the stability of the optimization, the agent periodically updates the target actor based on the latest actor parameter values.

  • Critic Q(S,A) — The critic takes observation S and action A as inputs and outputs the corresponding expectation of the long-term reward.

  • Target critic Q'(S,A) — To improve the stability of the optimization, the agent periodically updates the target critic based on the latest critic parameter values.

Both Q(S,A) and Q'(S,A) have the same structure and parameterization, and both μ(S) and μ'(S) have the same structure and parameterization.

When training is complete, the trained optimal policy is stored in actor μ(S).
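
As a conceptual sketch (not the toolbox representation objects), the four approximators can be viewed as two parameterized functions plus copies of their parameters. Here the actor and critic are deliberately simple models with placeholder dimensions.

  % Conceptual sketch of the four approximators using simple models.
  obsDim = 4; actDim = 1;

  Wa = randn(actDim, obsDim);          % actor parameters, theta_mu
  Wc = randn(1, obsDim + actDim);      % critic parameters, theta_Q

  actor  = @(S)    tanh(Wa*S);         % mu(S):  observation -> action
  critic = @(S, A) Wc*[S; A];          % Q(S,A): observation, action -> value

  % The target actor and target critic start as copies of these parameters
  % and are then updated by smoothing or periodically during training.
  WaTarget = Wa;                       % theta_mu' = theta_mu
  WcTarget = Wc;                       % theta_Q'  = theta_Q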

For more information on creating actors and critics for function approximation, see Create Policy and Value Function Representations.

Agent Creation

To create a DDPG agent:

  1. Create an actor representation object.

  2. Create a critic representation object.

  3. Specify agent options using the rlDDPGAgentOptions function.

  4. Create the agent using the rlDDPGAgent function.

For more information, see rlDDPGAgent and rlDDPGAgentOptions.
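
For example, assuming that actor and critic representation objects have already been created in steps 1 and 2, the last two steps might look like the following sketch. The option values are placeholders, and the exact constructor syntax can vary between releases.

  % Sketch: specify agent options, then create the agent. Assumes the
  % variables actor and critic hold previously created representations.
  opt = rlDDPGAgentOptions;
  opt.DiscountFactor     = 0.99;       % gamma in the value function target
  opt.MiniBatchSize      = 64;         % M, the number of sampled experiences
  opt.TargetSmoothFactor = 1e-3;       % tau for target smoothing updates

  agent = rlDDPGAgent(actor, critic, opt);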

Training Algorithm

DDPG agents use the following training algorithm, in which they update their actor and critic models at each time step. To configure the training algorithm, specify options using rlDDPGAgentOptions.

  • Initialize the critic Q(S,A) with random parameter values θQ, and initialize the target critic with the same random parameter values: θQ'=θQ.

  • Initialize the actor μ(S) with random parameter values θμ, and initialize the target actor with the same parameter values: θμ'=θμ.

  • For each training time step:

    1. For the current observation S, select action A = μ(S) + N, where N is stochastic noise from the noise model. To configure the noise model, use the NoiseOptions option.

    2. Execute action A. Observe the reward R and next observation S'.

    3. Store the experience (S,A,R,S') in the experience buffer.

    4. Sample a random mini-batch of M experiences (Si,Ai,Ri,Si') from the experience buffer. To specify M, use the MiniBatchSize option.

    5. If Si' is a terminal state, set the value function target yi to Ri. Otherwise, set it to:

      y_i = R_i + \gamma \, Q'\left(S_i', \mu'(S_i' \mid \theta_{\mu'}) \mid \theta_{Q'}\right)

      The value function target is the sum of the experience reward Ri and the discounted future reward. To specify the discount factor γ, use the DiscountFactor option.

      To compute the cumulative reward, the agent first computes a next action by passing the next observation Si' from the sampled experience to the target actor. The agent finds the cumulative reward by passing the next action to the target critic.

    6. Update the critic parameters by minimizing the loss L across all sampled experiences (see the consolidated sketch of steps 5 through 8 after this list).

      L = \frac{1}{M} \sum_{i=1}^{M} \left( y_i - Q(S_i, A_i \mid \theta_Q) \right)^2

    7. Update the actor parameters using the following sampled policy gradient to maximize the expected discounted reward.

      \nabla_{\theta_\mu} J \approx \frac{1}{M} \sum_{i=1}^{M} G_{ai} \, G_{\mu i}

      G_{ai} = \nabla_A Q(S_i, A \mid \theta_Q) \quad \text{where } A = \mu(S_i \mid \theta_\mu)

      G_{\mu i} = \nabla_{\theta_\mu} \mu(S_i \mid \theta_\mu)

      Here, Gai is the gradient of the critic output with respect to the action computed by the actor network, and Gμi is the gradient of the actor output with respect to the actor parameters. Both gradients are evaluated for observation Si.

    8. Update the target actor and critic depending on the target update method (smoothing or periodic). To select the update method, use the TargetUpdateMethod option.

      \theta_{Q'} = \tau \theta_Q + (1 - \tau) \theta_{Q'}, \qquad \theta_{\mu'} = \tau \theta_\mu + (1 - \tau) \theta_{\mu'} \quad \text{(smoothing)}

      \theta_{Q'} = \theta_Q, \qquad \theta_{\mu'} = \theta_\mu \quad \text{(periodic)}

      By default, the agent uses target smoothing and updates the target actor and critic at every time step using smoothing factor τ. To specify the smoothing factor, use the TargetSmoothFactor option. Alternatively, you can update the target actor and critic periodically. To specify the number of time steps between target updates, use the TargetUpdateFrequency option.
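
The following sketch consolidates steps 5 through 8 for one sampled mini-batch. It uses deliberately simple linear actor and critic models so that every gradient has a closed form; it illustrates the equations above rather than the networks and optimizer that the toolbox actually uses, and all data, dimensions, and learning rates are placeholders.

  % One mini-batch update (steps 5-8) with linear approximators so the
  % gradients are explicit. Placeholders only; not the toolbox implementation.
  obsDim = 4; actDim = 1; M = 64;
  gamma = 0.99; tau = 1e-3; lrCritic = 1e-3; lrActor = 1e-4;

  % Parameters: actor mu(S) = Wa*S, critic Q(S,A) = Wc*[S;A]
  Wa = randn(actDim, obsDim);    WaT = Wa;     % theta_mu and theta_mu'
  Wc = randn(1, obsDim+actDim);  WcT = Wc;     % theta_Q  and theta_Q'

  % Sampled mini-batch (random placeholders standing in for buffer samples)
  Si  = randn(obsDim, M);   Ai  = randn(actDim, M);
  Ri  = randn(1, M);        Sip = randn(obsDim, M);    % S_i'
  isTerminal = false(1, M);

  % Step 5: value function targets y_i
  Aip = WaT*Sip;                          % target actor action mu'(S_i')
  yi  = Ri + gamma*(WcT*[Sip; Aip]);      % bootstrap from the target critic
  yi(isTerminal) = Ri(isTerminal);        % terminal samples use the reward only

  % Step 6: critic update by gradient descent on the loss L
  Qi    = Wc*[Si; Ai];
  dLdWc = -(2/M)*(yi - Qi)*[Si; Ai]';     % gradient of L w.r.t. theta_Q
  Wc    = Wc - lrCritic*dLdWc;

  % Step 7: actor update by gradient ascent on the sampled policy gradient
  Gai   = Wc(:, obsDim+1:end)';           % dQ/dA (constant for a linear critic)
  gradJ = Gai*mean(Si, 2)';               % chain rule through mu(S) = Wa*S
  Wa    = Wa + lrActor*gradJ;

  % Step 8: target smoothing update with factor tau
  WcT = tau*Wc + (1 - tau)*WcT;
  WaT = tau*Wa + (1 - tau)*WaT;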

For simplicity, the actor and critic updates in this algorithm are shown as a gradient update using basic stochastic gradient descent. The actual gradient update method depends on the optimizer you specify using rlRepresentationOptions.

References

[1] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. “Continuous control with deep reinforcement learning,” International Conference on Learning Representations, 2016.
