setCritic

Set critic representation of reinforcement learning agent

Syntax

newAgent = setCritic(oldAgent,critic)

Description


newAgent = setCritic(oldAgent,critic) returns a new reinforcement learning agent, newAgent, that uses the specified critic representation. Apart from the critic representation, the new agent has the same configuration as the specified original agent, oldAgent.
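For instance, the typical workflow is a sketch like the following, where the modification step is a placeholder for whatever change you want to make to the critic:

critic = getCritic(agent);       % extract the current critic representation
% ... modify or rebuild the critic here (placeholder) ...
agent = setCritic(agent,critic); % agent now uses the updated critic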

Examples


Modify Critic Representation

Assume that you have an existing trained reinforcement learning agent, agent.

Obtain the critic representation from the agent.

critic = getCritic(agent);

Obtain the learnable parameters from the critic.

params = getLearnableParameters(critic);

Modify the parameter values. For this example, simply multiply all of the parameters by 2.

modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);

Set the parameter values of the critic to the new modified values.

critic = setLearnableParameters(critic,modifiedParams);

Set the critic in the agent to the new modified critic.

agent = setCritic(agent,critic);
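To confirm the update, you can retrieve the critic from the modified agent and inspect its parameters (an optional sanity check using the same functions as above):

newParams = getLearnableParameters(getCritic(agent));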

Modify Agent Critic Deep Neural Network

Assume that you have an existing reinforcement learning agent, agent.

Further, assume that this agent has a critic representation that contains the following deep neural network structure.

originalCritic = [
        imageInputLayer([4 1 1],'Normalization','none','Name','state')
        fullyConnectedLayer(1,'Name','CriticFC')];

Create a critic representation with an additional fully connected layer.

criticNetwork = [
        imageInputLayer([4 1 1],'Normalization','none','Name','state')
        fullyConnectedLayer(3,'Name','x')
        fullyConnectedLayer(1,'Name','CriticFC')];
critic = rlRepresentation(criticNetwork,'Observation',{'state'},...
    getObservationInfo(env));

Set the critic representation of the agent to the new augmented critic.

agent = setCritic(agent,critic);

Remove Baseline Critic from PG Agent

Assume that you have an existing PG agent, agent, with a baseline critic representation. You can remove the baseline critic from the agent using setCritic.

agent = setCritic(agent,[]);

When you remove the baseline critic in this way, the UseBaseline option of the agent is automatically set to false.
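For instance, assuming the agent exposes its options through an AgentOptions property, you can verify the setting:

agent.AgentOptions.UseBaseline   % assumed property access; returns false (0)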

Add Baseline Critic to PG Agent

Assume that you have an existing PG agent, agent, without a baseline critic representation. You can add a baseline critic to the agent using setCritic.

First, create a critic representation, assuming you have an existing critic network, criticNetwork.

baseline = rlRepresentation(criticNetwork,'Observation',{'state'},...
    getObservationInfo(env));

Then, set the critic in the agent.

agent = setCritic(agent,baseline);

When you add a baseline critic in this way, the UseBaseline option of the agent is automatically set to true.
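Again, assuming the options are accessible through AgentOptions, you can confirm the change:

agent.AgentOptions.UseBaseline   % assumed property access; returns true (1)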

Input Arguments


oldAgent — Original reinforcement learning agent

Original reinforcement learning agent that contains a critic representation, specified as an agent object, such as an rlDQNAgent or rlDDPGAgent object.

critic — Critic representation object

Critic representation object, specified as one of the following:

  • rlLayerRepresentation object for deep neural network representations

  • rlTableRepresentation object for value table or Q table representations

To create a critic representation, use the rlRepresentation function.
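For example, the following sketch builds a deep neural network critic in the same way as the examples above, assuming env is an existing environment object:

criticNetwork = [
        imageInputLayer([4 1 1],'Normalization','none','Name','state')
        fullyConnectedLayer(1,'Name','CriticFC')];
critic = rlRepresentation(criticNetwork,'Observation',{'state'},...
    getObservationInfo(env));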

Output Arguments


newAgent — Updated reinforcement learning agent

Updated reinforcement learning agent, returned as an agent object that uses the specified critic representation. Apart from the critic representation, the new agent has the same configuration as oldAgent.

Introduced in R2019a