Get critic representation from reinforcement learning agent


critic = getCritic(agent)



critic = getCritic(agent) returns the critic representation object for the specified reinforcement learning agent.



Assume that you have an existing trained reinforcement learning agent, agent.

Obtain the critic representation from the agent.

critic = getCritic(agent);

Obtain the learnable parameters from the critic.

params = getLearnableParameters(critic);

Modify the parameter values. For this example, simply multiply all of the parameters by 2.

modifiedParams = cellfun(@(x) x*2,params,'UniformOutput',false);
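The cellfun call applies the doubling function to every parameter array in the cell array, and 'UniformOutput',false returns the results as a cell array rather than attempting to concatenate them. A standalone sketch of the same operation on toy data (not actual agent parameters):

```matlab
% Illustration of the doubling step on a small cell array of matrices.
toyParams = {[1 2; 3 4],[5; 6]};
doubled = cellfun(@(x) x*2,toyParams,'UniformOutput',false);
% doubled{1} is [2 4; 6 8] and doubled{2} is [10; 12]
```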

Set the parameter values of the critic to the new modified values.

critic = setLearnableParameters(critic,modifiedParams);

Set the critic in the agent to the new modified critic.

agent = setCritic(agent,critic);
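To confirm that the agent now uses the modified critic, you can extract the critic again and compare its parameters with the values you set (a sketch that assumes the variables from the preceding steps are in the workspace):

```matlab
% Read the critic back from the updated agent and inspect its parameters.
updatedCritic = getCritic(agent);
updatedParams = getLearnableParameters(updatedCritic);
% updatedParams should match modifiedParams if the update succeeded.
```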

Assume that you have an existing reinforcement learning agent, agent.

Further, assume that this agent has a critic representation that contains the following deep neural network structure.

originalCritic = [
        imageInputLayer([4 1 1],'Normalization','none','Name','state')
        fullyConnectedLayer(1,'Name','CriticFC')];   % output layer shown for illustration

Create a critic representation with an additional fully connected layer.

% Layer sizes and names are illustrative; obsInfo is the observation
% specification for the agent's environment, assumed to be in the workspace.
criticNetwork = [
        imageInputLayer([4 1 1],'Normalization','none','Name','state')
        fullyConnectedLayer(3,'Name','CriticFC')
        fullyConnectedLayer(1,'Name','CriticOutput')];

critic = rlRepresentation(criticNetwork,'Observation',{'state'},...
        obsInfo);

Set the critic representation of the agent to the new augmented critic.

agent = setCritic(agent,critic);
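As in the previous example, you can verify the replacement by extracting the critic from the agent again; the augmented critic has additional learnable parameter arrays (a sketch that assumes the variables defined above):

```matlab
% Extract the critic back out of the agent to confirm the replacement.
augmentedCritic = getCritic(agent);
augmentedParams = getLearnableParameters(augmentedCritic);
numel(augmentedParams)   % number of learnable parameter arrays in the new critic
```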

Input Arguments


Reinforcement learning agent that contains a critic representation, specified as an agent object that uses a critic, such as an rlDQNAgent or rlDDPGAgent object.

Output Arguments


Critic representation object, returned as one of the following:

  • rlLayerRepresentation object for deep neural network representations

  • rlTableRepresentation object for value table or Q table representations

Introduced in R2019a