rlContinuousDeterministicRewardFunction
Deterministic reward function approximator object for neural network-based environment
Since R2022a
Description
When creating a neural network-based environment using rlNeuralNetworkEnvironment, you can specify the reward function approximator using an rlContinuousDeterministicRewardFunction object. Do so when you do not know a ground-truth reward signal for your environment but you expect the reward signal to be deterministic.
The reward function approximator object uses a deep neural network as an internal approximation model to predict the reward signal for the environment, given one of the following input combinations.
Observations, actions, and next observations
Observations and actions
Actions and next observations
Next observations
To specify a stochastic reward function, use an rlContinuousGaussianRewardFunction object.
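For instance, the following sketch builds a deep neural network that takes observations, actions, and next observations as inputs and outputs a scalar reward prediction. The observation and action dimensions, layer names, and layer sizes are illustrative assumptions, not requirements of this object.

% Illustrative observation and action specifications.
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([1 1]);

% Input paths for observations, actions, and next observations.
obsPath = featureInputLayer(obsInfo.Dimension(1),Name="obs");
actPath = featureInputLayer(actInfo.Dimension(1),Name="act");
nextObsPath = featureInputLayer(obsInfo.Dimension(1),Name="nextObs");

% Common path that concatenates the three inputs and outputs a scalar reward.
commonPath = [
    concatenationLayer(1,3,Name="concat")
    fullyConnectedLayer(64)
    reluLayer
    fullyConnectedLayer(1)
    ];

% Assemble the layers and convert the result to a dlnetwork object.
lgraph = layerGraph(obsPath);
lgraph = addLayers(lgraph,actPath);
lgraph = addLayers(lgraph,nextObsPath);
lgraph = addLayers(lgraph,commonPath);
lgraph = connectLayers(lgraph,"obs","concat/in1");
lgraph = connectLayers(lgraph,"act","concat/in2");
lgraph = connectLayers(lgraph,"nextObs","concat/in3");
net = dlnetwork(lgraph);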
Creation
Syntax
rwdFcnAppx = rlContinuousDeterministicRewardFunction(net,observationInfo,actionInfo,Name=Value)
Description
rwdFcnAppx = rlContinuousDeterministicRewardFunction(net,observationInfo,actionInfo,Name=Value) creates the deterministic reward function approximator object rwdFcnAppx using the deep neural network net and sets the ObservationInfo and ActionInfo properties.
When creating a reward function approximator, you must specify the names of the deep neural network inputs using one of the following combinations of name-value pair arguments.
ObservationInputNames, ActionInputNames, and NextObservationInputNames
ObservationInputNames and ActionInputNames
ActionInputNames and NextObservationInputNames
NextObservationInputNames
You can also specify the UseDevice property using an optional name-value pair argument. For example, to use a GPU for prediction, specify UseDevice="gpu".
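For example, assuming net is a dlnetwork object whose input layers are named "obs", "act", and "nextObs", and that obsInfo and actInfo are the corresponding observation and action specifications (as in the sketch in the description above), a minimal constructor call might look like the following. The UseDevice argument is optional and shown only for illustration.

% Create the deterministic reward function approximator.
rwdFcnAppx = rlContinuousDeterministicRewardFunction(net,obsInfo,actInfo, ...
    ObservationInputNames="obs", ...
    ActionInputNames="act", ...
    NextObservationInputNames="nextObs", ...
    UseDevice="cpu");

Because all three input names are specified here, the approximator expects observations, actions, and next observations when predicting the reward.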
Input Arguments
Properties
Object Functions
rlNeuralNetworkEnvironment | Environment model with deep neural network transition models
Examples
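A minimal sketch of evaluating an approximator created as in the constructor call above is shown below. It uses random observation and action values for illustration and assumes the cell-array input syntax used by the predict object function for approximator objects.

% Predict the reward for random observation, action, and next observation values.
reward = predict(rwdFcnAppx, ...
    {rand(obsInfo.Dimension)}, ...
    {rand(actInfo.Dimension)}, ...
    {rand(obsInfo.Dimension)});

You can then pass rwdFcnAppx, together with transition and is-done function approximators, to rlNeuralNetworkEnvironment to assemble a neural network-based environment model.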
Version History
Introduced in R2022a