Reinforcement Learning and EnergyPlus Toolbox error: cannot set the timestep property of class 'mlep' because it is read only
Hi everyone,
I'm currently doing research on HVAC control using Reinforcement Learning. The HVAC system and the building are modelled in EnergyPlus, while the RL agent is implemented in MATLAB. To connect Simulink and EnergyPlus, I use the EnergyPlus Co-Simulation Toolbox (EnergyPlus Co-simulation Toolbox - File Exchange - MATLAB Central (mathworks.com)).
However, a few error messages appeared while I was training my agent. Here are the error messages:
Error using rl.train.SeriesTrainer/run
An error occurred while running the simulation for model 'uniform' with the following RL agent blocks:
uniform/RL Agent
Error in rl.train.TrainingManager/train (line 429)
run(trainer);
Error in rl.train.TrainingManager/run (line 218)
train(this);
Error in rl.agent.AbstractAgent/train (line 83)
trainingResult = run(trainMgr,checkpoint);
Caused by:
Error using rl.env.internal.reportSimulinkSimError
Cannot restore the operating point of the block 'uniform/EnergyPlus Simulation/mlep System Object'
Error using rl.env.internal.reportSimulinkSimError
MATLAB System block 'uniform/EnergyPlus Simulation/mlep System Object' error occurred when invoking 'loadObjectImpl' method of 'mlep'. The error was thrown from '
'C:\Program Files\MATLAB\R2022a\toolbox\matlab\system\+matlab\System.p' at line 0
'C:\Program Files\MATLAB\R2022a\toolbox\simulink\simulationinput_desktop\+Simulink\+Simulation\+internal\DesktopSimHelper.p' at line 0
'C:\Program Files\MATLAB\R2022a\toolbox\simulink\simulationinput_desktop\+Simulink\+Simulation\+internal\DesktopSimHelper.p' at line 0
'C:\Program Files\MATLAB\R2022a\toolbox\simulink\simulationinput\+Simulink\SimulationInput.p' at line 0
'C:\Program Files\MATLAB\R2022a\toolbox\rl\rl\+rl\+env\+internal\SimulinkSimulator.m' at line 259
'C:\Program Files\MATLAB\R2022a\toolbox\rl\rl\+rl\+env\+internal\SimulinkSimulator.m' at line 171
'C:\Program Files\MATLAB\R2022a\toolbox\multisim\+MultiSim\+internal\runSingleSim.p' at line 0
'C:\Program Files\MATLAB\R2022a\toolbox\multisim\+MultiSim\+internal\SimulationRunnerSerial.p' at line 0
'C:\Program Files\MATLAB\R2022a\toolbox\multisim\+MultiSim\+internal\SimulationRunnerSerial.p' at line 0
'C:\Program Files\MATLAB\R2022a\toolbox\simulink\core\general\+Simulink\SimulationManager.p' at line 0
'C:\Program Files\MATLAB\R2022a\toolbox\simulink\core\general\+Simulink\SimulationManagerEngine.p' at line 0
'C:\Program Files\MATLAB\R2022a\toolbox\rl\rl\+rl\+env\+internal\SimulinkSimulator.m' at line 172
'C:\Program Files\MATLAB\R2022a\toolbox\rl\rl\+rl\+env\+internal\SimulinkSimulator.m' at line 78
'C:\Program Files\MATLAB\R2022a\toolbox\rl\rl\+rl\+env\+internal\AbstractSimulator.m' at line 30
'C:\Program Files\MATLAB\R2022a\toolbox\rl\rl\+rl\+env\@AbstractEnv\runEpisode.m' at line 144
'C:\Program Files\MATLAB\R2022a\toolbox\rl\rl\+rl\+train\SeriesTrainer.m' at line 32
'C:\Program Files\MATLAB\R2022a\toolbox\rl\rl\+rl\+train\TrainingManager.m' at line 429
'C:\Program Files\MATLAB\R2022a\toolbox\rl\rl\+rl\+train\TrainingManager.m' at line 218
'C:\Program Files\MATLAB\R2022a\toolbox\rl\rl\+rl\+agent\@AbstractAgent\train.m' at line 83
'D:\MATLAB Drive\Tesis\Uniform\uniform1.mlx' at line 62'.
Error using rl.env.internal.reportSimulinkSimError
Unable to set the 'timestep' property of class 'mlep' because it is read-only.
Here is the code for the RL agent I'm using:
mdl = 'uniform';

% Action: a single temperature setpoint bounded between 22 and 26 degC
actionInfo = rlNumericSpec([1 1],"LowerLimit",22,"UpperLimit",26);
actionInfo.Name = 'temperatur';
numActions = actionInfo.Dimension(1);

% Observations: seven temperatures and two power signals
observationInfo = rlNumericSpec([9 1]);
observationInfo.Name = 'observations';
observationInfo.Description = 'T1, T2, T3, T4, T5, T6, T7, Pp, Pl';
numObservations = observationInfo.Dimension(1);

% Simulink environment wrapping the RL Agent block
env = rlSimulinkEnv(mdl,[mdl '/RL Agent'],observationInfo,actionInfo);

% Critic: separate state and action paths joined by an addition layer
statePath = [
    featureInputLayer(numObservations,'Normalization','none','Name','State')
    fullyConnectedLayer(50,'Name','CriticStateFC1')
    reluLayer('Name','CriticRelu1')
    fullyConnectedLayer(25,'Name','CriticStateFC2')];
actionPath = [
    featureInputLayer(numActions,'Normalization','none','Name','Action')
    fullyConnectedLayer(25,'Name','CriticActionFC1')];
commonPath = [
    additionLayer(2,'Name','add')
    reluLayer('Name','CriticCommonRelu')
    fullyConnectedLayer(1,'Name','CriticOutput')];
criticNetwork = layerGraph();
criticNetwork = addLayers(criticNetwork,statePath);
criticNetwork = addLayers(criticNetwork,actionPath);
criticNetwork = addLayers(criticNetwork,commonPath);
criticNetwork = connectLayers(criticNetwork,'CriticStateFC2','add/in1');
criticNetwork = connectLayers(criticNetwork,'CriticActionFC1','add/in2');
critic = rlQValueRepresentation(criticNetwork,observationInfo,actionInfo, ...
    'Observation',{'State'},'Action',{'Action'});

% Actor: maps the nine observations to the single setpoint action
actorNetwork = [
    featureInputLayer(numObservations,'Normalization','none','Name','State')
    fullyConnectedLayer(3,'Name','actorFC')
    tanhLayer('Name','actorTanh')
    fullyConnectedLayer(numActions,'Name','Action')];
actor = rlDeterministicActorRepresentation(actorNetwork,observationInfo,actionInfo, ...
    'Observation',{'State'},'Action',{'Action'});

% DDPG agent options (SampleTime = -1 inherits the block sample time)
agentOpts = rlDDPGAgentOptions( ...
    'SampleTime',-1, ...
    'TargetSmoothFactor',1e-3, ...
    'DiscountFactor',0.99, ...
    'MiniBatchSize',64, ...
    'ExperienceBufferLength',1e6);
agentOpts.NoiseOptions.Variance = 0.3;
agentOpts.NoiseOptions.VarianceDecayRate = 1e-5;
agentOpts.NoiseOptions.MeanAttractionConstant = 1.5;
agent = rlDDPGAgent(actor,critic,agentOpts);  % pass agentOpts so the options take effect

% Training options
maxepisodes = 500;
maxsteps = 300;
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',maxepisodes, ...
    'MaxStepsPerEpisode',maxsteps, ...
    'ScoreAveragingWindowLength',20, ...
    'Verbose',false, ...
    'Plots','training-progress', ...
    'StopTrainingCriteria','AverageReward', ...
    'StopTrainingValue',-1);

trainingStats = train(agent,env,trainOpts);
Could anyone please help me with this error? Thank you
Answers (1)
Vatsal
on 27 Mar 2024
Hi,
The error message you are encountering, "Unable to set the 'timestep' property of class 'mlep' because it is read-only," indicates that something attempted to assign to a property of the mlep class that is defined as read-only. In the class source at https://github.com/dostaji4/EnergyPlus-co-simulation-toolbox/blob/master/%40mlep/mlep.m, the 'timestep' property is declared with 'SetAccess=protected' and marked 'Nontunable'. As a result, its value cannot be changed from outside the class once the object has been instantiated; it is meant to be set only within the class's own methods or during initialization, not by users or external functions.
For instance, attempting to set the timestep like so would result in an error because the property is protected:
ep.timestep = 60; % This would cause an error because timestep is protected
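The read-only behaviour comes from the property declaration itself. As a minimal sketch (DemoSim is a hypothetical class used only for illustration, not the actual mlep source), a property declared with SetAccess=protected can be read from outside the class but not assigned:
% In DemoSim.m:
classdef DemoSim
    properties (SetAccess = protected)
        timestep = 600;  % assignable only inside the class's own methods
    end
end
% At the command line:
sim = DemoSim;
t = sim.timestep;   % reading works: GetAccess is public by default
sim.timestep = 60;  % error: you cannot set the read-only property 'timestep'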
To resolve or avoid this error, ensure that there are no lines of code attempting to directly modify the timestep property of an mlep object after its creation.
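If it is not obvious where such an assignment happens, one generic way to locate it (standard MATLAB debugging, nothing specific to this toolbox) is to stop in the debugger at the point where the error is raised:
dbstop if error                               % pause execution wherever an error is thrown
trainingStats = train(agent,env,trainOpts);   % reproduce the error
% When execution stops, the dbstack command shows the chain of calls
% that led to the failing assignment.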
I hope this helps!