Tune PI Controller Using Reinforcement Learning
嘻嘻
on 18 Oct 2023
Answered: Emmanouil Tzorakoleftherakis
on 23 Oct 2023
How is the initial value of the weights of this neural network determined? If I want to change my PI controller to a PID controller, do I just add another weight to the line initialGain = single([1e-3 2])?
This code is from the demo "Tune PI Controller Using Reinforcement Learning."
initialGain = single([1e-3 2]);
actorNet = [
    featureInputLayer(numObs)
    fullyConnectedPILayer(initialGain,'ActOutLyr')
    ];
actorNet = dlnetwork(actorNet);
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
Can my network be changed to look like the following:
actorNet = [
    featureInputLayer(numObs)
    fullyConnectedPILayer(randi([-60,60],1,3),'Action')];
Accepted Answer
Emmanouil Tzorakoleftherakis
on 23 Oct 2023
I also replied to the other thread. The fullyConnectedPILayer is a custom layer provided in the example; you can open it and see how it is implemented. So you can certainly add a third weight for the D term, but you will most likely run into other issues (e.g., how to approximate the error derivative).
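For illustration only, here is a minimal sketch of what a three-gain (PID) actor definition could look like, assuming the custom fullyConnectedPILayer from the example has been edited to accept a 1-by-3 weight vector and that the observation vector (and numObs) has been extended to include an estimate of the error derivative; the gain values below are placeholders, not recommendations:
% Sketch only: assumes fullyConnectedPILayer is modified to take three weights
% and that the observations now include an error-derivative signal.
initialGain = single([1e-3 2 1e-3]);   % illustrative initial [I P D] gains
actorNet = [
    featureInputLayer(numObs)          % numObs must match the extended observation vector
    fullyConnectedPILayer(initialGain,'ActOutLyr')
    ];
actorNet = dlnetwork(actorNet);
actor = rlContinuousDeterministicActor(actorNet,obsInfo,actInfo);
As in the original example, these initial values only seed the layer weights; the reinforcement learning agent then tunes them during training.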