AI with Model-Based Design: Reduced-Order Modeling
Martin Büchel, Senior Application Engineer, MathWorks
During vehicle development, high-fidelity models such as those based on finite element analysis, computer-aided engineering, and computational fluid dynamics are created for a variety of components. However, these high-fidelity models are not suitable for all stages of the development process. For example, a finite element analysis model that is useful for detailed component design will be too slow to include in system-level simulations for verifying your control system or to perform system analyses that require many simulation runs. Similarly, a high-fidelity model for the thermal behavior of a battery will be too slow to run in real time on your embedded system.
Does this mean you have to start from scratch to create faster approximations of your high-fidelity models? This is where reduced-order modeling (ROM) comes to the rescue. ROM is a set of computational techniques that helps you reuse your high-fidelity models to create faster-running, lower-fidelity approximations.
This talk focuses on AI-based ROM techniques and methods and how they can be leveraged for Model-Based Design. Discover how to leverage the Simulink® add-on for reduced-order modeling to set up design of experiments, generate input-output data, and train and evaluate suitable reduced-order models using preconfigured templates that cover various ROM techniques. Learn how to integrate these AI models into your Simulink simulations, whether for hardware-in-the-loop testing or deployment to embedded systems for virtual sensor applications. Explore the pros and cons of different ROM approaches to help you choose the best one for your next project.
Published: 3 Jun 2024
I'm very excited to be here with you today to talk about AI with model-based design. More specifically, I want to dive into a topic called reduced-order modeling. Now, the challenge many engineers are facing is that high-fidelity models might be too slow to calculate for their particular use case.
This is where artificial intelligence techniques can be used to create faster reduced-order models, or ROMs for short. MATLAB and Simulink enable engineers to create ROMs without much prior AI knowledge.
Now, what is reduced-order modeling? Imagine you're given a full-order, high-fidelity model. This might, for example, be built in a finite element analysis or CFD tool, and it is slow to calculate. Reduced-order modeling means creating a simplified representation of this full-order model, the reduced-order model, which should have reduced computational complexity.

But at the same time, it should preserve the dominant behavior. In practice, finding a reduced-order model from the full-order model is a trade-off between accuracy, or fidelity, and computation speed, the inference speed of that model.
There are several ways to achieve reduced-order models, and AI-based, data-driven methods are just one set of tools you can use. With these, you learn the input/output behavior of the system with an AI model using machine learning techniques.
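To make the idea concrete, here is a minimal, purely illustrative sketch (not the MathWorks tooling; the model and all names are made up): a slow "high-fidelity" function is sampled over its input range, and a one-weight model y ≈ w·u² is fitted to the collected input/output data, giving a much cheaper approximation.

```python
# Toy data-driven ROM: sample a slow model, fit a fast surrogate.
def high_fidelity(u):
    # Stand-in for an expensive FEA/CFD evaluation: y = u^2 * sum(1/k^2).
    return sum((u * u) / (k * k) for k in range(1, 2000))

# 1. Design of experiments: cover the input range with sample points.
inputs = [i * 0.1 for i in range(1, 21)]            # u in (0, 2]
# 2. Run the experiments once, offline, to collect training data.
outputs = [high_fidelity(u) for u in inputs]

# 3. "Train" the reduced-order model: least-squares fit of y ~ w * u^2.
w = sum(y * u * u for u, y in zip(inputs, outputs)) / sum(u ** 4 for u in inputs)
rom = lambda u: w * u * u                           # fast surrogate

print(abs(rom(1.23) - high_fidelity(1.23)) < 1e-9)  # True: tiny fit error
```

A real workflow replaces the one-weight fit with a neural network, but the three steps — design the experiments, collect input/output data, fit the surrogate — are the same.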
Other techniques exist as well, like linearization or physics-based modeling techniques. But today, we're focusing only on the AI-driven part. I just wanted to mention that other sets of techniques are also available.
One of the most prominent use cases for reduced-order modeling is that engineers want to perform software-in-the-loop, hardware-in-the-loop, or processor-in-the-loop tests. These, of course, require real-time-capable models.
Very often, you have a plant model, and if only one component of it is too slow to calculate, you cannot run it in real time. That is where you can leverage ROM techniques, including AI-based ROM techniques, to substitute this full-order model. Then you're able to perform hardware-in-the-loop testing on your real-time machine, for example, to test it against a control algorithm you want to deploy onto your target hardware.
Another use case is virtual sensor modeling. Here, a virtual sensor can again be learned using machine learning techniques and might then be used as an input for a controller.
A third use case is, for example, when you have a control design that also requires a prediction model. This is especially the case for nonlinear model predictive control, where, as the name already says, you want to predict several time steps into the future with that model, and the optimizer needs to iteratively solve the optimal control problem.
This requires calling the prediction model many times in each time step to iterate and converge to the optimal solution. This is why the model has to be even faster than real time for this use case. Again, reduced-order models can be used here to serve as the prediction model in the MPC.
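The following hedged sketch (toy system, hypothetical names, not a real MPC implementation) shows why: even a crude one-step controller below evaluates the prediction model hundreds of times before emitting a single control input.

```python
# Toy "MPC-like" step: pick the candidate input whose predicted
# trajectory ends closest to the target.
def predict(x, u, horizon=10):
    # Toy prediction model: Euler-simulate `horizon` steps of x' = -x + u.
    for _ in range(horizon):
        x = x + 0.1 * (-x + u)
    return x

def mpc_step(x, target):
    # Evaluate every candidate input over the horizon, keep the best.
    candidates = [i * 0.1 for i in range(-20, 21)]
    return min(candidates, key=lambda u: (predict(x, u) - target) ** 2)

u = mpc_step(x=0.0, target=1.0)
# 41 candidates x 10-step horizon = 410 model evaluations for ONE
# controller step -- hence the prediction model must be far faster
# than real time.
print(u)  # 1.5
```

A real nonlinear MPC solver iterates a gradient-based optimizer instead of grid search, but the evaluation-count argument is the same.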
In the remainder of this talk, I want to dive into an example with you where we want to replace a high-fidelity jet engine turbine blade model with an AI-based, reduced-order model. It's about a closed-loop temperature control design, where we start with a Simulink simulation with the following blocks.
We have one block providing temperature and pressure conditions. These are then used in the controller but also in the plant model. The plant model is a finite element analysis model which computes the maximum tip displacement of that turbine blade. The use case is to design this closed-loop temperature control and use this tip displacement as a constraint in the MPC. The visualization part is also there for, as it says, visualization purposes.
Now, the jet engine turbine blade component computes the displacement of the blade tip by first solving the transient heat equation to compute the temperature distribution of the blade. Then the structural equations are solved to compute the deformation, from which the maximum displacement can be derived. The deformation is due to a combination of thermal expansion and pressure.
In this example, the plant model is modeled in PDE Toolbox. I want to highlight that it could also be imported from any third-party finite element analysis or CFD tool, using, for example, functional mock-up units or S-functions.
Now, I think I don't have to mention that in this case, the computational time needed to solve this component is far too long to run it in real time. That's why it's suitable neither for control design nor for HIL testing, and why we want to replace it with an AI model that takes ambient temperature, pressure, and cooling temperature as inputs to calculate the output, the maximum displacement.
Now, to train this AI model, we need data. To create this data, we can use the Simulink model and vary the inputs to cover the input space as well as possible, so that we collect sufficient data on which to train the model.
To help with this process and its work steps, we are happy to introduce a new Simulink add-on especially designed for reduced-order modeling, which you can download from our home page. The workflow for this reduced-order modeling app is as follows.
We first need to design the experiments. That is: how do we excite the system? How do we vary the inputs so that we cover the input space as well as we can, as already said? Then we have to run the experiments.
This means running simulations to collect the input/output data. The input/output data is then used to train one or more reduced-order models, from which you select the best one and export it. You can then import it into Simulink and easily deploy it to hardware, for example onto your embedded device for hardware-in-the-loop testing, and more.
Let's go through this workflow once more from the beginning, in a bit more detail. For the design of experiments, you first have to define the inputs and the outputs so that the app knows which data it should train on. You can do this easily by selecting signals in the Simulink model and marking them as inputs, and so on.
But you can define not just the inputs and outputs of the reduced-order model but also so-called simulation inputs, which can be used separately to excite the system. This is especially interesting for use cases where the ROM component is part of a closed loop and you cannot, or may not want to, excite this part of the system directly, but instead have the reference values of the controller do that.
In the next step, you can define whether you want to replace or perturb these inputs, and you have to define the range over which each input should be varied to cover the input space.
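An illustrative design-of-experiments step might look like the following sketch (the input names and ranges are hypothetical, and the app's actual sampling strategy may differ — this just shows the "define ranges, then sample them" idea with plain uniform sampling):

```python
# Toy design of experiments: draw input samples within declared ranges.
import random

random.seed(0)  # reproducible sampling

# Hypothetical ROM inputs and the ranges over which to vary them.
ranges = {"ambient_temp": (250.0, 320.0), "pressure": (0.5, 1.5)}

experiments = [
    {name: random.uniform(lo, hi) for name, (lo, hi) in ranges.items()}
    for _ in range(100)
]

# Every sampled value stays inside its declared range.
assert all(
    ranges[k][0] <= v <= ranges[k][1] for e in experiments for k, v in e.items()
)
print(len(experiments))  # 100
```

Space-filling schemes such as Latin hypercube sampling cover the input space more evenly than plain uniform draws, which matters when each simulation run is expensive.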
Now, to run the experiments: as already said, it takes quite a long time to collect the data from the original model. That's why you might want to parallelize it and leverage parallel resources, either on your own computer or, for example, in the cloud. So you can set the option to use parallelization and then start the simulations.
And now we are, of course, jumping directly to the result; again, this takes a while. What we see is the input/output data in a graph showing inputs and outputs over time. This data can now be treated as labeled data for training a neural network in a supervised fashion.
The app currently provides three different types of neural networks. They all have in common that they're capable of capturing not only the static behavior but also the transient, dynamic behavior of the system. These are LSTMs, nonlinear ARX models, and a type called neural state-space models, which is the one I want to dive a bit deeper into on the next slide.
So what are neural state-space models, which are also known as neural ODEs? They're based on a representation of the system that is very well known in control design, the state-space representation x' = f(x, u), y = g(x, u), with a state function f and an output function g, as in the system of equations on the left.
Now, the idea of neural ODEs is to approximate those functions f and g using a state network and an output network and then train the weights of those networks using the input/output data collected before. While the parameters, the weights of these networks, can be trained automatically through optimization, there are other parameters, called hyperparameters, which you typically want to predefine.
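A minimal sketch of fitting the state function f from trajectory data (heavily simplified and purely illustrative: f here is linear with two weights, whereas a real neural ODE uses a neural network and an ODE solver; the plant and names are made up):

```python
# Toy "state function learning": recover f in x' = f(x, u) from data.
dt = 0.1

# Collected input/output data: a trajectory of the "true" plant x' = -2x + u.
x, xs, us = 0.0, [], []
for _ in range(50):
    xs.append(x)
    us.append(1.0)
    x = x + dt * (-2.0 * x + 1.0)   # Euler step of the true dynamics
xs.append(x)

# Fit x' ~ w0*x + w1*u by least squares on finite-difference targets.
ys = [(xs[t + 1] - xs[t]) / dt for t in range(50)]
sxx = sum(v * v for v in xs[:50]);  sxu = sum(v * u for v, u in zip(xs, us))
suu = sum(u * u for u in us);       sxy = sum(v * y for v, y in zip(xs, ys))
suy = sum(u * y for u, y in zip(us, ys))
det = sxx * suu - sxu * sxu
w0 = (sxy * suu - sxu * suy) / det
w1 = (sxx * suy - sxu * sxy) / det
print(round(w0, 3), round(w1, 3))   # recovers -2.0 and 1.0
```

Swapping the two-weight linear map for a neural network, and the closed-form least squares for gradient-based training through an ODE solver, gives the neural state-space setup described above.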
The hyperparameters, for example, could be: How many layers do I have in my network? How many neurons do I have in each of those layers? Things like that. You might want to predefine a set of combinations of those hyperparameters and then run hyperparameter sweeps, training different models to find out which one is the best. So it's an empirical science.
Now, the app makes it easy to define those hyperparameter sweeps and then let them run in parallel, locally or in the cloud, if you have such resources available. You get a nice overview of the training progress, and once the training is done, you can inspect the test metric and compare models to find the best one to choose for deployment in the next step.
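The sweep itself is conceptually just a grid evaluation, as in this hedged sketch (the grid values, the scoring function, and all names are hypothetical; real training replaces the stand-in metric):

```python
# Toy hyperparameter sweep: score every combination, keep the best.
from itertools import product

grid = {"layers": [1, 2, 3], "neurons": [8, 16, 32]}

def train_and_score(layers, neurons):
    # Stand-in for "train a model, return its test metric" (lower is
    # better); a real sweep would train one network per combination.
    return abs(layers - 2) + abs(neurons - 16) / 16

results = {
    combo: train_and_score(*combo)
    for combo in product(grid["layers"], grid["neurons"])
}
best = min(results, key=results.get)
print(best)  # (2, 16) minimizes the toy metric
```

With independent combinations, the dictionary comprehension is exactly the part that parallelizes across local workers or cloud machines.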
Now you're only one mouse click away from exporting the network. You can then easily import it into Simulink by dragging in the corresponding block, in this case for neural state-space models, and telling it where to find the trained parameters. Then you're set to go: perform simulations and deploy your model onto hardware.
Looking at the results of this simulation, where we compare the high-fidelity model with the neural state-space model, we can see that the ROM captures the dynamics of the original model quite well. We can also see that, for the entire simulation, the high-fidelity model takes 30,000 seconds to calculate but the neural state-space model only 30 milliseconds. That's six orders of magnitude faster than the original one.
And this is exactly what we wanted to achieve: a fast model that still captures the dynamics with quite good accuracy. We can now deploy it in the controller and generate code for the target platform, as well as run the plant model, for example, on a real-time computer and then, together with Simulink, run the hardware-in-the-loop tests, monitor signals, adjust parameters, and so on.
With this, I'm already at the end of my presentation. I just want to summarize that it is often a challenge that high-fidelity models are too slow to calculate for your particular use case, but there are AI techniques that can help you create faster reduced-order models, and our tool set enables engineers to create ROMs without prior AI knowledge. Now I'm looking forward to your questions, and I thank you for your attention.
[APPLAUSE]