Future of Engineering Design in the Age of AI - MATLAB

    Future of Engineering Design in the Age of AI

    Mehran Mestchian, Engineering Director, Control Design Automation, MathWorks

    This keynote explores the transformative impact of AI on engineering design, highlighting its role in fostering innovation beyond traditional bounds. AI's capacity for generating novel solutions is not only enhancing efficiency but also redefining creativity and problem-solving in engineering. We are at a pivotal moment where the essence of design creativity intersects with the need for technological precision, marking a fundamental shift in the design process, the designer's role, and the tools used in engineering. This presentation will focus on three key transformations: the evolution of the design loop, the changing role of designers in an AI-driven world, and the advancement of engineering design tools. Together, these shifts represent a new era in engineering design, where AI acts as both a catalyst for innovation and a bridge between imagination and realization.

    Published: 4 Jun 2024

    I stands for intelligence, right? I really don't like the I in AI. What does intelligence mean? Emotional intelligence is important. So I have a lot of empathy with the people in the back. So if the slides are-- the fonts are too small, just close your eyes.

    [LAUGHTER]

    OK, next slide. I'm here. I'm doing it here. Basically, this is the summary of the whole talk, so you can go to sleep after this one. We have a relationship with AI at MathWorks. I'm going to talk a little bit about that to warm you up. Then I'm going to make the case that AI-driven engineering systems fit well with the concept of model-based design. So I'm going to try to connect them together.

    There's some speculation in this talk about the future of software engineering. And I'm going to make a claim that it actually resembles model-based calibration, which some of you might be familiar with. It's been going on forever in the automotive industry. And, yes, there will be some speed bumps along the way, and I'm going to pick maybe two or three of them, time permitting. And, yes, methods, processes, and then tools, as Christine mentioned.

    So I'm going to talk a little bit about, if the workflows are being AI-enhanced, what does that mean in terms of tools? And I'm going to give you a framework for how we're thinking about it at MathWorks, and how that might apply to you and your own work as well. So that's what it is. Did you know we developed a romantic relationship with AI starting in 1992? Myself, I started my romantic relationship when I was doing my PhD, around 1986. I was working in control systems, and my professor was Mark Davis. He was a stochastic control guy at Imperial College.

    And because he was in stochastic control, you could clearly see what the latest thing was. Back then it was artificial neural networks, which, at least at Imperial College, were competing with fuzzy logic, for those of you who remember. So I have a romantic relationship with AI as well. Anyways, that romantic relationship-- this toolbox-- created a medium for exchanging ideas and developing new algorithms. And so there was a buzz in the research community.

    And 20 years passed, and our love affair at MathWorks became a marriage. Now we're committed to it. It's a lifetime dedication to AI. It's not going to go away for MathWorks. The highlights that you see here-- by the way, my eyes don't see well because I have progressive glasses. So this is all fuzzy to me. I'm making it up as I go. Whatever is highlighted there, I just want to make sure you know that's the area I'm particularly involved in, because I have the overall responsibility for targeting.

    There was a question on Edge AI on the deployment side. So you see a lot of code generation there. But overall, this is a huge investment that MathWorks has been making. Megatrends start slowly, then suddenly. So that's the suddenly part, starting 2015, 2016. And typically, the megatrends are a fusion of technology breakthroughs, innovations-- more than one. And then there always has to be a latent market need for it. So this is what's happening here.

    Some of these products are specific to AI-related activities, and some of them have functionalities that have been enhanced for applications in AI. In my own area, in code generation, we have GPU Coder, which has specific optimizations for applications in AI. We have a whole team dedicated to that. MATLAB Coder for CPU targeting. And then we have deep learning HDL code generation; that is for SoCs. The question earlier, was it about Edge AI? Yes, there's a lot of pruning and quantization, and we can get into that there.
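To make the quantization point concrete, here is a minimal Python sketch of symmetric int8 weight quantization, the kind of precision-for-size trade-off that Edge AI deployment involves. This is a generic illustration under simple assumptions, not MathWorks' actual implementation.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map float weights onto [-127, 127]
    with a single scale factor (a generic sketch, not a product's
    actual algorithm)."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    return [round(w / scale) for w in weights], scale

def dequantize_int8(codes, scale):
    """Recover approximate float weights from the int8 codes."""
    return [c * scale for c in codes]

weights = [0.5, -1.27, 0.003, 0.9]
codes, scale = quantize_int8(weights)
recovered = dequantize_int8(codes, scale)
# Reconstruction error is bounded by half a quantization step.
assert all(abs(w - r) <= scale / 2 for w, r in zip(weights, recovered))
```

The storage drops from 64-bit floats to 8-bit integers plus one shared scale, which is why quantization matters so much for embedded targets.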

    So anyways, the reason we started was that we saw there were applications. And that was our initial love affair. There is a potential to do impossible things with AI-- to solve certain problems that were very difficult, or rather practically impossible, to do before. And that was the initial part of it. Then, maybe three, four years ago, we saw our tools being used for systematic applications, for deployment of AI in systems. So it is a big change for us as MathWorks.

    We always try to look at the megatrends as much as we can and guess what's going to happen in the future. But the turning point for us is adoption: when we see systematic use, then we have to think about workflows. So that was the change for us. And so the title of this talk is a forcing function for me. And at the core of it, when you look at the workflow and systematic use of a particular megatrend, you've got to go back to first principles. So this is a first principle.

    I've spent a lot of time on this slide. This is the most expensive slide in this whole deck, OK? Six months, on and off, OK? Every word here is precious to me, so challenge it so I can improve it, please. I can't read it, so I'm just going to-- data-driven shaping of preferred solutions within constraints. A lot of words there. I don't know how to say that in German. All right, let's break it up.

    What is design to you? Think about it. Close your eyes. Think, what is design? Well, it's an intentional shaping of things, ideas, anything. It's the intentional shaping of things into a preferred form. So there's a human intention there, all right? That's design. Engineering. What is engineering to us? Well, engineering is problem-solving. It is application of science and mathematics to solve problems.

    OK, so design is shaping of preferred things, and engineering is creating solutions, but within constraints. Both design and engineering, they have to be done in the context of reality. Even if you're doing painting, nothing to do with engineering, you're trying to shape some chemical things with different color scheme into some preferred form on a canvas within the constraints of the canvas. So engineering and design, they go together. So that is my definition of engineering design.

    All right, so now adding AI. What is AI in that? It doesn't change it. It's just data-driven. So what this tells me is that I became very comfortable with the notion of AI of any kind, by the way, in the system, in the tools, anywhere else. The principles of engineering design don't change, but the practices have to change, like anything else before. Not a big deal, really, when you look at it that way. All of a sudden, I undressed the whole thing. I became comfortable. AI, Gen AI, and all that.

    And I think that's how we feel at MathWorks. So the love affair continues. All right, now what is specific about AI is, in effect, it's a lot of parameters. Even if you really understand the mathematics-- and the mathematics is simple, right? It's just a dot product, functions in a matrix environment. MATLAB loves it, right? But it's black box. Lots of parameters. As opposed to gray box, where the structure resembles the physics and you want to fit the parameters, or first principles where you're looking at equations of different kind in a variety of high-level languages.

    I like Simulink. I kind of like Stateflow. And Simscape is also another area under my control at MathWorks. And I love Simscape. All right, so that's what it is. So in terms of model-based design, we have a range from black box models to white box, first principle models, and everything in between. Kalman, autoregressive, functions, all that stuff, all this in between, and use them all. Lookup tables. Look at that. Black box. What's the difference between AI and a lookup table when you look at it from a distance?

    When you look at the actual bits of ones and zeros, there's very little difference, but there's a lot of difference in terms of semantics and how it works. So anyways, coming back to AI in model-based design, which is where I am as a tool provider, focusing on design. You see in the middle-- I don't know if you can see it in the back. It doesn't matter. But there are two basic ways to look at what AI means in model-based design. One is for it to resemble physics, as a black box model.

    A reduced-order model of something that is perhaps too slow to run in simulation. Or fitting a model that you couldn't fit before, because you don't understand the first principles, or it's very difficult to come up with them; it takes too much time. So that's one side of it. It's essentially what we call AI for component modeling. The other one is algorithm design, features. Things that go on the controller side, or on the deployment, Edge AI side.
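The reduced-order idea can be sketched in miniature. Assume a hypothetical `expensive_simulation` that is too slow for the loop; the simplest possible surrogate is a least-squares fit to a handful of its samples. (A Python sketch with invented names; in practice this is where a neural network, with far more parameters, replaces the straight line.)

```python
def fit_linear_surrogate(xs, ys):
    """Closed-form least-squares line: the simplest stand-in for a
    component model that is too slow to run in simulation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def expensive_simulation(x):
    """Hypothetical slow component we want to approximate."""
    return 2.0 * x + 1.0

xs = [0.0, 1.0, 2.0, 3.0]
ys = [expensive_simulation(x) for x in xs]
a, b = fit_linear_surrogate(xs, ys)  # recovers slope 2, intercept 1
```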

    And when we saw systematic use of AI, for instance for virtual sensors, we've seen an increasing use of it everywhere. It's interesting. Object detection. All the things that were almost impossible 20 years back. That's on the algorithm side. So these are the two basic uses of AI we see within the context of our tools, in particular the model-based design environment. So that shows you it does have a good fit. And, by the way, just take AI out and replace it with lookup tables. From a certain distance, not much different.
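The lookup-table analogy can be made concrete: from a distance, both a calibrated table and a trained network are stored numbers mapping an input to an output; the difference is in semantics and scale. A hypothetical 1-D table with linear interpolation and clamping, of the kind ECUs have used for decades (the calibration numbers are made up):

```python
import bisect

def lut_interp(breakpoints, values, x):
    """1-D lookup table with linear interpolation and end-clamping --
    stored numbers mapping input to output, just like a network."""
    if x <= breakpoints[0]:
        return values[0]
    if x >= breakpoints[-1]:
        return values[-1]
    i = bisect.bisect_right(breakpoints, x) - 1
    t = (x - breakpoints[i]) / (breakpoints[i + 1] - breakpoints[i])
    return values[i] + t * (values[i + 1] - values[i])

# Hypothetical calibration data, purely illustrative.
rpm = [1000.0, 2000.0, 3000.0]
spark_advance = [10.0, 20.0, 24.0]
assert lut_interp(rpm, spark_advance, 1500.0) == 15.0
```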

    When you go into the actual design activities associated with AI within model-based design, there are four pillars. Each of these has a lot of iteration loops. You have to prepare the data. You've got to clean it up. You have to figure out the structure of your AI model and maybe import it, bring it into model-based design, too. There are many methods to bring it into the MBD environment, maybe from a pre-trained model. And we worked on that. And some of you are using it.

    But whatever it means, if you bring it in, you have the opportunity of a power-of-two return on investment, meaning you can do more than one thing with it, which makes it worthwhile for you. And there's more than one thing you can do here. So test and simulation is very important because, as an algorithm, you need to close the loop. You've got to do all that. So it's traditional model-based design, but the pillars have some specific aspects to them. And data cleansing is probably the most important part. And deployment.

    There are specific things you need to do for deployment. It is not your ordinary automatic code that you need to generate. You got to figure out what it means in terms of neural engines of different kind, or even taking advantage of existing silicon. But if that is true, then what does it mean for software engineering, at least in the context of software that goes in cars for you?

    I've been looking at that, and in terms of the work process, in terms of the frame of thinking, I do not see much difference between the activities involved in targeting Edge AI in cars and model-based calibration. Now I'm not an expert on model-based calibration, so I would love you to challenge this. But I draw these lines. I just did it a few weeks ago. There are probably more lines here. Human insight. Design of experiments.

    Model-based calibration, by the way, is a combination of traditional calibration techniques at the model level, but you've got to design an experiment around it. So I'm just generalizing that concept, applying it to AI within model-based design, and I am proposing it as a potential way to think about a new way to do software. Not programming, but generating far fancier lookup tables. Yeah, that should relax you. It's not so intimidating.

    All right, so you design your experiment. Then, same as model-based calibration, you've got to figure out, where am I going to get the data? The data could be synthetically generated from a finite element analysis and brought into the loop, or it could be real-- driving a car around the track. So data acquisition is very important to model-based calibration, and so it is to the workflows involving AI. Very similar.

    Then you got to clean up that data. It's noisy. It's messy. This, that, all that. You guys are good at that with model-based calibration. Some of the stuff that's happening in automotive is-- you guys think that you're behind the times. I don't think so. Just go to the next door and talk to your model-based calibration guys. You'll be amazed, really. And then you do calibration generation. OK, so that's on the deployment side.
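As a concrete, hypothetical example of that cleanup step, here is a median-absolute-deviation filter in Python: one robust way to drop sensor glitches before training, in the same spirit in which calibration engineers already scrub measurement data. The threshold and data are illustrative assumptions.

```python
import statistics

def remove_outliers_mad(samples, k=3.0):
    """Drop points more than k median-absolute-deviations from the
    median: a robust first pass at the data-cleansing pillar."""
    med = statistics.median(samples)
    mad = statistics.median(abs(s - med) for s in samples)
    return [s for s in samples if abs(s - med) <= k * mad]

readings = [1.0, 1.1, 0.9, 1.05, 50.0]  # one obvious sensor glitch
clean = remove_outliers_mad(readings)
assert 50.0 not in clean
```

The median-based version is used here rather than a mean/standard-deviation test because a single large glitch inflates the standard deviation enough to hide itself.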

    I see a lot of analogy there. So that's my thesis to you. That's the framework of how we can generalize and optimize the lessons learned, both good and bad, in our history with model-based calibration and apply it to systematic integration of AI in automotive systems. What does that mean, though? If that is true, what does that mean? What's the future in the age of AI? Three things I'm going to talk about.

    One, engineered systems, the things you make. There has to be a reason to do it. It shouldn't be done just because it's cool. That's not how it works. What's happening is there is what I call a huge need for future-proofing of engineered systems, where you actually don't know the final design. You know the initial design, but the final design is always evolving while the system is already deployed-- the car has already been bought.

    This is what I call soft design. Not software design. Soft design, but it involves a huge amount of software. It means that the design itself may take shape and evolve over time with the product already out in the market. So you need to be able to adapt to that. What do we do? Fewer knobs, more sensors. That's what I'm saying. That's what you guys are doing. OK, so with that latent need in mind, what does Edge AI integration mean?

    Well, in terms of software engineering, I was talking last night to some colleagues, and I was wondering, what was the proportion on the ECU, on the old engine control units: how much of the ROM, of the software, was lookup tables versus programmed code? And it was a large portion. I'm not a car guy, but you guys know it was a large, large portion. And people were not happy about it. Black boxes and all that stuff. So there was a tension there, they said.

    And normally, innovation comes in that arbitrage; you need to be within the limits of practicality. It's the same thing here, I think. What are the implications for software engineering? Here's a suggestion: a future job description for somebody who wants to engineer. We're looking for a software engineer in a fantastic automotive market-- just put your company's name there. They have to have experience in model-based design. They have to have experience in calibration systems and standards, and applied knowledge of data science. Applied knowledge.

    What's the difference between that and now? You want to hire a controls engineer. Do you want them to have a PhD in linear algebra? They're not going to know anything about the cars. They're going to use MATLAB for that. So it's a very similar thing. So that's the thesis of it. And data matters. Data really matters. You guys know it already, so I'm not going to-- but there are speed bumps. There are some potential roadblocks and speed bumps. There are many, perhaps, but the three I'm going to have time to talk about are systematic V&V, certification, and data quality.

    I'm also responsible for many of the formal, static, and dynamic analysis techniques for V&V at MathWorks. And this one is a hard one. Mapping the traditional ways of doing formal analysis and systematic dynamic testing onto neural networks is very hard. We're trying to figure it out. We're engaged with a variety of folks. So, for instance, what does coverage mean? There are some formal notions being developed. We're looking at activation functions and things like that. But the scale, the number of these neurons, is so huge. What are you going to do with that?

    What's the equivalent of MC/DC coverage? All of you are probably familiar with that. You have to do it. So, OK, what is modified condition/decision coverage here? Things like that. Explainability. Traceability. But do not lose hope. When we first looked at model coverage, one of the very first things we had to do was to look at lookup tables. It's not traditional code coverage. We had to come up with our own concept of what coverage means for a multi-dimensional lookup table. That is burned into the product at MathWorks.
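One of the coverage analogues being discussed in the research community can be sketched: neuron coverage, the fraction of neurons driven beyond an activation threshold by at least one test input. This is a proposed research notion, not a settled MC/DC equivalent; the layer, weights, and threshold below are illustrative assumptions.

```python
import math

def layer(x, W, b):
    """One tanh layer (a dot product per neuron plus a nonlinearity)."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def neuron_coverage(test_inputs, W, b, threshold=0.5):
    """Fraction of neurons activated beyond `threshold` by at least
    one test input -- one proposed analogue of structural coverage."""
    fired = [False] * len(b)
    for x in test_inputs:
        for i, a in enumerate(layer(x, W, b)):
            if abs(a) > threshold:
                fired[i] = True
    return sum(fired) / len(fired)

# Toy layer: the single test input drives neuron 0 but not neuron 1,
# so coverage is 50% and a second test case is needed.
W, b = [[1.0], [0.0]], [0.0, 0.0]
assert neuron_coverage([[2.0]], W, b) == 0.5
```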

    We're working on this. Second speed bump: regulatory issues. Some bad news, some good news. Which one do you want? Good news? The aerospace community is intensely working on this, and we're involved with them. This is EASA, the European Union Aviation Safety Agency. The amount of effort they put in-- they're constantly engaging us. We're trying to learn. There are publications coming out. And they're looking at, for instance, what design assurance level C means in the age of AI.

    DAL C is equivalent to ASIL B for you guys, in effect. So, yeah, it's between B and C, but it's mostly B as far as I know. So they said, OK, the pilots want to use an autonomous system so they can efficiently park the airplane on the tarmac. They want to go into production now. But the standard isn't out there. So what do they do? They say, OK, let's use the architectural mitigation strategy of having dissimilar DAL D components, which in your case would be ASIL C.

    Yeah, it would be the equivalent of ASIL C. Let's have dissimilar ASIL C components and have an architecture around them. This is not new. This has been going on in aerospace for a while. They say, all right, so I have a DAL C system architecture where the camera system, the acquisition system, and all the software have already been qualified at your equivalent of ASIL B. I also have the voting logic, the comparison, and the actuation qualified. It's the neural networks in between.

    But if I can have a systematic process of independently developing two completely different neural networks with different training sets, and I can demonstrate and convince the auditor I've done so, I put them together, and maybe I can get a DAL C qualification. This has been done in the past in other areas of aerospace. So this is the mitigation process until the standard is out and we can do better than this redundancy-- aerospace might be able to afford it, but maybe in automotive, you can't.
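The architectural pattern can be sketched abstractly: two independently developed models run in parallel, qualified voting logic compares them, and disagreement hands control to a safe fallback path. The functions and tolerance below are hypothetical stand-ins, not a qualified implementation.

```python
def redundant_inference(model_a, model_b, x, tolerance=0.1):
    """Dissimilar redundancy: run two independently developed models;
    agreement yields a fused output, disagreement triggers the
    (separately qualified) safe fallback path."""
    ya, yb = model_a(x), model_b(x)
    if abs(ya - yb) <= tolerance:
        return 0.5 * (ya + yb)  # simple averaging voter
    return None                  # hand off to the safe fallback

# Hypothetical stand-ins for two dissimilar trained networks.
net_a = lambda x: 2.0 * x
net_b = lambda x: 2.0 * x + 0.05

fused = redundant_inference(net_a, net_b, 1.0)      # models agree
fallback = redundant_inference(net_a, lambda x: 0.0, 1.0)  # disagree
```

The safety argument rests on the independence of the two development processes and training sets, which is exactly what has to be demonstrated to the auditor.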

    Oh, I've got four minutes left, and I'm almost there. OK, so the other mitigation is quality of data. Quality of data requires a lot of work. And as I said, you should look at your calibration engineers. They're just next door, and they have a lot of good ideas. In the AI-specific area, automatic data labeling in a virtual closed-loop environment is a good idea. And we have some apps for that. And also scenario modeling. With RoadRunner, you can create scenarios and you can actually scrub the data. You can figure out the differences between that and what you see in real time. And there's a bunch of tools and apps that we're constantly working on for that.

    So that was that. Now let's go to how we should be improving our tools at MathWorks, how we should think about it. Already, in the presentations we had this morning, you heard implicitly that there are lots of loops. Kristin mentioned it. Loops everywhere. Everywhere. In fact, the whole ASPICE process is continuous improvement.

    It's a multi-objective optimization. What else is there? You guys are engineers. When you boil it down, there's almost nothing that is not multi-objective optimization. Solving problems within constraints. Remember the definition. And the diagram is too small, but all the arrows are bidirectional. So there are loops within loops within loops. Hierarchical loops. Sometimes, human in the loop. Sometimes, fully automated. These loops are interlinked. They are spread in time and space.
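The multi-objective claim can be made concrete with a toy sketch: given candidate designs scored on two objectives to minimize (say, cost and response time; the numbers are invented), the engineering trade-off set is the Pareto front of non-dominated designs.

```python
def pareto_front(candidates):
    """Keep the non-dominated designs: those for which no other
    candidate is at least as good on both objectives and strictly
    better on at least one."""
    def dominated(c):
        return any(o != c and o[0] <= c[0] and o[1] <= c[1]
                   and (o[0] < c[0] or o[1] < c[1])
                   for o in candidates)
    return [c for c in candidates if not dominated(c)]

# Hypothetical (cost, response_time) pairs, both to be minimized.
designs = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (4.0, 4.0)]
front = pareto_front(designs)  # (4.0, 4.0) is dominated by (2.0, 2.0)
```

Design-loop iteration is then a walk along this front: no single best point, only preferred trade-offs within constraints, which is the definition of engineering design from earlier in the talk.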

    DevOps we heard about. And in AI: preparation and modeling, testing and deployment. There are lots of loops there as well. But that's too busy. As a tool vendor, I need to slow it down and go to first principles. Here's my first principle: every loop is one of these two. Every engineering design loop. Either you have the human in the loop, doing abductive logic, high-level reasoning, setting the goals within the loop. Or, on the other side, the human engineer, the designer, is outside the loop, designing the loop.

    So you either design the loop or you are in the loop. I can't think of anything else. Well, that makes it easy for a tool vendor. It means that, OK, I can look at the AI either as algorithmic inference within the loop, so it's fully automated, or as a speculative statistical assistant, a co-pilot of sorts, replacing some of my colleagues in terms of my work with them, or enhancing my intelligence, my activities, as I am in my loop, which integrates with other loops. So that's how we look at it.

    Algorithmic inferences within the design loops and outside. And in terms of how we are tackling it, in terms of all the hundreds of design loops that can be looked at in the context of how you use our environment, there is basically a three-pronged strategy we have developed. One is to improve the existing workflows, which basically means improving your knowledge, learning whilst you're doing the work.

    And that also includes some of the stuff we just talked about. I think Kristin was talking about requirements elicitation, that type of stuff. I think you had it as number one. And we think that's the case as well. And we're acting on that prong right now, both internally, and you will see some results soon. So, yes. So that's consistent with what Kristin said. The second prong has to do with having your own large language model; all your companies have your own strategy here. I'm sure you're going to develop your own strategy.

    You might want to plug in your own LLM, and we need to be able to empower that-- for your LLM to be integrated into our environment. So that's the second strategy. And the third one, which is mind-boggling, as I said, for code generation: we actually have customers who want to generate production code for medium-sized to even large language models. One of those customers is MathWorks. We're eating our own dog food. Right now, I have two teams who are generating automatic code for targeting, to embed inside our tools. And so that's the third prong.

    OK, last slide. This is my last slide. I just want to say, the love story continues. There is a good fit for AI in MBD. I call it an integrated box model-based design. You can do this. Just go talk to your calibration engineers and imagine things. And the speed bumps, we're working on them. And there are lots of things that will come just around the corner. I just can't say them in a public setting. That's it.

    [APPLAUSE]
