
    End-to-End Framework from Cloud to SoC for SDV Development

    Thomas Kleinhenz, Director of Automotive System and Software Architecture, Elektrobit

    With the split of hardware from software in automotive architectures, there will be a longer cadence for new hardware architectures, while software is released on a much shorter cadence and must cope with many generations of hardware in parallel. At system definition, one important aspect is evaluating whether a new software feature can be deployed and executed on an existing or new System on Chip (SoC).

    For that reason, Elektrobit offers our customers a framework based on MATLAB® and Simulink® that allows them to:

    • Analyze the feasibility of running a given use case on abstracted hardware
    • Run a high-level simulation to derive functional and nonfunctional requirements
    • Estimate and characterize system parameters like optimized sample rate of functions, MIPS, memory footprint, and power consumption
    • Define the interfaces, including bandwidth requirement and integration of third-party software
    • Perform an early integration of interfaces and building blocks
    • Provide a test harness to verify production software

    Furthermore, any framework must natively and seamlessly close the gap between simulation, virtual validation, and real-world testing in a fully automated cloud environment. Therefore, we enable a shift left in development through virtualization of the system design, covering the testing of functional, nonfunctional, and integration requirements.

    This presentation demonstrates the complete development flow enabling an efficient virtual design of automotive applications, starting from the cloud down to testing on real hardware.

    This framework complements Elektrobit’s end-to-end closed-loop environment in software and hardware for model-based application development. We streamline software-defined vehicle development by using MATLAB and Simulink for the exploration of innovative ideas and development of software functions, integration, and verification from cloud to SoC.

    Published: 3 Jun 2024

    Here is what we will talk about: starting with some opportunities and challenges we have seen in the context of the SDV, our thoughts on how we can address them, and then some examples. You have already heard from us about automation and virtualization; if time allows, examples of that too. And last but not least, the conclusions and potential next steps for how we can proceed.

    Reflecting on all the talks this morning, they were about software, software running the vehicle. I would like the possibility to add new features even to a car that is already out in the market. But the question is, can I be sure that I can really run this software? Because in the end, in a software-defined vehicle, the thing still runs on hardware. I can define whatever I like, but if it does not fit in my system, it's worthless.

    Which means that when I start thinking about an SDV, I cannot simply say, well, let's only focus on software and forget the rest. No, I need to start with the software, but I also need a deep understanding of my system, of my hardware, which hosts the software in the end.

    This starts at system definition. When, for example, you begin a new system or a new car, you have certain use cases in mind, certain applications you would like to run. And when it comes to selecting an SoC, or if you even want to build your own SoC, you need to understand the price tag of your application. What do you need? The MIPS, the memory, the bandwidth, and so on.

    And the same holds true when you have a car out in the market and your product manager comes to you with a great new idea. How do you know that you can realize this idea? Of course, you can just start: you implement, you integrate, you turn it on, and it doesn't work. Then you lose time and money.

    So the point is, at system definition you have to think about the characterization of your system. What do you need? What are the MIPS for your application, for your feature, for your algorithms, for your use case?

    What is the memory? What is the bandwidth? And so on. In the future, I believe even power consumption will become a topic, because we put more and more into the car. Given the demand we have, even nowadays we can still say, well, OK, I have enough power in the car. Whether that holds in the future, I'm not so sure.

    Maybe then the question becomes: can I reduce my power consumption, and if so, by how much? Then this question becomes important.
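
    As a minimal sketch of what such a characterization could look like in MATLAB: the function under test, frame size, sample rate, and host-to-target performance ratio below are all illustrative assumptions, not part of the framework.

        % Rough "price tag" estimate for a candidate feature, measured on the
        % host before committing to an SoC. myFeatureStep is a hypothetical
        % stand-in for the real algorithm.
        myFeatureStep = @(x) conv(x, ones(1, 8) / 8, 'same');  % dummy workload
        frame = randn(1, 256);                                 % one input frame

        t  = timeit(@() myFeatureStep(frame));  % median execution time per frame
        fs = 100;                               % intended sample rate in Hz (assumed)
        hostLoad = t * fs;                      % fraction of one host core consumed

        % Scale to the target with an assumed host-to-target performance ratio.
        perfRatio  = 20;                        % "target core ~20x slower" (assumed)
        targetLoad = hostLoad * perfRatio;
        fprintf('Estimated target core load: %.1f %%\n', 100 * targetLoad);

        s = whos('frame');                      % data footprint of one frame
        fprintf('Per-frame buffer: %d bytes\n', s.bytes);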

    Defining something is nice, but I also need some means at hand to evaluate it, so that I can be sure that what I have defined, investigated, and thought through actually holds. Meaning I need the possibility to either simulate a new system or, for an existing system, take its model and validate my assumptions against it.

    Hardware performance forecasting goes hand in hand with what I told you before: I need to understand the price tag, and it must be available when I evaluate the system.

    Integration. Very often you win or lose the battle towards SOP (start of production) when it comes to integrating things together. And in the SDV, if you remember the presentation from CDF with its collaboration model, many partners come into the game, and everybody is now shipping pieces from here and there. You need to bring all of this together, and that can be a nightmare.

    You can lose a lot of time and a lot of money there. So the point is, when you think about your system at this level, you need an understanding of the software that will run on it, which means you have the functions available in an abstracted form that you can analyze. And once you can do that, you are also able to define your interfaces, saying, hey, I would like to realize this use case.

    This requires the functional blocks in an abstracted manner, not the real production software. Then I can start defining my interfaces and start distributing things: this function goes to vendor A, this goes to vendor B, the rest comes from C. But my interfaces are defined, so I can bring it all together afterwards.

    And ideally, I would like to do this in the same environment, so that I can hand it to my suppliers and say, hey, please build this. Here is the functional specification, and here are the boundary conditions in terms of MIPS and memory.

    And by the way, here is your test harness where you can pre-integrate your software to ensure that it really works. When you fulfill this, I will accept your software. Otherwise, go back and return when you have solved it.
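
    A minimal sketch of pinning such an interface contract down as a Simulink bus object, so that every supplier codes against the same definition; the signal names and data types are illustrative assumptions.

        % Define the component interface as a bus object; models and test
        % harnesses on both sides then share this one definition.
        elems(1) = Simulink.BusElement;
        elems(1).Name = 'WheelSpeed';
        elems(1).DataType = 'single';
        elems(2) = Simulink.BusElement;
        elems(2).Name = 'YawRate';
        elems(2).DataType = 'single';

        VehicleState = Simulink.Bus;
        VehicleState.Elements = elems;
        assignin('base', 'VehicleState', VehicleState);  % visible to all models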

    OK, and cloud. This is already state of the art, maybe not even worth mentioning. For the final verification, I would also like to have all the options: virtual, or non-virtual on development boards. I would like to be flexible here.

    Whatever is available at a certain point in time, or whatever solves my verification problem best, I will use. So here I see a strong need for flexibility with a high degree of reuse, in order to stay within the price tag. And the price competition in automotive is high; you know this better than me.

    So all in all, when I sum this up, these are the requirements we see for a solution that fulfills the title of this presentation. I would like something that allows me to analyze my use cases and new features so that I can come up with the price tag, that lets me simulate the non-functional requirements as well as the functional requirements (though the latter is perhaps more standard), and that enables an early integration of the interfaces.

    By defining the interfaces and doing an early integration of my components, I would like to get a test harness I can easily provide to my suppliers for their software development, and if possible even the complete software framework. Automation is king here to save time. These are the requirements I see. And since we are at a MATLAB conference, I would like to solve this using the MATLAB toolchain to build the abstracted ECUs we need for the analysis.

    I would like to integrate this in an automated manner with the existing base system or with a newly designed one, and go virtual or real, whatever is required. And of course, cloud as state of the art; maybe in the future we can remove cloud from the slides, because it's simply there, no reason to mention it.

    OK, how do we solve that? Let's start with an empty page; that's always good. First thing: system definition. I have a use case in mind, a new feature in mind.

    And I have some hardware inputs, either for a new platform or as the description of the existing platform. So far, so good. Then, of course, I start with the target base system, the ECU development, with all the things we typically have in a base system.

    In a base system, there's not so much MATLAB code inside; this is more classic stuff. In the future, maybe that changes, you never know. You have virtualization, you have automation, and so on.

    And you go to validation; that is the base system. But it's only one part of the story. The complementary part is that you use the same inputs for your application development, so that I, as the application developer, know: OK, these are my hardware boundary conditions, or this is my hardware altogether. This is what I need to fulfill, and then I can start doing the work.

    I need to consider, of course, cybersecurity, the things we have just learned about; I will come back to this later in one slide. Maybe I have some framework available I can reuse. And I certainly need some kind of abstraction when it comes to the hardware, because from the application perspective you don't care about many of the things you typically have on your ECU. In principle, what you're interested in is: do I have enough compute resources available?

    What is my memory, and maybe the bandwidth? Maybe not, it depends. But all the rest, what is inside the lower part of an ECU, is probably not of interest to you. So you work with an abstracted model.

    And initially, you also work with an abstracted model of the application software in order to do your analysis. The analysis is what closes the loop, because then you can say, for a new system: OK, product manager, if we want to support this feature, then I require the following things on the SoC, and you can have these iterative discussions. Or, for an existing system: yes, you would like to have this feature; impossible as requested, but we can talk about this variant.
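
    A minimal sketch of such a feasibility check against an abstracted SoC; all budget and demand numbers are purely illustrative.

        % Abstracted SoC reduced to the budgets the application cares about.
        soc.mips = 2000;        % available compute, in MIPS (assumed)
        soc.ram  = 8 * 2^20;    % available application RAM in bytes (assumed)
        soc.mbps = 100;         % usable bus bandwidth in Mbit/s (assumed)

        % Estimated price tag of the new feature (assumed numbers).
        feat.mips = 350;
        feat.ram  = 1.5 * 2^20;
        feat.mbps = 12;

        fits = feat.mips <= soc.mips && feat.ram <= soc.ram && feat.mbps <= soc.mbps;
        fprintf('Feature fits on abstracted SoC: %d\n', fits);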

    All of this happens before you start the actual development of the feature; I think this is key. Once this job is done, validation and verification follow as an essential step of all the activities. What also needs to be embedded nowadays is a native AI workflow, because AI will become part of many solutions in the future.

    So it should not be an overlay on an existing framework; it must be natively integrated into any framework used to develop the application software, together with cybersecurity, functional safety for the critical parts, cloud, and co-simulation. Because the world is bigger than MATLAB, we should be honest here.

    We have TensorFlow, we have CARLA, and so on. We need to find ways to interact with all of them to avoid island solutions.

    OK, let's change the perspective again. What you have seen here is the system-level perspective. This allows you to analyze your system and do all the estimation we talked about up front.

    It lets you define the interfaces, and by doing so gain some advantages when it comes to integration. But what about the application perspective? Is it next to it? Is it complementary to it?

    The answer is that they are two views on the same thing: the system-level perspective, where I look at the system parameters, and next to it the application perspective, where you work more on the functional requirements your functions need to fulfill. What is important is that both live within the same environment, so that you can switch views depending on where your focus is. Maybe your focus is initially more on the system perspective in order to characterize the system.

    Later on, you can continue with the functional implementation in the same environment without breaking anything. You can also go back and forth to validate whether your initial assumptions are still true, and if not, where you need to adapt.

    And here, sorry, I just pressed the wrong button. Just to emphasize: the key to realizing this in the same environment is always that you have the interfaces defined beforehand, because through them you control the complete architecture.

    OK. Virtualization and automation, important topics nowadays, with high expectations, in particular when it comes to virtualization. So what does it mean? We briefly touched on it in one of the earlier talks.

    Just to summarize for those who have not seen it so far: these are the defined virtual ECU levels, from 0 up to 5, where level 5 is the real physical ECU. When you think about MATLAB, you will say, well, I'm at level 0.

    I have just my application, just my algorithms, that's it; why should I care about the rest? Because level 0 addresses exactly the algorithm model.

    Maybe level 1 when it comes to the application level, where the algorithms are embedded. But we need to go further. To really realize this SDV idea you have heard about, we need to address all the levels in our system.

    We need to be able to verify all stages with the same model, with increasing test coverage, so that my function still works as I add more and more functionality. This is what the virtualization levels capture: more and more detail of the base system. Getting up to level 4 is ultimately the goal of these activities.

    You've seen on the other slide that we go up to level 3. That is the current stage of our activities: as Elektrobit, we say the framework must go to level 3. Beyond that, it's more in the area of the SoC vendor to kick in. Why is this important?

    You will see this on the next slide. I talked about integration and about the importance of interface definition. Now I will talk about the importance of automating the integration process, because here you can save a lot if you automate as many steps as possible.

    Let's start with this example. You develop your software component, in MATLAB, in classic C code, maybe with GenAI, we will see. In parallel, your colleagues from the platform teams start implementing the lower part, or start selecting the base system. Here is an example with a classic base system, where your component runs on top of the RTE. So far, so good.

    If you follow the traditional way, like we did in the past, the next step is to bring these together manually. Whenever there is an update here or there, manual work is required from the integration team. This we would like to avoid, because I'm lazy; I would like to automate whatever is possible.

    So the idea is: if I know what the platform is, and if I know how to describe this platform in an abstracted manner, why not generate, around my software component, a kind of glue layer that allows me to bring my application software and the platform together in an automatic manner?

    That could be a trace source, or initially maybe a ROS 2 environment, where I have just prototyped my algorithm and ROS 2 is available: bring it together and see if it runs. Later on, when the platform team is ready, bring them together with the same stuff, but without being forced to touch a lot of things manually.

    This is the idea: having the possibility to describe my platform in a way that lets me automate the integration process. Once I have reached this step, I'm flexible; I have my binary available. Then I can either go virtual with a virtual ECU, putting my application on top of an offline system running on a server, or go to a real target device for the corresponding, hopefully, only regression runs. Because the later we are in this stage, the more we should see regression rather than real verification activities.
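
    As a toy sketch of this glue-layer generation: a C wrapper around the application entry point is emitted from a declarative port table. The platform API names, the port list, and the file name are assumptions for illustration; the real framework works from a full platform description.

        % Generate a thin C glue layer from an abstracted interface table.
        ports = struct('name', {'WheelSpeed', 'YawRate'}, 'ctype', {'float', 'float'});

        fid = fopen('app_glue.c', 'w');
        fprintf(fid, '#include "app_glue.h"\n\n');
        fprintf(fid, 'void app_glue_step(void) {\n');
        for p = ports
            % read each input through the (assumed) platform abstraction API
            fprintf(fid, '    %s %s = platform_read_%s();\n', ...
                p.ctype, p.name, lower(p.name));
        end
        fprintf(fid, '    app_step(%s);\n', strjoin({ports.name}, ', '));
        fprintf(fid, '}\n');
        fclose(fid);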

    Related to the question of automation is, of course, the question of framework toolboxes and libraries. This morning's talk mentioned that we need a change in the partnership model when it comes to the SDV: we have to collaborate much more strongly rather than everybody reinventing everything on their own. Maybe you can also share things.

    And this is a little bit the idea here. Of course, we start with internal toolboxes, but the point is: I do not need to reinvent everything for each project. I have frameworks available, I have pre-qualified libraries available, I have toolboxes available that do an automatic layout of a Simulink model, so that I have all the boxes there.

    Maybe not their inner workings, but at least all the boxes and all the interfaces are there. The purpose: automate as much as you can. Then, of course, you can use different things. This is just one example of how we have realized it: I have a library, a toolbox, which more or less provides you with templates and predefined, pre-qualified blocks. And I can activate the framework by, in principle, just providing the architecture description of what I would like to realize; this is then laid out, and the design team can start.

    Meaning I can maybe order from a DevOps team: hey, I would now like to do this project, please provide me with an initial setup of the model. I can then just add the details from my algorithm design.
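
    A minimal sketch of such an automatic layout using standard Simulink model-construction commands; the component chain is an illustrative assumption, and a production toolbox would also place interfaces and pre-qualified library blocks.

        % Create a model skeleton from a minimal architecture description.
        arch = {'Sense', 'Plan', 'Act'};   % ordered component chain (assumed)
        mdl  = 'skeleton_model';
        new_system(mdl);
        open_system(mdl);

        x = 100;
        for k = 1:numel(arch)
            add_block('simulink/Ports & Subsystems/Subsystem', ...
                [mdl '/' arch{k}], 'Position', [x 100 x+80 160]);
            x = x + 160;
        end
        for k = 1:numel(arch) - 1
            add_line(mdl, [arch{k} '/1'], [arch{k+1} '/1']);  % wire the stages
        end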

    And this can maybe also be open source. It does not have to come from Elektrobit as a commercial product; it could also be open source. Then the SDV story from the beginning becomes more and more real, where you can also reuse things from other parties or other companies in the end.

    We have just heard about cybersecurity. This is key, this is critical. But if you have ever done the exercise of doing a TARA entirely on your own, manually, you will realize it's painful. And the normal software designer does not want to do it.

    It's really not a nice exercise. So here too, automation is king, in such a way that, as we have just learned, I'm always able to verify where I stand during the design process.

    This requires a kind of framework in place, based on the MathWorks tooling, where I have my attack library and my mechanisms implemented, which analyze the model, annotate it, and check whether I run into any severe threats, and which lets me do this iteratively. I start with my design, I run the checks: oh, I'm still good.

    Or no, I'm not good; what did I do wrong here? OK, let's fix this, on to the next iteration, now it fits. And when everything is done, I generate a TARA (threat analysis and risk assessment) and can move on accordingly.
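
    A minimal sketch of one such iterative check: scan a model for external inputs that carry no declared trust annotation. The 'trust:' tagging convention and the model name are assumptions of this sketch, not a built-in mechanism.

        % Flag inports whose block description lacks a trust classification.
        mdl = 'my_application_model';      % hypothetical model under analysis
        load_system(mdl);
        inports = find_system(mdl, 'BlockType', 'Inport');
        for k = 1:numel(inports)
            desc = get_param(inports{k}, 'Description');
            if ~contains(desc, 'trust:')
                fprintf('Unclassified external input: %s\n', inports{k});
            end
        end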

    This is, I would say, key for cybersecurity: that we have it as an integrated part of any framework. When we talk about the SDV, it must simply be there in the end. OK, examples in the last four minutes, so I'll try to be fast so that we can still have our break and some food. Three examples, just very quickly.

    No talk without AI. So the first example is a TinyML realization in this framework, here more from the performance aspect, because we were interested in the minimum price tag I need if I implement a TinyML solution, compared to a full-blown solution, that still satisfies the requirements on the detection; just to understand its performance.

    Then, AI will be there, but never alone, at least not in the next years. So also an example of a rule-based, classic system, again with a kind of evaluation. And last but not least, a cloud-based approach, because the framework has cloud in the title, so it should also be capable of running in the cloud.

    You can hardly see the video, I assume? No, it's not possible. If you have the offline version, you can maybe see the video better. This was exactly the question: I was interested in TinyML, and can we use it for automotive as a good-enough approach?

    Because very often, when we talk about AI, you have these monstrous implementations. You can do that, but it might be a little bit resource intensive. The question is: can I do it with less? And that was exactly the question here. Can I do it with less?

    In particular, with lower performance demands, performance in terms of physical resources, MIPS, memory, and so on, and still have something that fulfills the requirements. This was one example we used to prove that the complete flow is working, as you have seen in the overall picture.
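
    A back-of-the-envelope sketch of the kind of price-tag comparison meant here, with purely illustrative layer sizes.

        % Footprint estimate for a TinyML candidate: count parameters and
        % multiply-accumulate operations per inference for a small dense net.
        layers = [ ...        % [inputs outputs] per dense layer (assumed)
            16  32;
            32  32;
            32   4];
        params = sum(layers(:,1) .* layers(:,2) + layers(:,2));  % weights + biases
        macs   = sum(layers(:,1) .* layers(:,2));                % MACs per inference
        fprintf('Parameters: %d (~%d bytes as int8)\n', params, params);
        fprintf('MACs per inference: %d\n', macs);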

    Same exercise here. We were also looking into the system performance analysis of a classic, rule-based component, just to understand where I lose my performance. Because we saw, hey, we cannot deploy this on the existing system; where is the critical part of my implementation that consumes all the resources?

    And last but not least, this one shown without any video: cloud must be there. So it must be possible to run this not only on premises but also in a cloud environment, and to do my development there, build my containers, and implement and execute the code, in this example on a QEMU emulation for ARM64.

    So what is the conclusion? And what is next? There's always a next step, of course.

    The conclusion is that system definition, as mentioned, is important: you need a decent understanding of your software feature, not only in terms of the functional requirements but also the non-functional requirements, in order to ensure that you can really deploy it on your SoC. Or, if you decide on a new SoC, the SoC team will ask you, hey, what do you need? You need to have an answer in place.

    An end-to-end framework, I believe, if you do it in the right way, with this integration, automation, and virtualization, can really help you realize the shift left of development. This is the major goal we would all like to achieve: being faster, being more cost competitive, in particular compared with the competition coming from Asia; needing not years but only months to roll out new features. Things like I've just presented might help here.

    And of course, we have tried this centered around MATLAB and Simulink. We believe there is strong potential inside, and we have seen that many of the things we have in mind we can realize based on MATLAB and Simulink.

    And of course, there is always more work to be done and more analysis capabilities to add, in particular when it comes to power consumption. I believe this could be a strong addition to our system understanding when dealing with a new feature.

    The same goes for the vECU models. At the moment, we stop at level 3; we'd like to proceed, but this is something where we need to team up with the SoC vendors in order to do so. And of course, cloud, cloud, cloud.

    We are never done with the cloud. Whenever you think you're done, the next thing is there, and you just continue updating your cloud capabilities. And that's it.

    [APPLAUSE]
