AD/ADAS Country-Based Virtual Validation Using Real-World Data - MATLAB & Simulink

    AD/ADAS Country-Based Virtual Validation Using Real-World Data

    Prof. Dr. Reza Rezaei, Manager, Virtual Validation of Autonomous Driving Systems, IAV

    Accurate 3D simulation models that can represent country-specific features are key to accelerating the development and virtual validation of innovative ADAS/AD functions. This presentation from IAV illustrates how to enhance 3D models with real-world test data by applying AI methods with MATLAB®, Simulink®, and RoadRunner. The presentation will cover how to:

    • Make simulation models accurate under different conditions and in different countries. This can be done using realistic camera/radar/lidar modeling approaches, IAV's competencies in perception modeling and virtual world creation, and the generation of road networks and realistic assets for country-based validation.
    • Create realistic scenes and scenarios from real-world test data—including data augmentation, data analysis, and AI-based generation of 3D models—both manually and programmatically. The outlook is to include multi-agent simulation.
    • Combine all of these methods to create a complete solution package.

    Published: 3 Jun 2024

    So I'm starting this presentation, first of all, with an introduction to the current status of the regulation: what the regulation for level 3-plus, level 3, and level 4 autonomous vehicles is demanding, and what the future developments in the field of virtual validation would be.

    How should we prepare ourselves, in terms of modeling and simulation, in order to fulfill these future regulations? And then I would like to give some examples which we developed during our collaboration and very nice technical exchange with MathWorks. Here I would like to thank Advait, Simona, and the other colleagues from MathWorks for developing this showcase with us, showing how we can extract scenarios from real field testing and how we can use high-fidelity simulation for virtual validation of autonomous vehicles.

    So starting from the regulation: we heard today about level 3-plus autonomous vehicles, meaning there is no need for a driver. The regulation for homologation and type-approval testing is still very much under development. So I brought part of the regulation today, just to give you a highlight of its main points.

    So first of all, we see that the topic of simulation, meaning virtual validation and virtual evaluation of the functionalities as well as testing of the functionality in a real-world environment, is given and demanded by the regulation. And we see that the regulation is asking for simulation credibility. There is a credibility assessment framework defined in European Regulation 2022/1426; it is not specified exactly, but it at least shows us the direction and how to demonstrate that credibility.

    In this slide we see a part of the regulation which says that the simulation shall allow a visualization to a degree of accuracy which matches the required fidelity level. So it means we need this credibility, we need accurate models. But it is not exactly defined in this type-approval regulation how accurate the models should be, what the KPIs are, or how to carry out the evaluation.

    And in case we have some deviation, how much deviation is OK, and from which point does it become critical? This is exactly the point that multiple projects and standardization activities, for example the VVM project and the work in ASAM, are addressing.

    And we at IAV, as a global engineering company, are also developing our own solutions. In this case, we are using high-fidelity sensor models, leveraging the IAV tier 1 network in order to build up accurate sensor models, et cetera. So the first highlight is trying to build high-fidelity models that represent reality well enough that we can validate the functionalities of autonomous vehicles.

    This slide is quite interesting. If you look there, it makes two important points regarding scenarios and coverage. The first point says that the OEMs or manufacturers should provide evidence for the scenarios which are used for homologation or type-approval tests of the level 3 vehicles.

    So it means that the OEM or the manufacturer should bring the argumentation and evidence for why a given driving scenario is selected and why the test catalog is defined the way it is. That is quite game-changing for level 3: we need to find out what to test, and also document and show the evidence for why we need to test the vehicle under such conditions. That's the first point.

    And the second, even more interesting, point mentions that the scenarios shall be sufficient to cover the operational design domain, or the manufacturer should show that the coverage is good enough. And this is also a question we are asking ourselves: how much coverage is enough, and what should that coverage look like? That's what follows from these two points.

    So the idea we are looking at, as a good methodology to fulfill the future regulation, is to look into the fleet data and the data of the prototype vehicles. If we have the possibility to process fleet data, it is also good to look into real-world vehicle data and extract the critical scenarios. With such an analysis we can, first of all, show the evidence for the scenarios, because we have actually seen such a critical scenario in the real field. In addition, using simulation, we have the possibility to make variations of such critical situations to ensure that the functionalities are robust enough to work under those critical conditions, and that they remain robust if the operating conditions or weather conditions change.

    Now, the question would be what the simulation or virtual validation platform looks like. Starting from the left side, these are the three important points which I will try to highlight today. We need real-world scenes and scenarios in order to represent reality. For example, using RoadRunner, or in the MathWorks setup in co-simulation with the Unreal Engine, we can build up the real-world scene. And for the scenarios, I will show some examples of how we can extract real-world scenarios from testing.

    You see on the top left one example, which we took from a kindergarten in Japan, built up in the simulation world, and varied in multiple ways. So the first point in this tool chain is real-world scenes and scenarios. And of course we would like to model the real world: real locations, multiple countries. We need accurate maps, good assets, and of course a tool like RoadRunner where we can bring such maps and assets together and go for country-specific virtual validation.

    So using a tool like RoadRunner helps us model the real world in the simulation. On top of that comes, of course, the vehicle model, if you are looking at vehicle dynamics and so on. And in addition, as mentioned, the sensor models, in order to have as much realism as possible, a true real-world representation, in our virtual validation tool chain.
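
    As a minimal illustration of that sensor-model layer, the sketch below sets up idealized, detection-level camera and radar models in a driving scenario with the Automated Driving Toolbox; the mounting positions and ranges are placeholder values, and the high-fidelity models discussed here go well beyond this.

```matlab
% Minimal detection-level sensor models in a drivingScenario (placeholder parameters).
scenario = drivingScenario;
road(scenario, [0 0; 100 0], 'Lanes', lanespec(2));

ego    = vehicle(scenario, 'ClassID', 1, 'Position', [ 5 -2 0]);
target = vehicle(scenario, 'ClassID', 1, 'Position', [40 -2 0]);

% Idealized camera and radar models; real validation uses higher-fidelity models.
camera = visionDetectionGenerator('SensorIndex', 1, ...
    'SensorLocation', [1.9 0], 'MaxRange', 80);
radar  = drivingRadarDataGenerator('SensorIndex', 2, ...
    'MountingLocation', [3.4 0 0.2], 'RangeLimits', [0 120]);

tgts      = targetPoses(ego);                        % other actors in ego coordinates
camDets   = camera(tgts, scenario.SimulationTime);   % synthetic camera detections
radarDets = radar(tgts, scenario.SimulationTime);    % synthetic radar detections
```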

    And then it goes further with Simulink, for example using Simulink as the system under test, coupling it with the real AD/ADAS ECU and having the real functionalities. On the right side, you can see the tools and the automation tool chain which are required to run everything in an automatic way. So this is, let's say, a short summary of the tool chain used to represent the real world. From the next slide on, I would like to give some practical examples and showcases of how it looks.
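
    The automation part can be scripted from MATLAB; a minimal sketch, assuming a Simulink test harness named adFunctionTestHarness and a workspace variable egoSpeed (both hypothetical), that sweeps one parameter and runs the model per variant:

```matlab
% Hypothetical automation sketch: sweep a scenario parameter and run a
% Simulink-based system-under-test model for each variant.
modelName = "adFunctionTestHarness";   % placeholder model name
egoSpeeds = 10:5:30;                   % m/s, placeholder sweep

results = cell(numel(egoSpeeds), 1);
for k = 1:numel(egoSpeeds)
    simIn = Simulink.SimulationInput(modelName);
    simIn = setVariable(simIn, "egoSpeed", egoSpeeds(k));  % variable assumed to exist in the model
    results{k} = sim(simIn);           % run one virtual test and keep its outputs
end
```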

    OK, now coming to the major point: scenario extraction. In order to validate and evaluate the autonomous driving functionalities against the real world, we need real-world test data. And the idea here is to take such real-world test data and make a digital twin out of the real-world testing.

    So this slide shows how we can use the MATLAB and MathWorks tool chain to start from an open-source data set, in this case PandaSet, with its camera, radar, LiDAR, and GPS data, and use a specific process which we developed together with MathWorks to extract the scene: on the top left the real-world testing, and on the top right the simulation world. So we made a digital twin out of the tested scene.
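
    A minimal sketch of the first data-handling step could look like this, assuming the recorded lidar sweeps are available as PCD files and the GPS track as latitude/longitude/altitude samples (the file name and coordinates below are placeholders):

```matlab
% Sketch: load one recorded lidar sweep and convert the GPS track into local
% Cartesian coordinates around the first sample (placeholder file name and values).
ptCloud = pcread("pandaset_sweep_0001.pcd");   % one recorded lidar sweep
pcshow(ptCloud);                               % quick visual check of the scene

% Placeholder GPS samples (in practice taken from the recording's pose/GPS channel)
lat = [35.6586 35.6588 35.6590];
lon = [139.7454 139.7456 139.7459];
alt = [40 40 40];

origin = [lat(1) lon(1) alt(1)];               % local origin of the digital twin
[xEast, yNorth, zUp] = latlon2local(lat, lon, alt, origin);
egoTrack = [xEast(:) yNorth(:) zUp(:)];        % ego trajectory in scene coordinates
```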

    And this is from real-world testing. We have the real position using Google Maps and the multiple functionalities available in RoadRunner, so we can build up the real street. It means that, using this methodology, we are able to test the vehicles under realistic operating conditions, to analyze the test, to understand where we had problems and with which functionalities, and also to extract the critical scenarios which were really measured in the test.
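
    Complementary to the RoadRunner scene, the real road geometry can also be brought into MATLAB directly; a minimal sketch using the driving scenario's OpenStreetMap import, with a placeholder .osm extract of the test area:

```matlab
% Sketch: import real-world road geometry from an OpenStreetMap extract
% (placeholder file name) into a driving scenario for analysis and replay.
scenario = drivingScenario;
roadNetwork(scenario, 'OpenStreetMap', "iav_test_area.osm");  % .osm extract of the test area
plot(scenario);                                               % inspect the imported road network
```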

    And this is the next slide. Besides extracting the scene, we can also extract and reconstruct the scenario. You see on top the LiDAR measurement from the view of the vehicle; the LiDAR could detect three cars.

    And using this methodology, we can extract each car's trajectory and velocity. At the end, we have the scenario: how the other, non-ego vehicles are driving and what is happening, so that we can analyze the situation and, as mentioned, do multiple variations on top. So with these two steps, we are able to take real-world scenarios and test our functionalities with real-world scenes and scenarios.
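
    Once per-vehicle positions and speeds have been extracted, the measured situation can be replayed as a driving scenario; a minimal sketch, where the waypoints and speeds are placeholders for the trajectories extracted from the lidar data:

```matlab
% Sketch: rebuild the measured situation as a drivingScenario. The waypoints
% and speeds below are placeholders for the trajectories extracted from the
% lidar tracking step.
scenario = drivingScenario('SampleTime', 0.1);
road(scenario, [0 0; 120 0], 'Lanes', lanespec(2));

ego = vehicle(scenario, 'ClassID', 1);
trajectory(ego, [0 -2 0; 60 -2 0; 120 -2 0], 15);           % ego at ~15 m/s

lead = vehicle(scenario, 'ClassID', 1);
trajectory(lead, [25 -2 0; 70 -2 0; 110 -2 0], [14 12 10]); % detected car, decelerating

while advance(scenario)
    poses = actorPoses(scenario);   % replay and inspect the reconstructed scene
end
```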

    But real-world tests cannot include everything. There are also very dangerous or really rare critical driving conditions, like traffic light violations, critical distances to pedestrians, or aggressive driver behavior, or other things which hopefully do not happen in reality but are safety-relevant operating conditions. Or we are discussing the safety of the intended functionality and the misuse of some functionalities. And we would like to test our functionalities one step beyond and say, OK, we have an automation methodology.

    As you see on the top right, we are able to change the assets, asset positions, number of assets, and so on, and try to simulate aggressive driver behavior, violations of traffic lights, and, for example, people jumping in front of an AD vehicle, which has happened in San Francisco multiple times. If you read the Washington Post, there are multiple articles about people jumping in front of the cars, I don't know, just for fun or whatever. And the AD cars should really be tested under such aggressive conditions as well. That's exactly the last step to ensure the robustness of the system, to handle such critical situations.
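
    A minimal sketch of such programmatic variation, here sweeping the walking speed and crossing point of a pedestrian stepping in front of the ego vehicle (all values are illustrative placeholders):

```matlab
% Sketch: generate variants of a pedestrian-crossing scenario by sweeping
% the pedestrian's walking speed and the point where it enters the road.
pedSpeeds   = 1.0:0.5:2.5;     % m/s
crossPoints = 30:10:60;        % distance ahead of the ego start position, m

for v = pedSpeeds
    for x0 = crossPoints
        scenario = drivingScenario;
        road(scenario, [0 0; 100 0], 'Lanes', lanespec(2));

        ego = vehicle(scenario, 'ClassID', 1);
        trajectory(ego, [0 -2 0; 100 -2 0], 14);

        ped = actor(scenario, 'ClassID', 4, 'Length', 0.4, 'Width', 0.5, 'Height', 1.8);
        trajectory(ped, [x0 5 0; x0 -5 0], v);   % pedestrian crossing the lane

        % run the variant, e.g., through the Simulink test harness sketched earlier
    end
end
```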

    Talking about simulation, or the extension of the methodology to different countries, this is also interesting to discuss from IAV's perspective. We work as a global engineering company in multiple countries. Japan is a very important market for us, as is Germany. And the question would be how we can use and adapt our methodology in order to consider country-specific traffic rules, road layouts, traffic signs, traffic lights, and so on.

    You see on the top left this specific Japanese traffic light. If the main light here is red and the bottom signal is showing green, then you can go ahead in this lane, even though the main light is red. The bottom one is showing green, so you can go ahead and turn left.

    So this is a difference in the traffic lights compared to Germany. And that would be the question here: how can we test our AD vehicles with different traffic lights and traffic rules? We have done this as a showcase with RoadRunner, where we have the chance to use these traffic lights and traffic signs and other country-specific operating conditions, to test under different operating conditions and do the autonomous driving testing for a global market.

    It could even be the case that we say, OK, we want to test autonomous vehicles for a specific location in the world. So this is just a showcase where we simulated Tokyo, Japan; it's a junction near our IAV office.

    Here we have this specific junction in Japan, where we used RoadRunner to create the road network. You can use open-source OpenStreetMap data, or, as here, an HD map. And then, using Google Maps, or if you have some images of that specific junction, you can place the assets in a realistic way.

    As mentioned, there are multiple assets available in RoadRunner. In case we cannot find an asset, there is the option to create customized assets. And with that we can go for country-specific virtual validation, where we look into different cities and different countries and test the vehicle under such operating conditions. After building up such a simulation environment, we can go for further variations, like variations of the weather and of the perception system.
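
    The RoadRunner side can also be driven programmatically from MATLAB; a minimal sketch, assuming a local RoadRunner installation and a project containing the junction scene (paths and file names are placeholders):

```matlab
% Sketch: open a RoadRunner project and scene from MATLAB and export the
% road network for use in the rest of the tool chain (placeholder paths/names).
projectFolder = "C:\RoadRunner\CountrySpecificProject";
rrApp = roadrunner(projectFolder);                  % launch RoadRunner with this project

openScene(rrApp, "TokyoJunction.rrscene");          % scene built from map data and assets
exportScene(rrApp, "TokyoJunction.xodr", "OpenDRIVE");   % export for downstream simulation
close(rrApp);
```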

    And the last step which I would like to show in this presentation is the topic of corner cases. Let's assume we have built the scenes and scenarios, and we would like to consider the corner cases associated with the perception system. This example shows it for the camera. In the case of the camera, we have multiple corner cases from real-world testing, like lighting conditions, dynamic range effects, soiling, rain droplets on the camera, and so on. Such corner cases can be extracted from the real-world testing. And using MATLAB or Unreal Engine, it is possible to model such critical situations, the corner cases, and combine them with the real-world simulation to take into account how the perception corner cases affect the function behavior.
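
    Some of these camera corner cases can be approximated directly at the image level; the sketch below applies simplified low-light, blur, and noise degradations with the Image Processing Toolbox as a stand-in for the physically based effects discussed here (the input frame is a placeholder):

```matlab
% Sketch: approximate camera corner cases by degrading a rendered frame.
I = imread("rendered_frame.png");                  % frame from the virtual camera (placeholder)

lowLight = im2uint8(im2double(I) * 0.35);          % underexposure / limited dynamic range
blurred  = imgaussfilt(lowLight, 2.5);             % defocus / soiling-like blur
noisy    = imnoise(blurred, 'gaussian', 0, 0.01);  % sensor noise at high gain

montage({I, noisy});   % compare nominal and degraded input to the perception stack
```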

    Another example comes from the Lidar Toolbox in MATLAB, where we are able to use semi-physical modeling in which we can see the reflectivity: this white car, for example, reflects more strongly than this black car, and so on. It is possible to consider multiple real-field effects associated with the LiDAR, like reflection, as mentioned, scattering, absorption, and so on, as well as multiple field challenges like occluded objects, which happen in real-world testing. So the idea, to summarize, is to build up the real world as it is using physical sensor models, and then use the corner cases from real-world testing experience, which represent the edge cases and critical situations of the perception system, and combine them to have overall system testing.
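
    As a starting point on the MATLAB side, the Automated Driving Toolbox also offers a statistical lidar point-cloud model that accounts for occlusion; a minimal sketch is shown below (the semi-physical reflectivity effects mentioned above go beyond this model):

```matlab
% Sketch: generate synthetic lidar returns, including occlusion, for a simple scenario.
scenario = drivingScenario;
road(scenario, [0 0; 80 0], 'Lanes', lanespec(2));

ego  = vehicle(scenario, 'ClassID', 1, 'Position', [ 2 -2 0]);
car1 = vehicle(scenario, 'ClassID', 1, 'Position', [20 -2 0]);   % partially occludes car2
car2 = vehicle(scenario, 'ClassID', 1, 'Position', [35 -2 0]);

lidar = lidarPointCloudGenerator('SensorLocation', [1.5 0], ...
    'MaxRange', 120, 'HasOrganizedOutput', false);

tgts = targetMeshes(ego);                              % mesh representation of the other actors
[ptCloud, isValid] = lidar(tgts, scenario.SimulationTime);
if isValid
    pcshow(ptCloud);                                   % inspect visible vs. occluded returns
end
```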

    Coming to the summary: we see from the highlights of the regulation that we need good, sufficient accuracy and realistic scenarios. I have shown some showcases where we looked into country-specific variation and a proof of concept for the extraction of scenes and scenarios. From my point of view, the most important point for the future of level 3 autonomous vehicles is that we are able to extract the scenarios from real-world testing.

    As an outlook, a very interesting point which we are working on at the moment at IAV is the use of AI and large language models throughout the development process and the virtual validation of autonomous vehicles, to improve our methodology and accelerate the testing process. This is ongoing work: using GPT, for example GPT-4 or any other large language model, we are able to read the requirements, read the test specification, or even extract the scenarios from the real world. Since we are dealing with large language models, we can understand the scenario description or the test description of the scenario and extract the important parameters.
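
    As a purely illustrative sketch of this outlook, and not IAV's actual pipeline, a hosted chat-completion LLM could be queried from MATLAB to pull structured scenario parameters out of a free-text test description; the endpoint, model name, prompt, and environment variable below are all assumptions:

```matlab
% Illustrative sketch only: extract scenario parameters from a free-text test
% description via a hosted LLM chat-completion endpoint (endpoint/model are assumptions).
testDescription = "Ego drives at 50 km/h in an urban area; a pedestrian crosses 15 m ahead.";

body.model = "gpt-4";
body.messages = {struct('role', 'user', 'content', ...
    "Extract ego speed, actor types, and trigger distances as JSON: " + testDescription)};

opts = weboptions('MediaType', 'application/json', ...
    'HeaderFields', {'Authorization', ['Bearer ' getenv('LLM_API_KEY')]});
response = webwrite("https://api.openai.com/v1/chat/completions", body, opts);

% Parse the returned JSON parameters for downstream scenario generation.
scenarioParams = jsondecode(response.choices(1).message.content);
```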

    In combination with the IAV scenario database, we can do automatic scenario and scene creation in the simulation, for example in the MATLAB tool chain, CarMaker, or dSPACE; it is in principle independent of the simulation platform. This is a very interesting next step, which I would like to show at the next conference: how we can use large language models to improve and change testing and virtual validation in the future. Thank you very much.

    [APPLAUSE]
