Accelerating Automated Driving Hardware and Software Development using MATLAB & Simulink - MATLAB

    Accelerating Automated Driving Hardware and Software Development using MATLAB & Simulink

    From the series: MathWorks Wireless Series: Transforming the Next Generation of Wireless Communication

    Overview

ADAS and autonomous driving systems are redefining the automotive industry and changing all aspects of transportation, from daily commutes to long-haul trucking. This emerging megatrend has redefined the requirements for sensors and compute platforms, leading to the development of new technologies such as deep learning, sensor fusion, lidar, and V2X. Additionally, hardware and software solutions for ADAS applications are challenging to validate due to the vast, complex, and diverse set of scenarios that need to be covered during testing.

    MATLAB & Simulink provide many reference examples and prebuilt algorithms for computer vision, radar, lidar processing, sensor fusion, path planning, and controls to accelerate the design of automated driving system functionalities. For early verification of the hardware and software functionality of these systems, you can integrate EDA simulation environments with synthetic sensor models and road scenarios to perform virtual field tests in simulation.

    Highlights

    • Sensor modeling and perception algorithm development around vision, radar & lidar sensors
    • Sensor fusion and tracking for localization and object tracking
    • Planning and controls for ADAS applications (AEB, ACC, LKA, etc.)
    • Virtual scenario creation and sensor modeling for closed loop simulation
    • Prototyping and early verification using automatic code generation and EDA tool integration
    • Integration workflows for Python, ROS, CARLA

    About the Presenters

    Dr. Rishu Gupta is a senior application engineer at MathWorks India. He primarily focuses on the automated driving and artificial intelligence domains. Rishu has over 10 years of experience working on applications related to visual content. He previously worked as a scientist at LG Soft India in the Research and Development unit.

    Sumit Garg is a Senior Application Engineer at MathWorks India specializing in the design, analysis, and implementation of radar signal processing and data processing applications. He works closely with customers across domains to help them use MATLAB® and Simulink® in their workflows. He has over nine years of industrial experience in the design and development of hardware and software applications in the radar domain. He has been part of the complete lifecycle of projects pertaining to aerospace and defense applications. Prior to joining MathWorks, he worked for Bharat Electronics Limited (BEL) and Electronics and Radar Development Establishment (LRDE) as a senior engineer.

    Recorded: 30 Jun 2022

    Thank you, Prashant, and good afternoon, everybody. Thanks for joining the session today. In today's session, we will give an overview of how you can build your automated driving stack, how you can validate the sub-components of an automated driving application, and how you can accelerate the entire automated driving application development, both while you are doing the software development and while you are taking it to the hardware.

    So let's start with the journey of ADAS itself. If we talk about SAE levels of automation, there are six different levels of automation proposed by SAE. Level 0 is where there is no automation. L1 is where there is partial automation: either longitudinal control or lateral control, which means, in a way, either AEB or lane assist.

    L2 is where you have both lateral as well as longitudinal control, but the driver has to be attentive all the time. L3 starts to incorporate more automation, where the driver does not need to pay attention during some portions of the drive and the car can handle itself on its own. With L4 and L5 comes a very high level of automation: in the case of L4, if the car is not able to drive in certain scenarios, it will automatically take itself to a secure location and park.

    In L5, however, there are no such restrictions, and the car is expected to be fully capable in every driving scenario. It's a long way for all of us to reach L5, and we are making incremental progress towards that. In today's session, we will see how we can accelerate the development of automated driving applications. Now if we delve deeper into automated driving systems and look at the key sub-components of automated driving, then, algorithm-wise, there are four.

    Perception, sensor fusion, path planning and navigation, and decision and controls. Let me give you an overview of what these individual sub-components are. Perception is basically understanding the environment around you, and it is done with the help of a lot of sensors: multiple radars, multiple cameras, and multiple LiDARs.

    After perception comes sensor fusion. All of these sensors will sometimes give you raw data, or they'll give you a track list or an object list, but it is your responsibility to bring that information into a unified framework that can be understood by your control unit. That is called sensor fusion: fusing the data coming in from multiple sensors.

    The third is: once you understand the environment around you, you need to plan for going from point A to point B. This is path planning and navigation. You know your environment, and now you want to navigate your vehicle from point A to point B. So once you understand your environment, you understand your path.

    Now the control signals will be passed to the vehicle dynamics in the form of whether it needs to steer, brake, or accelerate, depending on the scenario. And when we close this entire loop, something of this sort happens. Here, you can see a Mercedes driving on roads, testing its lane centering ADAS system.

    So let's try to understand the challenges when somebody is trying to build such a complex system. The first challenge is that there are a lot of different subsystems and a lot of different domains. There could be computer science engineers, mechanical engineers, and many other engineers working together. While you are working on automated driving, it's not only the conventional algorithms; you also need to integrate AI algorithms, because what we have observed recently is that AI gives much higher accuracy when it comes to perception tasks like detection, semantic segmentation, and many others.

    So you need to bring all of these together: the conventional algorithms as well as the AI algorithms, integrated into your complex system. There could be a lot of different tools that you are working with, and there could be version incompatibilities, library conflicts, and other conflicts that you may face while working across these tools. Some of the tools may not be compatible with each other, as well.

    Then there is testing of the automated driving systems. While you are developing these systems, testing them in complex environments becomes very important and very challenging, because you cannot take your vehicle out every time and drive in these diverse conditions. It can be super expensive, and it can be fatal as well.

    There is a long list of edge cases that need to be tested in order for a vehicle to be proven road-worthy. There is also the dependency on hardware and different kinds of vehicle prototypes. There is a lot of different hardware available in the market, and right now the industry itself is trying to figure out which hardware and which architecture would be best for automated driving or ADAS applications.

    Now, one of the things that has come up is that simulation can be key to addressing these challenges. We'll talk about virtual simulation in detail today. The reason simulation can be key is that, with the help of simulation in your design phase itself, you can do a lot of development and also start testing the algorithms by front-loading, or shifting left, your verification early in the process.

    And during that entire development cycle, whether you are doing integrated testing or unit testing of sub-components, you can shift left and do the verification at that level itself. You will have increased coverage of the edge cases: you can create a lot of edge cases inside a simulated scenario and continue to test on them until you are confident. It is repeatable, so you can improve your algorithms, come back, retest, and find out whether they work and whether there are improvements.

    It is certainly cost-effective, because you don't need to do the deployment on a real car and real hardware, and you don't need to go and do the testing in the real environment, which can be fatal as well as costly. And this is something that has been recognized by many regulators: simulation-based testing is key to automated driving.

    So now regulators are also recognizing the need for simulation-based testing, and here is an example where we have worked with a regulator to use simulation as a platform for approval of ADAS and autonomous vehicles. Now, when we talk about simulation-based testing for automated driving development, the first thing that comes to mind is: how do we create such environments?

    So the first thing required for automated driving testing is that you need to bring the real environment into the virtual world. And by virtual world, what we mean is that there should be a scene we can create that looks realistic and very close to the real-world environment. We should be able to bring the vehicle dynamics as close as possible to the real vehicle.

    We should be able to put dynamic actors inside the scene so that we can simulate the scenario and understand the real behavior. And we also need sensor models like camera, radar, and LiDAR, which, once mounted on the ego vehicle, can perceive the environment and pass on the sensor data. Other than the virtual world, what is most important are the different kinds of algorithms. The virtual world can give you the feel of the real environment.

    Now, you can do your algorithm development on the simulated data, or with the help of the real data that you are collecting in the field. Once these algorithms are developed, you may also need to generate code from these applications so that you can do software testing, or you can take that generated code, run it on individual targets, and see how it behaves on target. So you want to do testing in the model environment while you are developing algorithms, and you may also want to test the developed algorithm as generated C/C++ code.

    And all of these things should happen in one development platform. You should be able to analyze, simulate, design, deploy, and integrate all of these sub-components, and also test all of them in one integrated environment, for everything to be efficient and for simulation-based testing to happen. Now let me take a motivational example of highway lane following.

    So in this example of highway lane following, let me start with the test bench. Here, the test bench contains a scenario, which consists of a scene, the target vehicle trajectories, and the vehicle poses. After that, there are sensors, which can be camera, LiDAR, radar, and many others. On top of this environment that you have created with the scene, target vehicles, and sensors, there are algorithms: perception algorithms for vision, sensor fusion, then controller development, and finally ego vehicle dynamics.

    And everything should be running in the closed loop, so you should be getting the output from the ego vehicle and feeding it back into the scenario itself. When you run all of these in the simulated environment, this is how it may look. On the top left, you can see the highway lane following test bench. You are seeing the lane detection from the camera sensor mounted on the ego vehicle.

    You are seeing the detections from the camera feed coming from the mounted camera, and you are also seeing the scope for velocity and how different parameters within the vehicle are changing. So you need to bring all of these pieces together into a simulation environment in order to test your automated driving feature or sub-component. Now let's go ahead and look at today's agenda.

    In today's agenda, we are going to talk about how we can synthesize scenarios, or how we can bring the real world into a simulated environment for testing these automated driving applications. Then, how we can develop algorithms when working on the perception stack, sensor fusion algorithms, planning algorithms, or control algorithms. And the last is, once we have developed these algorithms and tested them at the model level, how we can implement and test them with the help of generated code in an integrated manner.

    So let's start with the first point, which is: how can I synthesize my scenarios? When it comes to synthesizing scenarios, there are multiple challenges. The first challenge itself is creating complex 3D scenes with very high fidelity environment details, and also modeling non-ideal sensors in the simulated environment. The second is that there are a lot of different components: there is the sensor, the scenario, the vehicle dynamics, the powertrain, and then there are perception and other kinds of algorithms.

    There could be different tools, there could be different components. How to bring all of these together? Now while you are doing the simulation, you may also want to control the simulation speed as well as control the fidelity of the simulation. You may not want to go to the 3D realistic environment while you are in the early phases of development.

    However, when you are in the advanced phase of development, you may want to go to the 3D realistic environment. How quickly can you scale to those different levels. The last challenge would be in order to test your automated driving systems, you should be able to create lots and lots of different scenarios. So creating these large number of scenarios in order to test your autonomous vehicle also is a significant amount of challenge.

    Now let's go ahead and talk about how MathWorks can help while you are building your scenes or scenarios. Before going into that, let's first understand the difference between a scene and a scenario. A scene consists of the static elements in the environment: road networks, lanes, signboards, traffic cones, or anything that doesn't move in the environment.

    The dynamic elements in the environment, like vehicles, pedestrians, bicycles, or anything that moves, are the dynamic components. So you create a static scene and then incorporate these dynamic elements into it to convert it into a scenario that can be used for testing. Now going into a little more detail on the environment, what I'm showing you here is RoadRunner, which allows you to create very high fidelity 3D realistic environments.

    You can start to create a road network simply, with the lane behaviors and lane modifications you want. You can modify junctions and lanes as per your requirements, and you can have whatever kind of junctions you want; different varieties of junctions are available.

    Many times, you may need to change the road elevation or road profile, the different kinds of junctions that need to be there, and the different kinds of props that you may need to add. Within these props, you may also need to add different kinds of textures, because the props may not always look identical. You may also want to make the scenes look realistic by introducing props like cones, trees, and bushes.

    So all these things can be very easily modeled inside RoadRunner. Not only that, there are a lot of different asset sets available based on specific locations and geographies. You can bring all of these components together and then stitch your scene together.

    Once you have created your scene, you can also bring in traffic lights. Not only can you create the traffic lights, you can also operationalize them once created and understand how a junction will behave when the traffic lights operate. So far, you have created a scene inside RoadRunner manually, going in and building it up yourself.

    However, if you have some data about the scene you want to create, you can also bring in that data and start to stitch your scene together. For example, if you have a road network in the form of an ASAM OpenDRIVE file, you can directly bring in that road network to stitch your scene together. If you have geographic information about the scene or scenario, you can bring in point clouds, elevation files, or aerial imagery data in order to create the scenes.

    Also, once you've created these scenes inside RoadRunner, you can export them into different formats, which can be directly ingested by different third-party simulators. Those range from CARLA, Unreal Engine, and CarMaker to NVIDIA DRIVE Sim and many others. RoadRunner also comes with the RoadRunner Asset Library, which allows you to introduce a lot of different assets, making your scene or scenario very close to the real-world environment.
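
    As a rough, hedged sketch, this scene export can also be driven from MATLAB through the RoadRunner API; the project folder path and scene name below are placeholders you would replace with your own, and RoadRunner must be installed and configured for MATLAB:

    rrApp = roadrunner("C:\RR\MyProject");               % launch RoadRunner on an existing project (assumed path)
    openScene(rrApp, "MyScene.rrscene");                 % open a scene that already exists in that project
    exportScene(rrApp, "MyScene.xodr", "OpenDRIVE");     % export the scene to ASAM OpenDRIVE
    close(rrApp);                                        % close the RoadRunner session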

    If you have access to map services like HERE HD Live Map, Zenrin maps, TomTom maps, or OpenStreetMap, which is open source as well, you can bring in the map information from these services and directly create large landscapes of the scene in which you want to test your automated driving application. Multiple users across the globe are already using RoadRunner to create their simulation environments. For example, Porsche is using RoadRunner to create a virtual platform for ADAS simulations.

    Similarly, RoadRunner is being used with NVIDIA's Omniverse for creating scenes for their automated driving stack. CARLA has also mentioned RoadRunner as a go-to tool for creating scenes for their simulation environment. So that was around scene creation. Once a scene is created, the next important thing is being able to simulate the scenario. For that there is RoadRunner Scenario, which directly ingests the scenes from RoadRunner.

    You can place different agents, which are the vehicles, inside the scene and start to create trajectories for individual vehicles in an event-based manner. Once you have defined the events for all the individual vehicles inside the scenario and you are happy with the scenario you have created, you can quickly go ahead and simulate it. You can define the velocity profiles, speed profiles, and waypoints for all the different vehicles that you have placed inside the scenario.

    You can simulate the scenario and see how it behaves, and you can take these scenarios for testing of your automated driving applications. RoadRunner Scenario also takes care of map awareness: if you have imported map information from a map service and placed many vehicles, it will automatically make each vehicle follow map-aware paths while simulating.

    You can also control the speed actions of individual vehicles on your own in an event-based manner, and you can create lane change actions for individual vehicles. You can also create unusual scenarios of the kind you may see in traffic or on the road. So you can create these events and scenarios and define, at individual time steps, how individual vehicles should behave.

    RoadRunner Scenario provides direct integration with MATLAB and Simulink, and you can also access it through a gRPC client. You can co-simulate with a lot of other third-party simulators, and you can export the scenarios created in RoadRunner to many third-party simulators or to open standards such as ASAM OpenSCENARIO.

    So once you have created the scenario inside RoadRunner Scenario, you can export it into OpenSCENARIO 1.x or OpenSCENARIO 2.x format. MathWorks is also an ASAM member that actively participates in the OpenSCENARIO 2.0 implementers forum. So as these standards are being developed, MathWorks is there understanding them and helping bring them into the tools as early as possible.

    Now, you may want to create these 3D realistic scenes and scenarios when you are in the very advanced phases of your development, or in particular when you are testing your perception algorithms. For algorithms like controls, sensor fusion, and path planning, you may not need to go into these 3D realistic scenarios and can test in a cuboid environment. That's where I was saying that you can choose the fidelity at which you want to test your algorithms.

    It could be a cuboid environment as well as a 3D realistic environment. Similar to RoadRunner Scenario, MATLAB also has the Driving Scenario Designer app, which allows you to create a cuboid scenario. Here, you can see I'm quickly creating a road network, adding waypoints to it, creating flexible roads, and then adding lanes to the road network and defining the individual lane types, the lane widths, and all the information required for my scenario or the road that I'm creating.

    Once I have created the road network, I can also create junctions if I want, and I can bring in different kinds of vehicles. Here, I'm adding a vehicle and defining the trajectory for the ego vehicle along the path. I can bring in a lot of different vehicles if I want: there was the ego vehicle, and now I'm adding another actor inside the scenario and defining its trajectory.

    Once I have defined the trajectories for these vehicles, I can quickly simulate and see how they behave. Not only that, I can also open the sensor canvas and start to add sensors like camera, LiDAR, and radar onto my ego vehicle with the guidance provided by the app, or I can choose to place the camera, LiDAR, and radar sensors anywhere.

    On the right, you can visualize the detections from the probabilistic sensor models and see how you are getting the detections. Whatever you have in the Driving Scenario Designer app, you can export it to a MATLAB script and modify the scenario as you want, or you can export it directly to Simulink and use it for testing in an integrated system. Similar to RoadRunner, the Driving Scenario Designer app also allows you to import map information from OpenStreetMap, Zenrin, and HERE HD Live Map.

    You can bring in this map information and start to create your large scenes or scenarios. You can also export the scenes or scenarios you have created into OpenDRIVE or OpenSCENARIO format, which can then be ingested by third-party simulators. So whatever you are building here inside RoadRunner or the Driving Scenario Designer app can be considered as a reference truth.
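
    As mentioned, the app can export the scenario to a MATLAB script; a minimal programmatic sketch of the same cuboid workflow could look like the following, where the road geometry, speeds, and radar mounting are illustrative values rather than anything from the webinar:

    scenario = drivingScenario('SampleTime', 0.05);        % cuboid scenario container
    road(scenario, [0 0; 80 0], 'Lanes', lanespec(2));     % straight two-lane road
    ego = vehicle(scenario, 'ClassID', 1);
    trajectory(ego, [5 -2 0; 75 -2 0], 15);                % ego waypoints (m) and speed (m/s)
    lead = vehicle(scenario, 'ClassID', 1);
    trajectory(lead, [25 -2 0; 75 -2 0], 10);              % slower lead vehicle in the same lane
    radar = drivingRadarDataGenerator('SensorIndex', 1, ...
        'MountingLocation', [3.7 0 0.2]);                  % forward-looking radar on the ego vehicle
    while advance(scenario)                                % step the scenario
        dets = radar(targetPoses(ego), scenario.SimulationTime);   % probabilistic radar detections
    end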

    This reference truth can be used to co-simulate and test your ADAS algorithms inside MATLAB and Simulink, or, if you want, you can export these scenarios and co-simulate with third-party simulators; that is also very much possible. There are also a lot of pre-built scenarios already available, like turns, U-turns, and many Euro NCAP scenarios, so that if you are developing ADAS features for European regions, you already have something to get started with.

    These NCAP scenarios, pre-built and made available inside the Driving Scenario Designer app, follow the standards defined by Euro NCAP. There are also a lot of scenes already available inside RoadRunner that you can use to create scenarios and test your automated driving features. Now here is a very nice example, wherein General Motors is creating synthesized scenarios from recorded vehicle data.

    They have multiple sensors mounted on a vehicle, and they have driven the vehicle on roads and collected the data. They brought that collected data offline into the lab and recreated the environment in simulation; essentially, bringing the real world into simulation. And this exercise they have done with MathWorks for their lane centering system development and testing.

    Now the next important thing I want to talk about is how we can develop algorithms. Let me first start with perception. When I'm talking about perception algorithm development, the first thing that is important is data acquisition. Second is pre-processing of the data. The third is detection and semantic segmentation. And finally, you need to understand what the environment looks like.

    That means where the different vehicles are, where the obstacles are, where the dynamic objects are. Now, if I try to understand the challenges of perception algorithms a little more, the first is: once you have collected the data, visualizing that data becomes challenging, because you could be working with multiple sensors, LiDAR, radar, camera, and bringing all these sensors into one platform and doing the visualization, pre-processing, and synchronization is a challenge.

    The second is data labeling. There is a lot of data you are collecting, thousands and thousands of miles of runs and iterations, and while building your perception algorithms, labeling this data becomes very important. So how can you automate some of that? Labeling is one of the big challenges when it comes to perception algorithms.

    If you are working on deep learning applications, then architecting the deep neural network and hyperparameter tuning of those architectures becomes important. And finally, once you have developed your algorithm, whether perception or anything else, embedded deployment and validation of these algorithms becomes very important. Now let's get started with how MATLAB can help you in data acquisition as well as data processing. MATLAB has a lot of interfaces that allow you to bring in data, whether from CAN buses, ROS bags, or LiDAR sensors.

    If you are working with a Velodyne LiDAR, you have the capability to directly stream the Velodyne LiDAR sensor data into MATLAB. So there are a lot of interfaces available to acquire the data as a stream. When it comes to algorithm development, there is camera calibration, which is one of the most important things while working on camera-based perception algorithms, and the coordinate transformations from vehicle to world, world to vehicle, and the many different coordinate systems that exist for the sensors as well as the world coordinates.

    Whether you are working with a single camera or a stereo camera, there are a lot of algorithms available to help you get started quickly. Similarly, when it comes to conventional algorithm development for LiDAR, whether it is LiDAR-camera calibration, 3D registration of point clouds, or ground plane detection from organized or even unorganized point cloud data, all these things are available as reference examples, which you can quickly use in your stack and workflows.
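
    For instance, a basic ground-removal and clustering pass on a recorded point cloud might look like this sketch; the file name is an assumed recording and the thresholds are illustrative:

    ptCloud = pcread('sampleLidarFrame.pcd');              % assumed recorded LiDAR frame
    maxDistance = 0.3;                                     % plane-fit tolerance in meters
    referenceVector = [0 0 1];                             % expect a roughly horizontal ground plane
    [~, groundIdx, nonGroundIdx] = pcfitplane(ptCloud, maxDistance, referenceVector);
    obstacles = select(ptCloud, nonGroundIdx);             % keep everything that is not ground
    [labels, numClusters] = pcsegdist(obstacles, 1.0);     % Euclidean clustering with a 1 m threshold
    pcshow(obstacles.Location, labels);                    % color each cluster differently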

    Similarly, when it comes to radar, whether you are working on FMCW and MFSK radars for adaptive cruise control technology, or on radar signal simulation and processing for automated driving, there are reference examples available for you to get started quickly. I won't go into detail on the conventional algorithms, but when you treat perception as a deep learning task, the algorithms you may want to build could be classification, regression, object detection, or semantic segmentation applications.
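
    To make the detection case concrete, here is a minimal sketch using a pretrained vehicle detector from Automated Driving Toolbox; the input image file is an assumed camera frame from your own data:

    detector = vehicleDetectorACF();                       % pretrained ACF vehicle detector
    I = imread('highway.png');                             % assumed front-camera frame
    [bboxes, scores] = detect(detector, I);                % detect vehicles in the frame
    annotated = insertObjectAnnotation(I, 'rectangle', bboxes, scores);
    imshow(annotated)                                      % visualize the detections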

    When it comes to deep learning, irrespective of what kind of data you are working on, images, signals, point clouds, or numeric data, MATLAB has end-to-end support for deep learning applications. If I split the deep learning workflow into three buckets: the first is data preparation, which covers ground truth labeling, simulation data generation, and data preprocessing. The second is AI modeling, which covers architecting the models and hyperparameter tuning. The last is deployment, which covers code generation and deployment to embedded devices, edge, or enterprise platforms.

    MathWorks has a lot of capabilities to support you in all three of these verticals of AI. On the data labeling side, MATLAB has a number of labeling apps that can help you automate the labeling process. If you're working on video or image data, there are the Image Labeler and Video Labeler apps.

    If you are working with synchronized LiDAR and camera data, there is the Ground Truth Labeler app, which allows you to bring in the data and do visualization as well as automated labeling. For model development, designing the neural network architecture, validating and training the model, as well as experimenting and doing hyperparameter tuning, are all available as app-based workflows, so that you don't invest your time in the nitty-gritty of deep learning but can step back and develop your end-to-end stack at a high level. While developing these AI algorithms, one of the biggest challenges we face is how to identify hyperparameter values that are robust enough for the application.

    How do we compare results across different data sets? Also, for one particular application, I may want to test out 10 different network architectures; how do I do that iteratively, and is there a defined, refined process for it? There are a couple of apps that help you design your deep neural network architectures as well as tune the hyperparameters. There is the Deep Network Designer, which allows you to choose from a list of pre-trained models that are already available.

    You can bring in these pre-trained models, or you can start to architect your own model inside the canvas, graphically. There are a lot of layers on the left panel that you can use to quickly start stitching your network together. Also, if you are an expert and feel you can get enough robustness in a handful of iterations, this app also allows you to train your deep neural networks from the app itself.

    You can also generate MATLAB code and reuse it to test or modify these architectures or the data and do rigorous testing. Alternatively, you can export the models you have created in Deep Network Designer directly to the Experiment Manager, which is an ideal tool for hyperparameter tuning. There you can define a lot of different hyperparameters that you want to try out.

    Those hyperparameters could be data sets, network architectures, learning rates, momentum, different kinds of solvers, anything. You can iterate over these parameters in an exhaustive sweep fashion or using Bayesian optimization. You can monitor the training progress plots, share them across your different teams, and export the trained network once you are satisfied with its robustness and accuracy.

    So with these apps, MATLAB can quickly help you stitch entire workflows together and build your applications. Also, if you are working with different frameworks like TensorFlow, Keras, or Caffe, MATLAB has the capability to directly import those models and help you with a lot of different workflows like code generation, data labeling, and others. There is one example wherein Autoliv did LiDAR data labeling inside MATLAB for the sole purpose of verifying radar data and radar sensors, and they significantly reduced their data labeling effort by this method.
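
    As a small sketch of that framework interoperability, a network exported from TensorFlow as a SavedModel can be brought in along these lines; the folder name here is an assumption:

    % 'myTFModel' is an assumed SavedModel folder exported from TensorFlow.
    net = importTensorFlowNetwork("myTFModel", ...
        "OutputLayerType", "classification");              % import as a MATLAB network
    analyzeNetwork(net)                                     % inspect the imported layers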

    Now I'll pass it to Sumit to talk about the sensor fusion piece.

    Hello, everyone. As more and more autonomous driving problems arise, the challenges become more and more difficult. The automobile industry is taking sensor fusion as the best choice to cope with the increasing complexity and reliability requirements of automated driving vehicles. It lays the foundation for how to manage and make use of the data from multiple devices inside the vehicle.

    Therefore, sensor fusion, which integrates a variety of complementary sensing methods, has become the focus of attention. There are a lot of challenges around sensor fusion development: identifying which tracker I should choose for my scenario, which set of motion models I should choose, how to fuse the data between multiple sensors, how to perform extended object fusion for high-resolution sensors, or how to perform metric-level validation for the algorithm I'm working on.

    So there are two aspects of fusion that I will cover. The first is related to self-awareness of the autonomous system, especially estimation of its position and orientation. The second is focused on situational awareness, that is, what's happening around the system. You can see the types of sensors used in these kinds of systems: radar, sonar, IR, LiDAR, looking outward from the platform. Within the platform, we see things like IMU, GPS, altimeter, and similar sensors.

    It's an important area for us. It brings together two domains that MATLAB is strong in: signal and image processing on one side, and controls on the other. Sensor fusion and tracking helps to bridge these two domains. Now let's step into what type of capabilities are available to develop and design autonomous algorithms.

    I will break this up into perception, sensor fusion, planning, and controls, and I'll talk about the sensor fusion piece. Localization is a critical component in autonomous systems. Platforms need to maintain situational awareness using localization algorithms, and they do so with a range of sensor modalities, including camera, LiDAR, and sonar.

    Localization is also referred to as pose estimation. Pose is built from position and orientation using one or multiple sensors; in the case of multiple sensors, it is often called multi-sensor pose estimation, or localization with sensor fusion. Each of these sensors is typically mounted in a different location on the autonomous system, and when measurements from these sensors are combined, the fusion algorithms need the pose information of the autonomous system to make sense of each measurement.

    So we saw that determining pose involves processing data from multiple sensors, which could be accelerometer, gyroscope, magnetometer, and GPS. Now, you may or may not have sensor data available with you for developing inertial navigation algorithms. There are different ways to bring that data into MATLAB to get you started; you have options here. You might have your own localization data.

    You can bring this data into MATLAB either as recorded data or streamed data. If you don't have access to data, that's no problem: there are sensor models in MATLAB and Simulink that cover these types of sensors. You can generate data directly from the sensor models to test your localization algorithms. Even when you have your own data, it's easy to augment your data sets with synthesized data.

    The key to synthesized data is how closely it matches the data you collect from hardware, and you have many options to configure the sensor models to match off-the-shelf hardware. There are a lot of examples available as part of the toolbox. You can use the inertial sensor fusion algorithms to estimate orientation and position over time; the algorithms are optimized for different sensor configurations, output requirements, and motion constraints.
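
    A minimal sketch of that idea, generating synthetic readings from an IMU sensor model and fusing them into an orientation estimate; the sample rate and the stationary motion profile are assumptions:

    Fs = 100;                                              % sample rate in Hz (assumed)
    N  = 1000;                                             % number of samples
    imu = imuSensor('accel-gyro', 'SampleRate', Fs);       % synthetic accelerometer + gyroscope model
    trueAcceleration = zeros(N, 3);                        % stationary platform in the navigation frame
    trueAngularVelocity = zeros(N, 3);
    [accelReadings, gyroReadings] = imu(trueAcceleration, trueAngularVelocity);
    orientationFilter = imufilter('SampleRate', Fs);       % accelerometer + gyroscope orientation fusion
    q = orientationFilter(accelReadings, gyroReadings);    % estimated orientation as quaternions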

    You can directly use IMU data from multiple inertial sensors in Simulink as well, and you can also fuse IMU data with GPS data. Next, an autonomous system also needs to make decisions based on its surroundings. These systems often use SLAM, that is, simultaneous localization and mapping, to generate a map of the environment in which the system will operate.

    This could be the inside of a building, an urban neighborhood, or even an unexplored planet. Along with SLAM, the autonomous system has to navigate using path planning techniques. For SLAM as well, the autonomous system needs to understand its pose at all times to make an accurate map and successfully move from point A to point B without colliding with other objects.

    This is one small example showing a SLAM-based approach. There are a lot of other examples for multiple sensors available as part of the toolbox that you can refer to. It could be a monocular camera, a LiDAR with a real-data approach in which you build a map from LiDAR data, or a LiDAR using synthetic data in which you design a LiDAR SLAM algorithm using the 3D simulation environment.
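
    A hedged sketch of the 2D LiDAR SLAM building blocks, assuming 'scans' is a cell array of lidarScan objects you have already collected; the range, resolution, and loop-closure settings are illustrative:

    maxLidarRange = 20;                                    % meters (assumed sensor range)
    mapResolution = 25;                                    % grid cells per meter
    slamAlg = lidarSLAM(mapResolution, maxLidarRange);
    slamAlg.LoopClosureThreshold = 200;                    % loop-closure acceptance score
    slamAlg.LoopClosureSearchRadius = 8;                   % meters
    for i = 1:numel(scans)                                 % 'scans': assumed cell array of lidarScan objects
        addScan(slamAlg, scans{i});                        % incrementally build the pose graph
    end
    [scansAtPoses, optimizedPoses] = scansAndPoses(slamAlg);
    map = buildMap(scansAtPoses, optimizedPoses, mapResolution, maxLidarRange);   % occupancy map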

    The next part I'll be talking about is tracking. The tracking algorithm development flow is shown here. As with the localization workflow, the input to the algorithm can be recorded data, live data, or data generated with the sensor models. We provide a library of configurable algorithms for multi-object tracking and localization.

    These algorithms connect to our visualizations and metrics; in all the examples, you will find how these are used. We have an open interface in terms of the object detection format, so it's easy to map your own detections into this format. The trackers include capabilities to establish and maintain a large number of tracks.

    We make no assumptions on how you define the parameters to track or how they are measured. We took a two-design-point approach in this area: the first design point is focused on being able to use trackers out of the box, and the second is how far you can customize the components, such as tracking filters, motion models, or data association algorithms. So we talked a bit about the detection format.

    We also have a standard definition of a track; let's focus on that. Again, on the left, we see the detection format, which is an open format you can map your detections to, and any detections generated from our sensor models are automatically produced in that format. Then we have the standard track format as well.

    Now, the key thing I want to highlight here is that a multi-object tracker is more than just a filter. Certainly, the filter is an important part of any multi-object tracker, but in addition to filters, we also provide a library of motion models and data association algorithms to pick from. We are really focused on covering the full lifecycle of a track in the multi-object tracker.

    We also have a range of trackers to pick from. You can use trackers out of the box, or you can customize your own trackers with the library of tracking filters, motion models, and assignment algorithms. So we start to give you more choices, but with all these different choices, how can you decide whether one design is better than another? Along with providing these types of trackers and filters, we also provide metric calculations to help you assess the performance of your design. Using all these algorithms, you can design, simulate, and test tracking algorithms for a range of sensor modalities, including sensor combinations.
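
    As a minimal sketch of the out-of-the-box design point, a JPDA tracker can be configured with a stock constant-velocity filter and fed standard objectDetection objects; the measurement below is a made-up position:

    tracker = trackerJPDA( ...
        'FilterInitializationFcn', @initcvekf, ...         % constant-velocity extended Kalman filter
        'ConfirmationThreshold', [4 5], ...                % confirm after 4 hits out of 5 updates
        'DeletionThreshold', [5 5]);                       % delete after 5 consecutive misses
    det = objectDetection(1.0, [20; -3; 0], ...            % time 1 s, position measurement [x; y; z] in meters
        'MeasurementNoise', 0.5*eye(3));
    tracks = tracker({det}, 1.0);                          % update the tracker with one detection at t = 1 s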

    Many of the examples I show here use an automated driving scenario, but as I noted in the beginning, the concepts apply to autonomous systems more broadly. In the interest of time, I'll walk you through one of the examples. As we discussed earlier, you can generate scenarios, including actors around an ego vehicle. You can generate these scenes with a focus on corner cases that ensure safety requirements are achieved. This data can be used to augment the scenarios you would otherwise collect in the field.

    You can also add sensor models to any ego vehicle. Here we used a LiDAR point cloud generator to go along with our radar and camera detection generators. In this example, we will use the LiDAR and radar models to create detections, which will then be used to develop and test tracking and fusion algorithms. In our scenario, we start with a LiDAR sensor mounted on an ego vehicle with 360 degrees of coverage.

    On the left panel, you see the view from the ego vehicle's perspective, looking in the forward direction and to the rear of the ego vehicle. On the top of the right panel, the bird's-eye view is shown. The green dots represent the raw point cloud detections. Our first step after we have the scenario running is to generate LiDAR detections from the raw point cloud. This involves segmentation to remove the ground plane.

    We then fit the resulting point cloud with 3D bounding boxes, which are used as the input to a multi-object tracker; here, the 3D bounding boxes are fed to a joint probabilistic data association (JPDA) multi-object tracker. Note that the track is maintained throughout the scenario. We use an IMM (interacting multiple model) filter to improve the tracker's ability to handle maneuvering vehicles that change lanes.

    Now let's get to the radar tracker. A 2D point cloud is formed from the radar sensor; in this view, the blue dots represent radar detections. You can see more than one detection from each object in the radar field of view. With a conventional multi-object tracker, you would cluster these multiple detections into a single detection and use that as the input to the tracker.

    This often results in false tracks, as well as inconsistent track positions that vary with the aspect angle of the radar. In this example, we instead use extended object trackers, which help you estimate size and orientation. On the left side of the plot, note the ground truth centered around the blue ego vehicle, and the zoomed-in view of one of the vehicles in the radar coverage.

    The number of detections varies based on the aspect angle. The track for this type of system is represented by a rectangular shape around each object. In the visualization, the dotted rectangle is the track, shown around the ground truth cuboid; you can see how closely these match.

    You can also see how tracking detections that are not clustered into single point detections can be used to estimate size and orientation. Now let's look at what happens when we fuse the tracks from each of the sensor modalities together. In general, LiDAR is a higher-resolution sensor than radar, so we expect the results from the LiDAR tracker to be better than from radar.

    But if you recall, we saw from the LiDAR tracker that when the vehicles were next to each other, the LiDAR tracks migrated together. This is not desirable, for obvious reasons. It happens because the LiDAR tracker does not have enough information to resolve the two tracks. The radar sees this case through a different lens.

    The radar track includes a velocity component, which helps distinguish one vehicle overtaking the other. The radar performance during this part of the scenario is much closer to our ground truth, and this is how we can fuse the LiDAR and radar tracks together. Finally, we can assess the evaluation metrics for missed targets and false tracks, and see how our algorithms are performing in terms of the GOSPA metric, which gives an integrated view of the fusion results versus the ground truth of the scenario.
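
    A hedged sketch of that last assessment step, assuming 'tracks' comes from your tracker and 'truths' from the scenario ground truth, with an illustrative cutoff distance:

    gospaMetric = trackGOSPAMetric('Distance', 'posabserr', ...   % absolute position error as the base distance
        'CutoffDistance', 30);                                    % assignments beyond 30 m count as missed/false
    % 'tracks' and 'truths' are assumed to come from the tracker and the scenario ground truth.
    gospaValue = gospaMetric(tracks, truths);                     % lower is better: combines localization,
                                                                  % missed-target, and false-track components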

    That is the example I wanted to talk about. I also want to highlight one story here, in which Scania developed an advanced emergency braking system: they developed sensor fusion algorithms, simulated and verified the designs, and generated code for implementation on a production ECU. With this, I'll hand over to Rishu to talk about planning and controls.

    Thanks, Sumit, for sharing insights on the sensor fusion piece. So until now, we have talked about how we can synthesize scenarios. We also talked about the development of perception algorithms: we touched upon some concepts of deep learning and conventional perception algorithms, and we talked about sensor fusion. Now let's go ahead and take a brief look at how we can develop planning and controls algorithms inside MATLAB.

    When it comes to the development of planning algorithms, there are a lot of different kinds of planning algorithms you could be working on. MATLAB supports a lot of different planners, RRT, RRT*, and many others, whether you are working on global planning (your path from point A to point B), behavioral planning (which is high-level decision making), or local planning for navigation in a dynamic environment. On top of that, for vehicle controller development, whether you are working on lateral vehicle control or longitudinal vehicle control, all of those things are very much available.
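
    For the global planning piece, a minimal sketch with an RRT* planner on an occupancy map could look like this; the map size, obstacle, and start/goal poses are illustrative:

    map = binaryOccupancyMap(50, 30, 1);                   % empty 50 m x 30 m map, 1 cell per meter
    setOccupancy(map, [20 10; 20 11; 20 12], 1);           % a small illustrative obstacle
    ss = stateSpaceSE2;                                    % [x y theta] state space
    ss.StateBounds = [map.XWorldLimits; map.YWorldLimits; [-pi pi]];
    sv = validatorOccupancyMap(ss);                        % collision checking against the map
    sv.Map = map;
    sv.ValidationDistance = 0.5;
    planner = plannerRRTStar(ss, sv);
    planner.MaxConnectionDistance = 2.0;                   % limit edge length in the tree
    [pathObj, solnInfo] = plan(planner, [2 2 0], [45 25 0]);   % start and goal poses [x y theta]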

    And MATLAB has made it all very simple for you to adopt and continuously make progress on your particular application. Some of the common challenges we face while working on planning and controls algorithms are these: in general, when we work on these problems, it is often with the help of very simplistic state machines or PID controllers. So how can we go beyond that? How can you actually design linear model predictive controllers or nonlinear MPC controllers, which can be more complex and give you higher accuracy?

    How can you go further by training reinforcement learning agents? How can you choose appropriate planning algorithms? Like I mentioned, a lot of different algorithms are available, whether you are working on global planning, local planning, or behavioral planning, so how do you choose among them for your task? And how do you tune these algorithms, or do hyperparameter tuning of these algorithms, to fit your specific requirements?

    All this becomes a challenge. So now, let's take another example, highway lane change, and start from that. Here, the first thing we need to do is synthesize the scenario. We have seen that there are a lot of ways in which we can synthesize a scenario in MATLAB. One such way is the cuboid world, where I can directly import the scenarios I have created in the Driving Scenario Designer app and bring them inside any of the test benches. And there are a lot of test benches, as we have seen: highway lane following, and now we are looking at highway lane change.

    The next thing is, while you are designing the planner inside MATLAB, there are a lot of different planners available. In general, the planner is separated into two components: a simple behavior layer that sets the terminal states for the candidate trajectories, and a motion planner layer that generates collision-free trajectories conforming to the road shape without violating any constraints.

    In the end, the planner creates a virtual lane center for the controller to follow. When it comes to modeling the dynamics of the vehicle, there are a lot of reference examples available, and many of them include a bicycle model, a simplistic 3-degrees-of-freedom vehicle dynamics model. However, if you want to incorporate a very high fidelity vehicle dynamics model, like a 16-degrees-of-freedom model, you can parameterize and do that as well.
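
    As a stand-in for the simplest end of that fidelity range, a kinematic bicycle model can be simulated in a few lines; note the Simulink reference examples use a 3-DOF dynamic bicycle model, and the wheelbase and commands here are assumed values:

    kinModel = bicycleKinematics('WheelBase', 2.8, ...     % kinematic bicycle (not the 3-DOF dynamic model)
        'MaxSteeringAngle', 0.5);
    initialState = [0 0 0];                                % [x y heading]
    tspan = 0:0.05:5;                                      % 5 seconds at 20 Hz
    cmds = [10 0.05];                                      % constant 10 m/s speed, 0.05 rad steering angle
    [t, states] = ode45(@(t, y) derivative(kinModel, y, cmds), tspan, initialState);
    plot(states(:,1), states(:,2));                        % resulting vehicle path in the plane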

    While you are doing the algorithm development or simulating the test bench on a scenario, you can view the scenario in the bird's-eye view. You can look at the different trajectories that are generated: the trajectory your vehicle is following, the candidate trajectories provided by the planner, and which trajectory, based on the controller logic, the ego vehicle will follow.

    All of these visualizations help you understand whether your application, the code you have developed, or the algorithms you have implemented are working appropriately or not. And this is just one scenario; there could be thousands of scenarios in which you may want to test these applications, and there is a way to bring in all of those thousands of scenarios and automate your entire testing stack.

    Like I mentioned, we talked about a couple of reference examples, highway lane following and highway lane change. Depending on what application you are working on, there is likely a reference example available for you to adopt quickly. For example, if you are working on adaptive cruise control, lane keep assist, or automated parking valet as a feature, there are reference examples available inside Automated Driving Toolbox that will give you a quicker start.

    And many times, the controllers available in these particular test benches are themselves high fidelity, so if you want, you can do code generation and see how they behave on your controller or target. There are other components too: if you want to train a reinforcement learning agent for different kinds of applications, whether for lane keep assist, automated parking valet, or adaptive cruise control, again, there are reference examples available for you to get started quickly.

    And not only reinforcement learning agents: if you are working on nonlinear MPCs as well, there are a lot of reference examples available that will help you get started quickly. Here, Mobileye actually leverages MATLAB for control algorithm development and deployment to real-time hardware. The deployment they wanted to do was on their EyeQ chip for testing their controls algorithms.

    Now, before going any further, since we have covered a lot of ground, I want to ask you a poll question: what are your current areas of development while working on automated driving? You will see a poll window popping up on your screen right now. The options you will see on the poll are: environment modeling, which is scene or scenario creation.

    Perception algorithm development; sensor fusion algorithm development; path planning and controller logic; open-loop or closed-loop testing or virtual validation of subcomponents, where the subcomponents could be perception, path planning, sensor fusion, controls, or any other; or closed-loop testing or virtual validation of your ADAS features, which could be AEB, ACC, highway lane following, or a highway lane controller.

    Or are you invested in automatic code generation and software-in-the-loop testing of subcomponents or features? Or is this more of a learning exercise for you right now, and you are not currently doing active development in any of these workflows? While you answer the poll question, I'll continue ahead.

    But I would request you to please go ahead and answer the poll question; that helps us understand what you would be more interested in, and in future topics we can go deeper into one of these subtopics as well. Now, the last thing I want to talk about is code generation, testing, and implementation.

    Some of the common challenges when we want to test our algorithms on hardware: there is a lot of different hardware and a lot of different subcomponents. How do we do unit testing, or integrated testing? How do we keep the different algorithmic changes we are making and the implementations of the other subcomponents in synchronization?

    How can we quickly develop an algorithm and also test it by generating code and deploying it onto the embedded hardware? While doing subcomponent testing, can I also take it and do closed-loop testing with that test bench? There is also testing across different kinds of tools: you could be deploying on FPGAs, GPUs, CPUs, a lot of different hardware on which you may want to deploy your applications.

    Can you have just one reference algorithmic framework and use automatic code generation to deploy and test on all these different hardware targets? And while you are generating code for these applications, are you following the standards that are set for code generation and deployment? These standards could be MISRA C compliance, ISO 26262, and many others.

    Now, with the help of MATLAB and Simulink and the different coder products on offer, we can actually generate code that can be deployed to different embedded targets. These embedded targets could be CPUs, GPUs, as well as FPGAs. You can also deploy your algorithms in the form of a ROS node, which can then be integrated into your ROS network.

    So irrespective of the target you are working on, MATLAB can help you generate C/C++ code, or target-specific C/C++ code with target-specific intrinsics and libraries. We can help you generate CUDA code if you are trying to leverage GPUs. And if you're trying to generate code for FPGAs, ASICs, or programmable SoCs, we can help by generating VHDL as well.
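
    A hedged sketch of what driving this from MATLAB Coder and GPU Coder can look like, assuming 'laneDetectFcn' is your own entry-point function taking a 480x640 RGB frame:

    % 'laneDetectFcn' is an assumed entry-point function; adjust the -args sizes to your input.
    cfgCpu = coder.config('lib');                          % generate a C/C++ static library
    cfgCpu.TargetLang = 'C++';
    codegen laneDetectFcn -config cfgCpu -args {ones(480,640,3,'uint8')}

    cfgGpu = coder.gpuConfig('mex');                       % CUDA MEX via GPU Coder for NVIDIA GPUs
    codegen laneDetectFcn -config cfgGpu -args {ones(480,640,3,'uint8')}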

    There is an example wherein we are doing the development on the host machine, and once the code is generated, we automatically take this generated code and deploy it onto the embedded target. The process can be very simple: we just choose the executable, define the target hardware, and configure the settings as per our requirements, like which hardware it is, its username and password.

    What is the build directory? What are the different libraries I may want to target? Do I need to include any sub-libraries or main files in my generated code? Once I'm done with all of these settings, I can quickly close the dialog and generate the code. Once the code is generated, you can do a full analysis of the generated code, for example how many variables are created.

    You can look through the entire report, seeing where memory allocation happens and how you are freeing the memory. Once all of this is done, you can simply copy all these files onto the embedded target and run the executable on the target itself.

    So during application development, code generation and deployment onto an embedded target from inside MATLAB can be super easy and super flexible. From the same reference application, you can generate code for an ARM CPU as well as NVIDIA GPUs, or if you want to deploy it to an FPGA, that can also be facilitated.

    This is the application running on a Jetson Xavier board. You can also generate C/C++ code irrespective of what application you are working on: whether it is perception algorithms working on camera or LiDAR, or sensor fusion algorithms, you can generate C/C++ code that can be deployed. And as with any workflow, there are a lot of reference examples available to help you get started.

    The first example I'm showcasing here does code generation for deep learning applications: code generation for lane and vehicle detection, which are two different deep learning models, targeting an NVIDIA GPU. The second does code generation for sensor fusion algorithms in C/C++. And the third does code generation and wraps the generated code into a ROS node, which can be deployed on a ROS network.

    Along similar lines, Hitachi has developed its MPC controller and done code generation and deployment of the generated code on an embedded target. For this purpose, they used Embedded Coder to generate target-specific code, which is further optimized compared to generic C/C++ code. You can also generate synthesizable RTL code that can be deployed onto FPGA, ASIC, or SoC devices.

    This is readable VHDL code, which can be modified or integrated into the system or subsystem you are working on. You can also convert single-precision code to fixed-point precision using some of the automated guidance features provided by Fixed-Point Designer. The generated code can also comply with safety standards like ISO 26262, which is required by automotive authorities.
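
    A tiny sketch of the single-precision-to-fixed-point idea using a fi object; the word and fraction lengths here are illustrative, whereas Fixed-Point Designer's guided workflow proposes these formats for you:

    coeff   = single(0.78539816);                          % original single-precision coefficient
    coeffFx = fi(coeff, 1, 16, 15);                        % signed, 16-bit word, 15 fractional bits
    quantizationError = double(coeffFx) - double(coeff);   % error introduced by the conversion
    disp(coeffFx.WordLength)                               % inspect the chosen fixed-point format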

    Again, whether you are deploying on a CPU, GPU, or FPGA, there are reference examples available that will guide you in setting up your entire workflow and doing the code generation and deployment. So the bottom line is: while you are doing the algorithm development in MATLAB for your entire end-to-end stack, you can also do code generation for an individual component or sub-component, deploy it onto the target, and do processor-in-the-loop verification of that component.

    Or you can generate code for the entire test bench and do software-in-the-loop testing of the entire test bench inside the MATLAB and Simulink environment itself. Here, Valeo implemented point cloud processing algorithms for automotive LiDAR on an FPGA. Once the algorithm was developed and code was generated, they ran it on their HIL setup directly and compared it with the Simulink implementation of their entire application.

    While you are doing the development inside MATLAB and Simulink, they support co-simulation with a lot of different EDA simulators from Cadence, Microsemi, Xilinx, Intel, and others. You can test your applications processor-in-the-loop, FPGA-in-the-loop, software-in-the-loop, in many different ways, while also doing the testing with the actual hardware. You can create a lot of test benches and models, including UVM test benches, and do the testing of your end-to-end pipeline or of the sub-components that are in development for ADAS.

    To summarize the discussion, I would like to bring back all the things we have talked about. In your V-cycle, you could be doing the testing at different stages. You could be doing model-in-the-loop testing for a subsystem or a sub-component, software-in-the-loop or hardware-in-the-loop testing for a component or sub-component, or integrated testing.

    You can shift left and do early verification of individual components in your development and design process, be assured of them, and save a lot of development time while developing an entire ADAS feature. This is the last success story I want to share with you, where NXP verified its automotive radar semiconductors in the development phase itself on virtual test fields, with the help of fields generated using MATLAB.

    And they integrated the MATLAB functions into the Cadence simulation environment for verification. So if you are doing any of these kinds of development, simulation, and testing, we would be happy to talk about any of these topics and learn more and work more with you. With that, I would like to push my second poll question: what topics from today's discussion are of interest to you, and would you be interested in talking to a MathWorks technical expert about any of these?

    The options that follow are: environment modeling (scene or scenario creation); ADAS algorithm development; open-loop and closed-loop testing or virtual validation of ADAS subcomponents; closed-loop virtual validation of ADAS features; automatic code generation and software-in-the-loop, hardware-in-the-loop, or FPGA-in-the-loop testing; or none of the above.

    Please help us by answering the poll question; it will help us connect with you if you are interested in learning more and discussing your problem statements. There are a lot of trainings available if you are working on automotive applications. On the left, you can see a lot of training courses that are available for free; these are quick Onramp courses. There are also more detailed training courses available.

    And on the right, you can see a lot of webinars available on different topics. So if you are interested in learning more, there are a lot of topics and a lot of resources available, and we would be happy to learn more and work with you. Thank you very much for your attention. I'll stop sharing my screen, and we'll go to the Q&A panel and see if there are questions.
