Automate Ground Truth Labeling for Vehicle Detection Using PointPillars

This example shows how to automate vehicle detection in a point cloud using a pretrained pointPillarsObjectDetector object in the Lidar Labeler app. The example uses the AutomationAlgorithm interface of the app to automate labeling.

Lidar Labeler App

Good ground truth data is crucial for developing automated driving algorithms and evaluating their performance. However, creating and maintaining a diverse, high-quality, labeled data set requires significant effort. The Lidar Labeler app provides a framework to automate the labeling process using the AutomationAlgorithm interface. You can create a custom algorithm and use it in the app to label your entire data set. You can also edit the results to accommodate challenging scenarios missed by the algorithm.

In this example, you:

  1. Use a pretrained pointPillarsObjectDetector to detect objects of class 'vehicle'.

  2. Create an automation algorithm that you can use in the Lidar Labeler app to automatically label vehicles in the point cloud using the PointPillars object detector.

Detect Vehicles Using PointPillars Object Detector

Detect vehicles in a point cloud using a pretrained PointPillars object detector. For information on how to train a PointPillars network, see the Lidar 3-D Object Detection Using PointPillars Deep Learning example. You can improve detection performance by iteratively retraining the network with your own custom training data.

Use the pretrained object detector to detect the vehicles:

  • Read the point cloud.

  • Run the detector on the point cloud to detect bounding boxes.

  • Display the point cloud with bounding boxes.

% Load pretrained detector.
pretrainedDetector = load("pretrainedPointPillarsDetector.mat","detector");
detector = pretrainedDetector.detector;

% Read the point cloud.
ptCloud = pcread("PandasetLidarData.pcd");

% Detect the bounding boxes.
[bboxes,~,~] = detect(detector,ptCloud);

% Display the detections on the point cloud.
figure
ax = pcshow(ptCloud.Location);
showShape("cuboid",bboxes,"Parent",ax,"Opacity",0.1,"Color","green","LineWidth",0.5)
hold on
zoom(ax,2.5)
title("Detected vehicles on Point Cloud")

Define Lidar Vehicle Detector Algorithm in Lidar Labeler

Download the point cloud sequence (PCD files). For illustration purposes, this example uses the PandaSet data set from Hesai and Scale [1]. PandaSet contains point cloud scans of various city scenes captured using the Pandar 64 sensor. Execute the following code block to download and save the lidar data in a temporary folder. Depending on your internet connection, the download process can take some time. The code suspends MATLAB® execution until the download process is complete. Alternatively, you can download the data set to your local disk using your web browser and extract the file.

Download the point cloud sequence to a temporary location.

    outputFolder = fullfile(tempdir,'Pandaset');
    lidarDataTarFile = fullfile(outputFolder,'Pandaset_LidarData.tar.gz');

    if ~exist(lidarDataTarFile,'file')
        mkdir(outputFolder);
        disp('Downloading Pandaset Lidar driving data (5.2 GB)...');
        component = 'lidar';
        filename = 'data/Pandaset_LidarData.tar.gz';
        lidarDataTarFile = matlab.internal.examples.downloadSupportFile(component,filename);
        untar(lidarDataTarFile,outputFolder);
    end

    % Check if the tar.gz file is downloaded, but not uncompressed.
    if ~exist(fullfile(outputFolder,'Lidar'),'file')
        untar(lidarDataTarFile,outputFolder);
    end

Open the Lidar Labeler app and load the point cloud sequence.

    pointCloudDir = fullfile(outputFolder,'Lidar');
    lidarLabeler(pointCloudDir);

The app displays the point cloud data and the time range, as shown in the following image.

AutomaticVehicleDetectionDisplaySignal.png

On the ROI Labels tab in the left pane, click Label. Define an ROI label with the name Vehicle and the label type Cuboid. Optionally, you can select a color. Click OK.

This example runs the algorithm on a subset of the PandaSet point cloud frames. Specify a time interval of 0 to 15 seconds by entering 15 in the End Time box. The range slider and text boxes then show the interval from 0 to 15 seconds. The app displays and applies the automation algorithm only to the frames in this interval.

AutomateVehicleDetectionFrameRange.png

On the LABEL tab of the app toolstrip, in the Automate Labeling section, click Select Algorithm > Add Algorithm > Create New Algorithm. The app opens a template of the lidar.labeler.AutomationAlgorithm class that enables you to define a custom automation algorithm. You can also use this class to define a user interface within the app to run the algorithm. For more information, see Create Automation Algorithm for Labeling.

Now, define an automation class for the lidar vehicle detector algorithm. The class inherits from the lidar.labeler.AutomationAlgorithm abstract base class. The base class defines properties and signatures for methods that the app uses to configure and run the custom algorithm. The LidarVehicleDetector class is based on this template and provides you with a ready-to-use automation class for vehicle detection in a point cloud. The comments in the class outline the basic steps needed to implement each API call.
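
The following abbreviated outline is a sketch of how the pieces of the LidarVehicleDetector class fit together, including the inheritance from the base class and the method signatures. The full property and method definitions appear in the sections that follow; the placeholder bodies below are not the complete implementation.

% Abbreviated outline of the LidarVehicleDetector class (sketch only;
% see the following sections for the full definitions).
classdef LidarVehicleDetector < lidar.labeler.AutomationAlgorithm

    properties(Constant)
        % Step 1: Name, Description, and UserDirections.
        Name = 'Lidar Vehicle Detector';
        Description = 'Detect vehicles in point cloud using the pretrained PointPillars object detector.';
        UserDirections = {'See Step 1 for the full user directions.'};
    end

    properties
        % Step 2: Properties used during algorithm execution.
        SelectedLabelName
        PretrainedDetector
        ConfidenceThreshold = 0.45;
    end

    methods(Static)
        function isValid = checkSignalType(signalType)
            % Step 3: Validate the signal type.
            isValid = (signalType == vision.labeler.loading.SignalType.PointCloud);
        end
    end

    methods
        function isValid = checkLabelDefinition(~,labelDef)
            % Step 3: Validate the label definition.
            isValid = labelDef.Type == labelType.Cuboid;
        end
        function isReady = checkSetup(algObj)
            % Step 3: Check that an ROI label definition is selected.
            isReady = ~isempty(algObj.SelectedLabelDefinitions);
        end
        function settingsDialog(algObj)
            % Step 3: Open the settings dialog.
            lidarVehicleDetectorSettings(algObj)
        end
        function initialize(algObj,~)
            % Step 4: Store the selected label name and load the detector.
        end
        function autoLabels = run(algObj,ptCloud)
            % Step 4: Detect vehicles and return cuboid labels.
            autoLabels = [];
        end
        function terminate(~)
            % Step 4: No cleanup needed.
        end
    end
end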

Algorithm Properties

Step 1 contains properties that define the name and description of the algorithm and the directions for using the algorithm.

    % ----------------------------------------------------------------------
    % Step 1: Define the required properties describing the algorithm. This
    % includes Name, Description, and UserDirections.
    properties(Constant)
        
        % Name Algorithm Name
        %   Character vector specifying the name of the algorithm.
        Name = 'Lidar Vehicle Detector';
        
        % Description Algorithm Description
        %   Character vector specifying the short description of the algorithm.
        Description = 'Detect vehicles in point cloud using the pretrained PointPillars object detector.';
        
        % UserDirections Algorithm Usage Directions
        %   Cell array of character vectors specifying directions for
        %   algorithm users to follow to use the algorithm.
        UserDirections = {['ROI Label Definition Selection: select one of ' ...
            'the ROI definitions to be labeled'], ...
            ['Run: Press RUN to run the automation algorithm. '], ...
            ['Review and Modify: Review automated labels over the interval ', ...
            'using playback controls. Modify/delete/add ROIs that were not ' ...
            'satisfactorily automated at this stage. If the results are ' ...
            'satisfactory, click Accept to accept the automated labels.'], ...
            ['Change Settings and Rerun: If automated results are not ' ...
            'satisfactory, you can try to re-run the algorithm with ' ...
            'different settings. To do so, click Undo Run to undo ' ...
            'current automation run, click Settings, make changes to Settings, ' ...
            'and press Run again.'], ...
            ['Accept/Cancel: If the results of automation are satisfactory, ' ...
            'click Accept to accept all automated labels and return to ' ...
            'manual labeling. If the results of automation are not ' ...
            'satisfactory, click Cancel to return to manual labeling ' ...
            'without saving the automated labels.']};
    end

Custom Properties

Step 2 contains the custom properties needed for the core algorithm.

    % ---------------------------------------------------------------------
    % Step 2: Define properties you want to use during the algorithm
    % execution.
    properties
        
        % SelectedLabelName 
        %   Name of the selected label. Vehicles detected by the algorithm 
        %   are assigned this label name.
        SelectedLabelName
        
        % PretrainedDetector
        %   PretrainedDetector saves the pretrained PointPillars object 
        %   detector.
        PretrainedDetector
        
        % ConfidenceThreshold
        %  Specify the confidence threshold to use only detections with 
        %  confidence scores above this value.
        ConfidenceThreshold = 0.45;           
        
    end

Function Definitions

Step 3 defines the functions that the app uses to validate the signal type, label definition, and algorithm setup, and to display the settings dialog box.

The checkSignalType function checks if the signal data is supported for automation. The lidar vehicle detector supports signals of the type PointCloud.

        function isValid = checkSignalType(signalType)            
            % Only point cloud signal data is valid for the Lidar Vehicle
            % detector algorithm.
            isValid = (signalType == vision.labeler.loading.SignalType.PointCloud);           
        end

The checkLabelDefinition function checks if the label definition is the appropriate type for automation. The lidar vehicle detector requires the Cuboid label type.

        function isValid = checkLabelDefinition(~,labelDef)            
            % Only cuboid ROI label definitions are valid for the Lidar
            % vehicle detector algorithm.
            isValid = labelDef.Type == labelType.Cuboid;
        end

The checkSetup function checks if an ROI label definition is selected for automation.

        function isReady = checkSetup(algObj)            
            % Is there one selected ROI Label definition to automate.
            isReady = ~isempty(algObj.SelectedLabelDefinitions);
        end

The settingsDialog function obtains and modifies the properties defined in Step 2. This API call lets you create a dialog box that opens when you click the Settings icon in the Automate tab. To create this dialog box, use the dialog function to quickly create a simple modal window to optionally modify the confidence threshold. The lidarVehicleDetectorSettings method contains the code for settings and input validation steps.

        function settingsDialog(algObj)
            % Invoke dialog with option for modifying the confidence threshold. 
            lidarVehicleDetectorSettings(algObj)
        end
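
The lidarVehicleDetectorSettings helper is not listed in this example. A minimal sketch of such a helper, assuming a simple modal dialog created with the dialog function and a single edit box for the confidence threshold, could look like the following; the layout values and validation logic are assumptions, not the actual implementation.

        function lidarVehicleDetectorSettings(algObj)
            % Sketch of an assumed settings helper: a modal dialog with an
            % edit box for the confidence threshold and an OK button.
            d = dialog('Position',[300 300 250 110],'Name','Settings');
            uicontrol('Parent',d,'Style','text','Position',[20 75 210 20], ...
                'String','Confidence threshold (0 to 1)');
            editBox = uicontrol('Parent',d,'Style','edit','Position',[20 50 210 22], ...
                'String',num2str(algObj.ConfidenceThreshold));
            uicontrol('Parent',d,'Position',[90 10 70 25],'String','OK', ...
                'Callback','uiresume(gcbf)');

            % Wait until the user clicks OK or closes the dialog.
            uiwait(d);
            if isvalid(d)
                % Validate the input before updating the property.
                value = str2double(get(editBox,'String'));
                if ~isnan(value) && value > 0 && value <= 1
                    algObj.ConfidenceThreshold = value;
                end
                delete(d);
            end
        end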

Execution Functions

Step 4 specifies the execution functions. The initialize function populates the initial algorithm state based on the existing labels in the app. In this example, the initialize function performs the following steps:

  • Store the name of the selected label definition.

  • Load the pretrained pointPillarsObjectDetector and save it to the PretrainedDetector property.

        function initialize(algObj,~)           
            % Store the name of the selected label definition. Use this
            % name to label the detected vehicles.
            algObj.SelectedLabelName = algObj.SelectedLabelDefinitions.Name;
            
            % Load the pretrained pointPillarsObjectDetector.
            pretrainedDetector = load('pretrainedPointPillarsDetector.mat','detector');
            algObj.PretrainedDetector = pretrainedDetector.detector;          
        end

The run function defines the core lidar vehicle detector algorithm of this automation class. The app calls the run function for each frame of the point cloud sequence and expects the automation class to return a set of labels. You can extend the algorithm to any category the network is trained on. In this example, the network detects objects of the class 'Vehicle'.

        function autoLabels = run(algObj,ptCloud)           
            bBoxes = [];
            for i = 1:2
                if i == 2
                    % Rotate the point cloud by 180 degrees.
                    theta = 180;
                    trans = [0, 0, 0];
                    tform = rigidtform3d([0 0 theta], trans);
                    ptCloud = pctransform(ptCloud,tform);
                end

                % Detect the bounding boxes using the pretrained detector.
                [box,~,~] = detect(algObj.PretrainedDetector,ptCloud, ...
                    "Threshold",algObj.ConfidenceThreshold);

                if ~isempty(box)
                    if i == 2
                        box(:,1) = -box(:,1);
                        box(:,2) = -box(:,2);
                        box(:,9) = -box(:,9);
                    end
                    bBoxes = [bBoxes;box];
                end
            end

            if ~isempty(bBoxes)
                % Add automated labels at bounding box locations detected
                % by the vehicle detector, of type Cuboid and with the name
                % of the selected label.
                autoLabels.Name     = algObj.SelectedLabelName;
                autoLabels.Type     = labelType.Cuboid;
                autoLabels.Position = bBoxes;
            else
                autoLabels = [];
            end
        end

The terminate function handles any cleanup or tear-down required after the automation is done. The app invokes this function after the run function processes the last frame in the specified interval, or after you stop the algorithm. The LidarVehicleDetector algorithm does not require resetting any of its parameters, so the function body is empty.
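
A minimal sketch of the empty terminate method looks like this:

        function terminate(~)
            % No cleanup or tear-down is required for the Lidar Vehicle
            % Detector algorithm, so the method body is empty.
        end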

Use Lidar Vehicle Detector Automation Class in App

To use the properties and methods implemented in the LidarVehicleDetector automation algorithm class file with Lidar Labeler, you must import the algorithm into the app.

First, create the folder structure +lidar/+labeler under the current folder, and copy the automation class file into it.

Note: The LidarVehicleDetector.m file must be in the same folder where you create the +lidar/+labeler folder structure.

    mkdir('+lidar/+labeler');
    copyfile('LidarVehicleDetector.m','+lidar/+labeler');

Under Select Algorithm, select Refresh list. Then, click Select Algorithm again and select Lidar Vehicle Detector. If you do not see this option, verify that the current working folder contains a folder named +lidar/+labeler with a file named LidarVehicleDetector.m in it.

Click Automate. The app opens an automation session for the selected signals and displays directions for using the algorithm.

Click Settings, and in the dialog box that opens, modify the parameters if needed and click OK.

Click Run. The app runs the algorithm on each frame of the sequence and labels the detected vehicles with the Vehicle label. After the app completes the automation run, use the slider or arrow keys to scroll through the sequence and visualize the results. Use the zoom, pan, and 3-D rotation options to view and rotate the point cloud. You can manually adjust the results by modifying the detected bounding boxes or adding new bounding boxes.

When you are satisfied with the detected vehicle bounding boxes for the entire sequence, click Accept. You can then continue to manually adjust labels or export the labeled ground truth to the MATLAB workspace.
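
For example, if you export the labels to a workspace variable (the name gTruth below is a placeholder for whatever name you choose in the export dialog), the result is a groundTruthLidar object whose properties you can inspect:

    % Inspect the exported ground truth. The variable name gTruth is a
    % placeholder; use the name you chose when exporting from the app.
    gTruth.LabelDefinitions    % Table of ROI label names and types
    gTruth.LabelData           % Timetable of cuboid labels for each point cloud frame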

You can use the concepts described in this example to create your own custom automation algorithms and extend the functionality of the app.

References

[1] Hesai and Scale. PandaSet. https://scale.com/open-datasets/pandaset