
detect

Detect lanes in lidar point clouds

Since R2023b

Description


ld = detect(detector,ptCloud) detects lanes within a point cloud, ptCloud, using detector, a lane detector based on a lidar lane detection network utilizing global feature correlation (LLDN-GFC) [1]. The function returns the locations of the detected lane points, ld, as a set of x-, y-, and z-coordinates.

[ld,labels] = detect(detector,ptCloud) returns a categorical array of the labels assigned to the detected lane points. You define the labels for lane classes during training.
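For example, this sketch (assuming detector and ptCloud are created as in the Examples section) summarizes how many detected lane points belong to each lane class.

% Detect lane points and their class labels, then tally the classes.
[ld,labels] = detect(detector,ptCloud);
summary(labels)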

detectionResults = detect(detector,ds) detects lanes within a set of point clouds, ds.

[___] = detect(___,Name=Value) specifies options using one or more name-value arguments in addition to any combination of arguments from previous syntaxes. For example, ExecutionEnvironment="cpu" specifies to use the CPU to detect lanes within an input point cloud.

Note

This functionality requires Deep Learning Toolbox™, Lidar Toolbox™, and the Automated Driving Toolbox™ Model for Lidar Lane Detection support package. You can download and install the Automated Driving Toolbox Model for Lidar Lane Detection from Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons.

Examples


Detect Lanes in Point Cloud

Specify the name of the pretrained LLDN-GFC lane detector.

name = "lldn-gfc-klane";

Create a lane detector by using the pretrained LLDN-GFC deep learning network.

detector = lidarLaneDetector(name);

Read a test point cloud and detect lanes in it.

ptCloud = pcread("highway.pcd");
laneDetections = detect(detector,ptCloud);

Display the detection results.

figure
ax = pcshow(ptCloud);
set(ax,XLim=[-80 80],YLim=[-40 40])
zoom(ax,3);
hold on
plot3(laneDetections(:,1),laneDetections(:,2),laneDetections(:,3),"*",MarkerSize=2,Color="r")

Input Arguments


detector

LLDN-GFC lane detector, specified as a lidarLaneDetector object.

ptCloud

Input point cloud, specified as a pointCloud object. This object must contain the locations and intensities necessary to render the point cloud.

ds

Collection of point clouds, specified as an array of pointCloud objects, a cell array of pointCloud objects, or a valid datastore object. You must set up the datastore such that the read function of the datastore object returns a pointCloud object. For more information on creating datastore objects, see the datastore function.
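For example, this sketch creates a file datastore from a folder of PCD files (the folder name pointCloudData is illustrative) and passes it to detect.

% Read each PCD file in the folder as a pointCloud object.
ds = fileDatastore("pointCloudData",ReadFcn=@pcread,FileExtensions=".pcd");
detectionResults = detect(detector,ds);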

Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: detect(detector,ptCloud,ExecutionEnvironment="cpu") uses a CPU to detect lanes within an input point cloud.

MiniBatchSize

Size of the mini-batches, specified as a positive integer. Use the MiniBatchSize argument to process a large collection of point clouds. The function groups point clouds into mini-batches and processes them as a single batch to improve computational efficiency. Increase the mini-batch size to decrease processing time. Decrease the size to use less memory.
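For example, this call (with an illustrative mini-batch size of 16) processes the point clouds in a datastore ds in batches.

detectionResults = detect(detector,ds,MiniBatchSize=16);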

ExecutionEnvironment

Hardware resource on which to run the detector, specified as "auto", "gpu", or "cpu".

  • "auto" — Use a GPU, if available. Otherwise, use the CPU.

  • "gpu" — Use the GPU. To use a GPU, you must have Parallel Computing Toolbox™ and a CUDA®-enabled NVIDIA® GPU. If a suitable GPU is not available, the function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).

  • "cpu" — Use the CPU.

Flag to return the height information of the detected lanes, specified as a logical 1 (true) or 0 (false). When you set this flag to true, the function computes the approximate z-coordinate values of the detected lane points.

Output Arguments

collapse all

ld

Locations of the lanes detected within the point cloud, returned as an M-by-3 matrix. M is the number of detected lane points. Each row of the matrix represents the [x y z] coordinates of a lane point.

labels

Labels of the detected lane points, returned as an M-by-1 categorical array. M is the number of detected lane points. You define the class names used to label the lanes when you train the lane detector.

Note

The pretrained LLDN-GFC lane detector model can detect a maximum of six lanes, and it assumes that the ego vehicle is placed such that there are three lane boundaries on either the left or right side of it. Thus, the output class labels, lane3 and lane4, represent the left and right lane boundaries of the ego vehicle, respectively.
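For example, assuming ld and labels are outputs of a detect call on a point cloud, this sketch extracts the ego vehicle lane boundary points by using the lane3 and lane4 class labels described in the note.

% Left and right lane boundaries of the ego vehicle.
egoLeftLane = ld(labels == "lane3",:);
egoRightLane = ld(labels == "lane4",:);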

detectionResults

Detection results, returned as a two-column table with columns named laneDetections and labels. Each row of the table corresponds to a point cloud from the input point cloud array, point cloud cell array, or datastore. For each row, the laneDetections entry contains an M-by-3 matrix of [x y z] lane point coordinates, and the labels entry contains an M-by-1 categorical array of lane point class labels. M is the number of detected lane points in the corresponding point cloud.
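For example, assuming the table stores the per-point-cloud results in cell entries, this sketch retrieves the detections and labels for the first point cloud in the collection.

% Results for the first point cloud (assumes cell-valued table variables).
ld1 = detectionResults.laneDetections{1};
labels1 = detectionResults.labels{1};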

To evaluate the detection results, use the evaluate function.

References

[1] Paek, Dong-Hee, Seung-Hyun Kong, and Kevin Tirta Wijaya. “K-Lane: Lidar Lane Dataset and Benchmark for Urban Roads and Highways.” In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 4449–58. New Orleans, LA, USA: IEEE, 2022. https://doi.org/10.1109/CVPRW56347.2022.00491.

Version History

Introduced in R2023b