
Code Generation for Object Detection by Using YOLO v2

This example shows how to generate CUDA® MEX for a you only look once (YOLO) v2 object detector. A YOLO v2 object detection network is composed of two subnetworks: a feature extraction network followed by a detection network. This example generates code for the network trained in the Object Detection Using YOLO v2 Deep Learning example from Computer Vision Toolbox™. For more information, see Object Detection Using YOLO v2 Deep Learning (Computer Vision Toolbox). You can modify this example to generate CUDA MEX for the network imported in the Import Pretrained ONNX YOLO v2 Object Detector example from Computer Vision Toolbox. For more information, see Import Pretrained ONNX YOLO v2 Object Detector (Computer Vision Toolbox).
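
For instance, one minimal way to make that modification (a sketch, not part of the shipped example) is to save the imported detector to a MAT-file and pass that file to the entry-point function shown later in this example. The variable name importedDetector and the MAT-file name below are assumptions for illustration.

% Sketch: importedDetector is assumed to hold the yolov2ObjectDetector
% produced by the Import Pretrained ONNX YOLO v2 Object Detector example.
detector = importedDetector;
save('importedYOLOv2Detector.mat','detector');   % hypothetical file name
matFile = 'importedYOLOv2Detector.mat';          % pass this file to yolov2_detect below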

Third-Party Prerequisites

Required

This example generates CUDA MEX and has the following third-party requirements.

  • CUDA®-enabled NVIDIA® GPU and compatible driver.

Optional

For non-MEX builds such as static or dynamic libraries, or executables, this example has the following additional requirements.

  • NVIDIA CUDA toolkit.
  • NVIDIA cuDNN library.
  • Environment variables for the compilers and libraries. For more information, see Third-Party Hardware (GPU Coder) and Setting Up the Prerequisite Products (GPU Coder).
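
As a sketch of such a non-MEX build (assuming the yolov2_detect entry-point function and the matFile variable defined later in this example), you can create a static library configuration instead of a MEX configuration and run codegen the same way:

% Sketch only: generate a static library instead of a MEX function.
cfgLib = coder.gpuConfig('lib');
cfgLib.TargetLang = 'C++';
cfgLib.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
inputArgsLib = {ones(224,224,3,'uint8'),coder.Constant(matFile)};
codegen -config cfgLib yolov2_detect -args inputArgsLib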

Verify GPU Environment

Use the coder.checkGpuInstall function to verify that the compilers and libraries necessary for running this example are set up correctly.

envCfg = coder.gpuEnvConfig('host');
envCfg.DeepLibTarget = 'cudnn';
envCfg.DeepCodegen = 1;
envCfg.Quiet = 1;
coder.checkGpuInstall(envCfg);

Get Pretrained DAGNetwork

This example uses the yolov2ResNet50VehicleExample MAT-file containing the pretrained network. The file is approximately 98 MB in size. Download the file from the MathWorks® website.

matFile = matlab.internal.examples.downloadSupportFile('vision/data','yolov2ResNet50VehicleExample.mat');
vehicleDetector = load(matFile);
net = vehicleDetector.detector.Network
net = 
  DAGNetwork with properties:

         Layers: [150×1 nnet.cnn.layer.Layer]
    Connections: [162×2 table]
     InputNames: {'input_1'}
    OutputNames: {'yolov2OutputLayer'}

The DAG network contains 150 layers, including convolution, ReLU, and batch normalization layers, and the YOLO v2 transform and YOLO v2 output layers. To display an interactive visualization of the deep learning network architecture, use the analyzeNetwork (Deep Learning Toolbox) function.

analyzeNetwork(net);

The yolov2_detect Entry-Point Function

The yolov2_detect.m entry-point function takes an image input and runs the detector on the image using the deep learning network saved in the yolov2ResNet50VehicleExample.mat file. The function loads the network object from the yolov2ResNet50VehicleExample.mat file into a persistent variable yolov2Obj and reuses the persistent object on subsequent detection calls.

type('yolov2_detect.m')
function outImg = yolov2_detect(in,matFile)

%   Copyright 2018-2021 The MathWorks, Inc.

persistent yolov2Obj;

if isempty(yolov2Obj)
    yolov2Obj = coder.loadDeepLearningNetwork(matFile);
end

% Call to detect method
[bboxes,~,labels] = yolov2Obj.detect(in,'Threshold',0.5);

% Convert categorical labels to cell array of character vectors
labels = cellstr(labels);

% Annotate detections in the image.
outImg = insertObjectAnnotation(in,'rectangle',bboxes,labels);
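
Before generating code, you can optionally run the entry-point function directly in MATLAB to confirm that the detector works. A minimal sketch, assuming the highway_lanechange.mp4 video used later in this example is on the MATLAB path:

% Optional sketch: run the entry-point function in MATLAB on one video frame.
vr = VideoReader('highway_lanechange.mp4');
frame = imresize(readFrame(vr),[224 224]);   % resize to the network input size
detected = yolov2_detect(frame,matFile);
imshow(detected)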

Run MEX Code Generation

To generate CUDA code for the entry-point function, create a GPU code configuration object for a MEX target and set the target language to C++. Use the coder.DeepLearningConfig function to create a cuDNN deep learning configuration object and assign it to the DeepLearningConfig property of the GPU code configuration object. Run the codegen command, specifying an input size of 224-by-224-by-3. This value corresponds to the input layer size of the YOLO v2 network.

cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
cfg.GenerateReport = true;
inputArgs = {ones(224,224,3,'uint8'),coder.Constant(matFile)};

codegen -config cfg yolov2_detect -args inputArgs
Code generation successful: View report

Run Generated MEX

Set up the video file reader and read the input video. Create a video player to display the video and the output detections.

videoFile = 'highway_lanechange.mp4';
videoFreader = VideoReader(videoFile);
depVideoPlayer = vision.DeployableVideoPlayer('Size','Custom','CustomSize',[640 480]);

Read the video input frame-by-frame and detect the vehicles in the video using the detector.

cont = hasFrame(videoFreader);
while cont
    I = readFrame(videoFreader);
    in = imresize(I,[224,224]);
    out = yolov2_detect_mex(in,matFile);
    depVideoPlayer(out);
    % Exit the loop if there are no more frames or if the video player figure window is closed
    cont = hasFrame(videoFreader) && isOpen(depVideoPlayer);
end
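
When the loop finishes, you can release the video player and clear the generated MEX function from memory, which also clears the network held in the persistent variable. A short sketch:

% Release the video player and clear the generated MEX function.
release(depVideoPlayer);
clear yolov2_detect_mex;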
