detect

Syntax

bboxes = detect(detector,I)
detectionResults = detect(detector,ds)
[___] = detect(___,roi)
[___] = detect(___,Name=Value)

Description

bboxes = detect(detector,I) detects faces within a single image or an array of images, I, using a pretrained RetinaFace face detector, detector. The detect function returns the locations of detected faces in the input image as a set of bounding boxes.
Note
This functionality requires Deep Learning Toolbox™ and the Computer Vision Toolbox™ Model for RetinaFace Face Detection. You can install the Computer Vision Toolbox Model for RetinaFace Face Detection from Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons.
detectionResults = detect(detector,ds) returns a table containing the predicted face bounding boxes, their associated confidence scores, and the corresponding labels for all the images in the input datastore ds.
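For example, this minimal sketch runs the detector over a datastore of test images. The folder name is hypothetical; substitute a folder containing your own images.

imds = imageDatastore("myFaceImages");   % hypothetical folder of test images
detector = faceDetector;
detectionResults = detect(detector,imds);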
[___] = detect(___,roi) detects faces within the rectangular region of interest roi, in addition to any combination of arguments from previous syntaxes.
[___] = detect(___,Name=Value) specifies options using one or more name-value arguments. For example, Threshold=0.75 specifies a detection threshold of 0.75.
Examples
Read an input image into the MATLAB® workspace.
I = imread("visionteam1.jpg");
Create a face detector object using the faceDetector function. The default configuration of the object loads a small, pretrained RetinaFace deep learning detector for face detection. The small network uses MobileNet-0.25 as the backbone network.
detector = faceDetector

detector = 
  faceDetector with properties:

     ModelName: "small-network"
    ClassNames: face
     InputSize: [640 640]
Detect faces in the image using the detect function of the faceDetector object. The detect function returns bounding boxes, detection scores, and labels for the detected faces.
[bboxes,scores,labels] = detect(detector,I);
Overlay bounding boxes, labels, and scores on the image using the insertObjectAnnotation function.
detectedImg = insertObjectAnnotation(I,"rectangle",bboxes,scores);
Display the detection results.
table(bboxes,scores,labels)
ans=6×3 table
                   bboxes                   scores     labels
    ____________________________________    _______    ______

    571.25    70.582    38.566    61.612    0.99237     face
    333.02    99.076    31.217    40.832    0.98568     face
    217.33    122.22    21.548    31.311    0.96641     face
    107.12    124.86    37.822    47.239    0.99897     face
    510.94     128.6     29.52    38.307    0.97089     face
    648.56    132.91    26.986    38.698    0.99424     face
figure
imshow(detectedImg)
Create a face detector object using the faceDetector function. Specify the detector name as "large-network". This configuration loads a pretrained RetinaFace face detector with ResNet-50 as the backbone network for face detection. This network is deeper and offers improved detection accuracy.
detector = faceDetector("large-network")

detector = 
  faceDetector with properties:

     ModelName: "large-network"
    ClassNames: face
     InputSize: [640 640]
Read an input image into the MATLAB® workspace.
I = imread("boats.png");
Specify a region of interest (ROI) in the image to detect faces.
roi = [5 400 400 200];
Display the image and the ROI.
roiImg = insertObjectAnnotation(I,"rectangle",roi,"ROI");
figure
imshow(roiImg)
Detect faces in the specified ROI using the detect function of the faceDetector object.
[bboxes,scores,labels] = detect(detector,I,roi);
Display the computed bounding boxes, scores, and the corresponding labels as a table.
table(bboxes,scores,labels)
ans=2×3 table
                  bboxes                    scores     labels
    ____________________________________    _______    ______

    253.83    551.54     7.994    10.245    0.94679     face
    222.68    557.97    7.6804    10.024     0.9767     face
Overlay bounding boxes and scores on the image using the insertObjectAnnotation function.
detectedImg = insertObjectAnnotation(roiImg,"rectangle",bboxes,scores);
Display the detection results.
figure
imshow(detectedImg)
Create a face detector object using the faceDetector function. Specify the detector name as "large-network". This configuration loads a pretrained RetinaFace face detector with ResNet-50 as the backbone network for face detection.
detector = faceDetector("large-network");
Create a VideoReader object to read video data from a video file.
reader = VideoReader("handshake_right.avi");
Configure a VideoPlayer object to display the video frames and the face detection results.
videoPlayer = vision.VideoPlayer(Position=[0 0 400 400]);
Read and iterate over each frame in the video using a while loop. Perform these steps to detect faces and display the detection results.

Step 1: Read the current video frame with the readFrame function of the VideoReader object.

Step 2: Detect faces in the video frame using the detect function of the faceDetector object. The detect function returns bounding boxes and detection scores for the detected faces.

Step 3: Overlay bounding boxes and scores on the video frame using the insertObjectAnnotation function.

Step 4: Display the annotated frame using the step function of the VideoPlayer object.
while hasFrame(reader)
    % Step 1
    videoFrame = readFrame(reader);
    % Step 2
    [bboxes,scores] = detect(detector,videoFrame);
    % Step 3
    videoFrame = insertObjectAnnotation(videoFrame,"rectangle",bboxes,scores);
    % Step 4
    step(videoPlayer,videoFrame)
end
Call the release function to free up the resources allocated to the VideoPlayer object.
release(videoPlayer)
Create a face detector object using the faceDetector function. By default, the function uses the RetinaFace detector with a small backbone network for face detection.
detector = faceDetector;
Create a VideoReader object to read video data from a video file.
reader = VideoReader("tilted_face.avi");
Configure a VideoPlayer object to display the video frames and the face detection results.
videoPlayer = vision.VideoPlayer(Position=[0 0 600 600]);
Read and iterate over each frame in the video using a while loop. Perform these steps to detect faces and display the detection results.

Step 1: Read the current video frame with the readFrame function of the VideoReader object.

Step 2: Detect faces in the video frame using the detect function of the faceDetector object. The detect function returns bounding boxes for the detected faces.

Step 3: If the current frame has detected faces, use the helper function helperBlurFaces to blur the faces in the frame. The helper function applies Gaussian filtering to blur the areas of the frame defined by the bounding boxes. This effectively obscures the detected faces.

Step 4: Display the processed frame using the step function of the VideoPlayer object.
while hasFrame(reader)
    % Step 1
    videoFrame = readFrame(reader);
    % Step 2
    bboxes = detect(detector,videoFrame,Threshold=0.2);
    % Step 3
    if ~isempty(bboxes)
        videoFrame = helperBlurFaces(videoFrame,bboxes);
    end
    % Step 4
    step(videoPlayer,videoFrame)
end
Call the release function to free up the resources allocated to the VideoPlayer object.
release(videoPlayer)
The helperBlurFaces function applies Gaussian filtering to the regions defined by each bounding box, which correspond to detected faces.
function I = helperBlurFaces(I,bbox)
for j = 1:size(bbox,1)
    % Crop the region defined by the j-th bounding box
    xbox = round(bbox(j,:));
    subImage = imcrop(I,xbox);
    % Blur the cropped region and write it back into the frame
    blurred = imgaussfilt(subImage,12);
    I(xbox(2):xbox(2)+xbox(4),xbox(1):xbox(1)+xbox(3),1:end) = blurred;
end
end
Input Arguments
Pretrained RetinaFace face detector, specified as a faceDetector object. The face detector has been trained on the WIDER FACE data set.
Test images, specified as one of these values:
A 2-D numeric matrix of the form H-by-W for a grayscale image.
A 3-D numeric array of the form H-by-W-by-3 for an RGB image.
A 4-D numeric array of the form H-by-W-by-C-by-T for a batch of test images.
H and W are the height and width of the images, respectively. C is the number of color channels. The value of C is 1 for grayscale images and 3 for RGB images. T is the number of images in the batch.
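For example, this minimal sketch forms a batch by concatenating two same-size RGB images along the fourth dimension. The second file name is hypothetical; batched images must share the same height, width, and channel count.

detector = faceDetector;
I1 = imread("visionteam1.jpg");
I2 = imread("myGroupPhoto.jpg");   % hypothetical image, same size as I1
batch = cat(4,I1,I2);              % H-by-W-by-3-by-2 batch of test images
bboxes = detect(detector,batch);   % returns a 2-by-1 cell array of boxes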
When the test image size does not match the network input size, the detector resizes the input image to the value of the InputSize property of detector, unless you specify AutoResize as false.
The detector is sensitive to the intensity range of the test images. It was trained on images with an intensity range of [0, 255]. For accurate results, ensure that the test images also have an intensity range of [0, 255].
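For example, if your test image is stored as floating-point data in the range [0, 1], this minimal sketch converts it to uint8 so that its intensity range matches the expected [0, 255] range.

I = im2uint8(I);               % map [0, 1] floating-point data to [0, 255]
bboxes = detect(detector,I);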
Data Types: uint8 | uint16 | int16 | double | single
Datastore of test images, specified as an ImageDatastore object, CombinedDatastore object, or TransformedDatastore object containing the full filenames of the test images. The images in the datastore must be grayscale or RGB images.
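For example, this minimal sketch uses a TransformedDatastore to apply a preprocessing step to every image before detection. The folder name and the preprocessing choice are illustrative.

imds = imageDatastore("myFaceImages");        % hypothetical folder of test images
tds = transform(imds,@(img) im2uint8(img));   % ensure a [0, 255] intensity range
detectionResults = detect(detector,tds);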
Region of interest (ROI) to search, specified as a vector of the form [x y width height]. The first two elements of the vector specify the coordinates of the upper-left corner of a region, and the third and fourth elements specify the size of that region, in pixels. If the input data is a datastore, the detect function applies the same ROI to every image in the datastore.
Note

To specify the ROI to search, you must set the AutoResize value to true, enabling the function to automatically resize the input test images to the network input size.
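For example, this minimal sketch restricts detection to a region in the lower half of a 640-by-640 image, assuming detector and I exist as in the earlier examples. The ROI values are illustrative.

roi = [1 321 640 320];             % illustrative [x y width height] region
bboxes = detect(detector,I,roi);   % requires AutoResize=true (the default)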
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: detect(detector,I,Threshold=0.25) specifies a detection threshold of 0.25.
Detection threshold, specified as a scalar in the range [0, 1]. The function removes detections that have scores less than this threshold value.
To reduce false positives, at the possible expense of missing some detections, increase this value.
To increase the sensitivity of the detector for detecting faces under challenging lighting conditions, pose variations, and occlusion, decrease the detection threshold. However, this might result in false positives.
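For example, this minimal sketch compares a conservative threshold with a permissive one on the same image. The threshold values are illustrative.

bboxesStrict = detect(detector,I,Threshold=0.75);  % fewer detections, fewer false positives
bboxesLoose = detect(detector,I,Threshold=0.25);   % more detections, possibly more false positives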
Strongest bounding box selection, specified as a numeric or logical 1 (true) or 0 (false).

true — Return only the strongest bounding box for each detected face. The detect function calls the selectStrongestBboxMulticlass function, which uses nonmaximal suppression to eliminate overlapping bounding boxes based on their confidence scores.

By default, the detect function uses this call to the selectStrongestBboxMulticlass function.

selectStrongestBboxMulticlass(bboxes,scores, ...
    RatioType="Union", ...
    OverlapThreshold=0.45);

false — Return all detected bounding boxes. You can write a custom function to eliminate overlapping bounding boxes.
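For example, this minimal sketch disables the built-in suppression and then applies selectStrongestBboxMulticlass manually with a stricter overlap threshold. The 0.3 value is illustrative.

[bboxes,scores,labels] = detect(detector,I,SelectStrongest=false);
[bboxes,scores,labels] = selectStrongestBboxMulticlass(bboxes,scores,labels, ...
    RatioType="Union",OverlapThreshold=0.3);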
Minimum region size containing a face, specified as a vector of the form [height width]. Units are in pixels. The minimum region size defines the size of the smallest face in the test image. When you know the minimum size, you can reduce computation time by setting MinSize to that value.
Maximum region size, specified as a vector of the form [height width]. Units are in pixels. The maximum region size defines the size of the largest face in the test image.
By default, MaxSize is set to the height and width of the input image I. To reduce computation time, set this value to the known maximum region size in which to detect a face in the input test image.
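For example, if you know that every face in the test image occupies between roughly 20-by-20 and 200-by-200 pixels, this minimal sketch bounds the search accordingly. The size values are illustrative.

bboxes = detect(detector,I,MinSize=[20 20],MaxSize=[200 200]);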
Minimum batch size, specified as a positive integer. Adjust the MiniBatchSize value to help process a large collection of images. The detect function groups images into minibatches of the specified size and processes them as a batch, which can improve computational efficiency at the cost of increased memory requirements. Decrease the minibatch size to use less memory.
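For example, this minimal sketch lowers the minibatch size when detecting over a large datastore on a memory-constrained machine. The folder name and batch size are illustrative.

imds = imageDatastore("myFaceImages");   % hypothetical folder of test images
detectionResults = detect(detector,imds,MiniBatchSize=8);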
Automatic resizing of the input images to preserve the aspect ratio, specified as a numeric or logical 1 (true) or 0 (false). When you specify AutoResize as true, the detect function resizes images to the nearest InputSize dimension, while preserving the aspect ratio. Specify AutoResize as false when performing image tiling-based inference, or inference at full test image size.
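For example, this minimal sketch runs inference at the full test image size by disabling automatic resizing. You cannot combine this setting with the roi input, which requires AutoResize to be true.

bboxes = detect(detector,I,AutoResize=false);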
Hardware resource on which to run the detector, specified as one of these values:

- "auto" — Use a GPU if Parallel Computing Toolbox™ is installed and a supported GPU device is available. Otherwise, use the CPU.
- "gpu" — Use the GPU. To use a GPU, you must have Parallel Computing Toolbox and a CUDA® enabled NVIDIA® GPU. If a suitable GPU is not available, the function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).
- "cpu" — Use the CPU.
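For example, this minimal sketch forces CPU execution, which can help reproduce results on machines without a supported GPU.

bboxes = detect(detector,I,ExecutionEnvironment="cpu");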
Performance optimization, specified as one of these options:

- "auto" — Automatically apply a number of compatible optimizations suitable for the input network and hardware resource.
- "mex" — Compile and execute a MEX function. This option is available only when using a GPU. Using a GPU requires Parallel Computing Toolbox and a CUDA-enabled NVIDIA GPU. If Parallel Computing Toolbox or a suitable GPU is not available, then the detect function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).
- "none" — Do not use acceleration.
Using the Acceleration options "auto" and "mex" can offer performance benefits on subsequent calls with compatible parameters, at the expense of an increased initial run time. Use performance optimization when you plan to call the function multiple times using new input data.
The "mex"
option generates and executes a MEX function based on
the network and parameters used in the function call. You can have several MEX
functions associated with a single network at one time. Clearing the network variable
also clears any MEX functions associated with that network.
The "mex"
option is available only for input data specified as
a numeric array, cell array of numeric arrays, table, or image datastore. No other
types of datastore support the "mex"
option.
The "mex"
option is available only when you are using a GPU.
You must also have a C/C++ compiler installed. For setup instructions, see Set Up Compiler (GPU Coder).
"mex"
acceleration does not support all layers. For a list of
supported layers, see Supported Layers (GPU Coder).
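For example, this minimal sketch enables MEX acceleration for repeated detection calls on a GPU. Expect the first call to take longer because it compiles the MEX function; subsequent calls with compatible parameters run faster.

bboxes = detect(detector,I,Acceleration="mex");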
Output Arguments
Locations of the detected faces within the input image or images, returned as one of these options:
M-by-4 matrix — Returned when the input is a single test image. M is the number of bounding boxes detected in an image. Each row of the matrix is of the form [x y width height]. The x and y values specify the coordinates of the upper-left corner, and width and height specify the size, of the corresponding bounding box, in pixels.
B-by-1 cell array — Returned when the input is a batch of images, where B is the number of test images in the batch. Each cell in the array contains an M-by-4 matrix specifying the bounding boxes detected within the corresponding image.
Detection confidence scores for each bounding box, returned as one of these options:
M-by-1 numeric vector — Returned when the input is a single test image. M is the number of bounding boxes detected in the image.
B-by-1 cell array — Returned when the input is a batch of test images, where B is the number of test images in the batch. Each cell in the array contains an M-element row vector, where each element indicates the detection score for a bounding box in the corresponding image.
Each confidence score value is in the range [0, 1].
Labels for bounding boxes, returned as one of these options:
M-by-1 categorical vector — Returned when the input is a single test image. M is the number of bounding boxes detected in the image.
B-by-1 cell array — Returned when the input is a batch of test images. B is the number of test images in the batch. Each cell in the array contains an M-by-1 categorical vector containing the class name.
By default, the output label value is "face".
Detection results when the input is a datastore of test images, ds, returned as a table with these columns:

| bboxes | scores | labels |
|---|---|---|
| Predicted bounding boxes, defined in spatial coordinates as an M-by-4 numeric matrix with rows of the form [x y width height], where x and y specify the coordinates of the upper-left corner of the bounding box, and width and height specify its size, in pixels. | Confidence scores of the detected class for each bounding box, returned as an M-by-1 numeric vector with values in the range [0, 1]. | Labels assigned to the bounding boxes, returned as an M-by-1 categorical vector. By default, the value is "face". |
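For example, this minimal sketch extracts the detections for the first image in the datastore from the returned table, assuming each column stores the per-image results in cells.

bboxes1 = detectionResults.bboxes{1};   % M-by-4 bounding boxes for the first image
scores1 = detectionResults.scores{1};   % matching confidence scores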
References
[1] Deng, Jiankang, Jia Guo, Evangelos Ververas, Irene Kotsia, and Stefanos Zafeiriou. “RetinaFace: Single-Shot Multi-Level Face Localisation in the Wild.” In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 5202–11. Seattle, WA, USA: IEEE, 2020. https://doi.org/10.1109/CVPR42600.2020.00525.
[2] Yang, Shuo, Ping Luo, Chen Change Loy, and Xiaoou Tang. “WIDER FACE: A Face Detection Benchmark.” In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 5525–33. Las Vegas, NV, USA: IEEE, 2016. https://doi.org/10.1109/CVPR.2016.596.
Extended Capabilities
C/C++ Code Generation

Usage notes and limitations:

- The roi argument of the detect object function must be a code generation constant (coder.const()) and a 1-by-4 vector.
- The AutoResize name-value argument of the detect object function must be a code generation constant (coder.const()).
- The detect object function supports only the Threshold, SelectStrongest, MinSize, MaxSize, MiniBatchSize, and AutoResize name-value arguments.
- The detect object function does not support code generation for the ds input argument, which specifies a datastore of test images.
GPU Code Generation

Refer to the usage notes and limitations in the C/C++ Code Generation section. The same limitations apply to GPU code generation.
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
Version History
Introduced in R2025a