How do I crop an image outside of a box?

33 views (last 30 days)
David Prego on 16 Apr 2024 at 14:56
Commented: David Prego on 19 Apr 2024 at 2:18
I would like to crop the image outside of the box and keep just the content inside the box as an image. I am using the Computer Vision Toolbox example 'Object Detection in a Cluttered Scene Using Point Feature Matching'. The program computes coordinates that are used to draw the outline of the detected object, and these coordinates are stored in newElephantPolygon. I would like to use these coordinates as the cropping area.
The program currently displays this:
I would like it to output this:
The current code I am using:
%% Object Detection in a Cluttered Scene Using Point Feature Matching
% This example shows how to detect a particular object in a cluttered scene,
% given a reference image of the object.
%% Overview
% This example presents an algorithm for detecting a specific object based
% on finding point correspondences between the reference and the target
% image. It can detect objects despite a scale change or in-plane
% rotation. It is also robust to small amount of out-of-plane rotation and
% occlusion.
%
% This method of object detection works best for objects that exhibit
% non-repeating texture patterns, which give rise to unique feature matches.
% This technique is not likely to work well for uniformly-colored objects,
% or for objects containing repeating patterns. Note that this algorithm is
% designed for detecting a specific object, for example, the elephant in
% the reference image, rather than any elephant. For detecting objects of a
% particular category, such as people or faces, see |vision.PeopleDetector|
% and |vision.CascadeObjectDetector|.
% Copyright 1993-2022 The MathWorks, Inc.
%% Step 1: Read Images
% Read the reference image containing the object of interest.
boxImage = imread('stapleRemover.jpg');
figure;
imshow(boxImage);
title('Image of a Box');
%%
% Read the target image containing a cluttered scene.
sceneImage = imread('clutteredDesk.jpg');
figure;
imshow(sceneImage);
title('Image of a Cluttered Scene');
%% Step 2: Detect Point Features
% Detect point features in both images.
boxPoints = detectSURFFeatures(boxImage);
scenePoints = detectSURFFeatures(sceneImage);
%%
% Visualize the strongest point features found in the reference image.
figure;
imshow(boxImage);
title('100 Strongest Point Features from Box Image');
hold on;
plot(selectStrongest(boxPoints, 100));
%%
% Visualize the strongest point features found in the target image.
figure;
imshow(sceneImage);
title('300 Strongest Point Features from Scene Image');
hold on;
plot(selectStrongest(scenePoints, 300));
%% Step 3: Extract Feature Descriptors
% Extract feature descriptors at the interest points in both images.
[boxFeatures, boxPoints] = extractFeatures(boxImage, boxPoints);
[sceneFeatures, scenePoints] = extractFeatures(sceneImage, scenePoints);
%% Step 4: Find Putative Point Matches
% Match the features using their descriptors.
boxPairs = matchFeatures(boxFeatures, sceneFeatures);
%%
% Display putatively matched features.
matchedBoxPoints = boxPoints(boxPairs(:, 1), :);
matchedScenePoints = scenePoints(boxPairs(:, 2), :);
figure;
showMatchedFeatures(boxImage, sceneImage, matchedBoxPoints, ...
matchedScenePoints, 'montage');
title('Putatively Matched Points (Including Outliers)');
%% Step 5: Locate the Object in the Scene Using Putative Matches
% |estgeotform2d| calculates the transformation relating the
% matched points, while eliminating outliers. This transformation allows us
% to localize the object in the scene.
[tform, inlierIdx] = estgeotform2d(matchedBoxPoints, matchedScenePoints, 'affine');
inlierBoxPoints = matchedBoxPoints(inlierIdx, :);
inlierScenePoints = matchedScenePoints(inlierIdx, :);
%%
% Display the matching point pairs with the outliers removed
figure;
showMatchedFeatures(boxImage, sceneImage, inlierBoxPoints, ...
inlierScenePoints, 'montage');
title('Matched Points (Inliers Only)');
%%
% Get the bounding polygon of the reference image.
boxPolygon = [1, 1;...                                 % top-left
              size(boxImage, 2), 1;...                 % top-right
              size(boxImage, 2), size(boxImage, 1);... % bottom-right
              1, size(boxImage, 1);...                 % bottom-left
              1, 1];                                   % top-left again to close the polygon
%%
% Transform the polygon into the coordinate system of the target image.
% The transformed polygon indicates the location of the object in the
% scene.
newBoxPolygon = transformPointsForward(tform, boxPolygon);
%%
% Display the detected object.
figure;
imshow(sceneImage);
hold on;
line(newBoxPolygon(:, 1), newBoxPolygon(:, 2), Color='y');
title('Detected Box');
%% Step 6: Detect Another Object
% Detect a second object by using the same steps as before.
%%
% Read an image containing the second object of interest.
elephantImage = imread('elephant.jpg');
figure;
imshow(elephantImage);
title('Image of an Elephant');
%%
% Detect and visualize point features.
elephantPoints = detectSURFFeatures(elephantImage);
figure;
imshow(elephantImage);
hold on;
plot(selectStrongest(elephantPoints, 100));
title('100 Strongest Point Features from Elephant Image');
%%
% Extract feature descriptors.
[elephantFeatures, elephantPoints] = extractFeatures(elephantImage, elephantPoints);
%%
% Match Features
elephantPairs = matchFeatures(elephantFeatures, sceneFeatures, MaxRatio=0.9);
%%
% Display putatively matched features.
matchedElephantPoints = elephantPoints(elephantPairs(:, 1), :);
matchedScenePoints = scenePoints(elephantPairs(:, 2), :);
figure;
showMatchedFeatures(elephantImage, sceneImage, matchedElephantPoints, ...
matchedScenePoints, 'montage');
title('Putatively Matched Points (Including Outliers)');
%%
% Estimate Geometric Transformation and Eliminate Outliers
[tform, inlierElephantPoints, inlierScenePoints] = ...
estimateGeometricTransform(matchedElephantPoints, matchedScenePoints, 'affine');
figure;
showMatchedFeatures(elephantImage, sceneImage, inlierElephantPoints, ...
inlierScenePoints, 'montage');
title('Matched Points (Inliers Only)');
%%
% Display Both Objects
elephantPolygon = [1, 1;...                                      % top-left
                   size(elephantImage, 2), 1;...                 % top-right
                   size(elephantImage, 2), size(elephantImage, 1);... % bottom-right
                   1, size(elephantImage, 1);...                 % bottom-left
                   1, 1];                                        % top-left again to close the polygon
newElephantPolygon = transformPointsForward(tform, elephantPolygon);
figure;
imshow(sceneImage);
hold on;
% line(newBoxPolygon(:, 1), newBoxPolygon(:, 2), Color='y');
line(newElephantPolygon(:, 1), newElephantPolygon(:, 2), Color='g');
title('Detected Elephant and Box');
  2 comments
Image Analyst on 17 Apr 2024 at 0:07
Why does the image need to be cropped and warped? I see no reason for it. What would you do next if you were able to do it?
DGM on 17 Apr 2024 at 1:37
Edited: DGM on 17 Apr 2024 at 1:42
I don't have the CVT, and I'm not super familiar with all of the referencing/transformation tools, but you may be able to control how the transformation output is cropped within the transformation process itself.
I know these are not the same tools, but at least with imwarp(), it can be controlled by setting the 'outputview' parameter.
% an image
A = imread('wagon.jpg');
imshow(A)
% I don't have an estimated transformation, so I'm just going to make one
% these are the coordinates of the box corners
boxm = [175 146; 424 216; 440 611; 160 610]; % [x y]
% assert that this is where they're supposed to be
% any coordinates that define a rectangle (the output image)
boxf = [1 1; 250 1; 250 400; 1 400]; % [x y]
% this is my made-up transformation to extract part of the wagon box
TF = fitgeotrans(boxm,boxf,'projective');
% this controls the output extents
outview = imref2d(fliplr(range(boxf,1)+1));
% apply everything
B = imwarp(A,TF,'fillvalues',255,'outputview',outview);
imshow(B)
That's not really an answer, but maybe it's something to look for in the documentation.


Accepted Answer

DGM on 19 Apr 2024 at 1:27
Moved: DGM on 19 Apr 2024 at 1:28
I guess I can try this in the forum editor at least.
%% Step 1: Read Images
% Read the target image containing a cluttered scene.
sceneImage = imread('clutteredDesk.jpg');
figure;
imshow(sceneImage);
title('Image of a Cluttered Scene');
%% Step 6: Detect Another Object
% Read an image containing the second object of interest.
elephantImage = imread('elephant.jpg');
figure;
imshow(elephantImage);
title('Image of an Elephant');
%%
% Detect and visualize point features.
scenePoints = detectSURFFeatures(sceneImage);
elephantPoints = detectSURFFeatures(elephantImage);
figure;
imshow(elephantImage);
hold on;
plot(selectStrongest(elephantPoints, 100));
title('100 Strongest Point Features from Elephant Image');
%%
% Extract feature descriptors.
[sceneFeatures, scenePoints] = extractFeatures(sceneImage, scenePoints);
[elephantFeatures, elephantPoints] = extractFeatures(elephantImage, elephantPoints);
%%
% Match Features
elephantPairs = matchFeatures(elephantFeatures, sceneFeatures, MaxRatio=0.9);
%%
% Display putatively matched features.
matchedElephantPoints = elephantPoints(elephantPairs(:, 1), :);
matchedScenePoints = scenePoints(elephantPairs(:, 2), :);
figure;
showMatchedFeatures(elephantImage, sceneImage, matchedElephantPoints, ...
matchedScenePoints, 'montage');
title('Putatively Matched Points (Including Outliers)');
%%
% Estimate Geometric Transformation and Eliminate Outliers
[tform, inlierElephantPoints, inlierScenePoints] = ...
estimateGeometricTransform(matchedElephantPoints, matchedScenePoints, 'affine');
figure;
showMatchedFeatures(elephantImage, sceneImage, inlierElephantPoints, ...
inlierScenePoints, 'montage');
title('Matched Points (Inliers Only)');
%%
% Display Both Objects
elephantPolygon = [1, 1;...                                      % top-left
                   size(elephantImage, 2), 1;...                 % top-right
                   size(elephantImage, 2), size(elephantImage, 1);... % bottom-right
                   1, size(elephantImage, 1);...                 % bottom-left
                   1, 1];                                        % top-left again to close the polygon
newElephantPolygon = transformPointsForward(tform, elephantPolygon);
figure;
imshow(sceneImage);
hold on;
line(newElephantPolygon(:, 1), newElephantPolygon(:, 2), Color='g');
title('Detected Elephant and Box');
% if there isn't a CVT way, i guess you could just use this?
TF = fitgeotrans(newElephantPolygon,elephantPolygon,'projective');
% this controls the output extents
outview = imref2d(size(elephantImage));
% apply everything
extractedelephant = imwarp(sceneImage,TF,'fillvalues',255,'outputview',outview);
figure
imshow(extractedelephant)
Still doesn't seem quite right, though.
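One thing that might be worth trying (just a sketch, not tested here): since estimateGeometricTransform already returns an affine2d object, its inverse can be handed to imwarp directly instead of fitting a new projective transform from the four corner points.
% Hedged alternative sketch: warp the scene into the reference-image frame
% using the inverse of the estimated affine transform (tform maps the
% elephant image into the scene)
invTform = invert(tform);
outview = imref2d([size(elephantImage, 1) size(elephantImage, 2)]);
extractedElephant2 = imwarp(sceneImage, invTform, 'fillvalues', 255, 'outputview', outview);
figure
imshow(extractedElephant2)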
  1 comment
David Prego on 19 Apr 2024 at 2:18
This is what I wanted. Thank you so much :)


More Answers (1)

Taylor on 16 Apr 2024 at 17:22
You can use imcrop.
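For example, something along these lines (an untested sketch, assuming the sceneImage and newElephantPolygon variables from the script in the question) would crop the scene to the axis-aligned rectangle that bounds the detected polygon:
% Hedged sketch: crop to the bounding rectangle of the detected polygon
x = newElephantPolygon(:, 1);
y = newElephantPolygon(:, 2);
rect = [min(x), min(y), max(x) - min(x), max(y) - min(y)]; % [xmin ymin width height]
croppedScene = imcrop(sceneImage, rect);
figure
imshow(croppedScene)
Note that this keeps a rectangular region, so some background around the tilted quadrilateral will remain.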
  5 comments
Taylor on 18 Apr 2024 at 12:58
Use poly2mask instead of drawpolygon and createMask.
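For instance, a minimal sketch of that idea (assuming the variables from the question's script, and a grayscale scene image as in the example; an RGB image would need the mask applied to each channel):
% Hedged sketch: mask out everything outside the detected quadrilateral
maskROI = poly2mask(newElephantPolygon(:, 1), newElephantPolygon(:, 2), ...
    size(sceneImage, 1), size(sceneImage, 2));
maskedScene = sceneImage;
maskedScene(~maskROI) = 255; % white outside the polygon
figure
imshow(maskedScene)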
DGM on 19 Apr 2024 at 0:56
Edited: DGM on 19 Apr 2024 at 1:06
What exactly are you proposing? OP has a set of points which form a non-rectangular quadrilateral in the source image. We should have a projective transformation which maps these points and the enclosed ROI into a close-cropped rectangular output image, as described by the attached images. As I see it, the question is simply how to accomplish this with the CVT (and/or IPT) tools. If there isn't some canonical way of doing this within the CVT tools, then perhaps newElephantPolygon (the moving points) and elephantPolygon (the fixed points) can just be used with fitgeotrans() and imwarp() as I demonstrated.
If the question is instead how to crop the source image to the smallest rectangular region which encloses the non-rectangular ROI, then maybe imcrop()/poly2mask() makes sense, but that doesn't seem like what's being asked.

Iniciar sesión para comentar.

Version

R2024a
