Image Processing: Find Edge with Highest Contrast

Is it possible to scan an image to find only the edge with the highest dynamic range to its left and right? At first I want to find a vertical edge that has a dark collection of pixels to its left, and a bright collection of pixels to its right (i.e. building | edge | sky). I will then extend this to "sky | edge | building", and the same for the horizontal direction.
The edges themselves should be strong edges, where an object meets a background, for example. It should not choose edges that are part of the texture of an object. I have been experimenting with segmentation to remove some of the detail first, but I'm not convinced that this is the most efficient approach...
I have tried using edge operators such as Sobel in the 'vertical' mode, but these operators only work on grayscale matrices... Since I lose a lot of pixel information when converting to grayscale, I would prefer to come up with a way of processing this on a colour image directly...
Your suggestions would be gratefully appreciated!

Answers (2)

David Young on 16 Sep 2011
I think there are two main issues here. The first is how to distinguish between "texture" and "the boundary of an object". There's no absolutely reliable way to do this (when is an object an object and not a surface marking?) but the general way to tackle this with fairly straightforward methods is to use the idea of scale, implemented via image smoothing. This gets extended to the powerful concept of scale-space, which gets applied in lots of ways from resolution pyramids to SIFT features, so it's well worth getting to grips with.
Here's some code to look at, as a simple implementation of what you asked about. It identifies the position of maximum left-right contrast at a scale determined by the value of sigma - try changing this to see what happens. The code can be modified to find more large-contrast locations if necessary, and the change to horizontal rather than vertical edges is trivial.
im = imread('pout.tif'); % data
% smooth the image
sigma = 8; % how much to smooth
hmasksize = ceil(2.6 * sigma); % reasonable half mask size relative to sigma
masksize = 2*hmasksize + 1; % mask size odd number so it has a centre
mask = fspecial('gaussian', masksize, sigma);
imsmooth = conv2(double(im), mask, 'valid');
% find horizontal differences, to pick out vertical edges
hordiffs = imsmooth(:, 1:end-1) - imsmooth(:, 2:end);
% find the biggest absolute difference
[colmxs, rs] = max(abs(hordiffs),[],1);
[mx, c] = max(colmxs);
r = rs(c);
% correct for the trimming during the convolution
c = c + hmasksize;
r = r + hmasksize;
% show the peak location
imshow(im);
hold on;
plot(c, r, 'r^');
hold off;
The second issue is how to handle colour images. The two main possibilities are (a) to form a single intensity image, and then proceed as above, or (b) to independently find edges for the different colour planes, and then somehow combine them.
To do (a), assuming you're starting from an RGB image (converted to double data type) you could combine the colours something like this:
im = k1*rgb(:,:,1) + k2*rgb(:,:,2) + k3*rgb(:,:,3);
where k1, k2 and k3 are constants to be chosen by experiment or using machine learning. These will depend on the kind of colour contrasts you want to find - for example if the blue component is particularly important, k3 would be larger than the others.
For approach (b), you could apply edge detection separately to the r, g and b components (or to the h, s and v components, or whatever you want to use) and then see which result has the strongest edges.
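A minimal sketch of approach (b), assuming an RGB image already converted to double (peppers.png is just a stock test image): take the left-right differences in each colour plane separately, then keep the strongest response across the three channels at each location.

```matlab
% sketch of approach (b): per-channel edge strength, combined by maximum
rgb = im2double(imread('peppers.png'));                 % any RGB test image
chandiffs = abs(rgb(:, 1:end-1, :) - rgb(:, 2:end, :)); % left-right contrast per channel
combined = max(chandiffs, [], 3);  % strongest response over the three channels
[colmxs, rs] = max(combined, [], 1);
[~, c] = max(colmxs);
r = rs(c);                         % (r, c) is the strongest colour edge location
```

Smoothing each channel first, as in the earlier code, would suppress texture responses in the same way.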
There are more complex variants on all of this - to say more would need a little research project on the particular data you're working with.

11 comments

Philip on 16 Sep 2011
Thank you so much for your in-depth response! I have tried the code you provided, and the initial results look quite promising. I'm embarrassed to say that I don't know anything about the techniques you describe, but will try to find some papers that might be useful. Can you recommend any reading material?
Is it also possible to rank the edge points found by the above code? I am wondering if I can perform some tests on the "strongest" edge and, if it's not suitable, then move on to the next one?
Thank you again for your kind help. I am very grateful!!
Sean de Wolski on 16 Sep 2011
Very interesting David, +1. Philip - perhaps you could post a sample image?
Philip on 16 Sep 2011
Here is one image, where it currently detects the tree and the sky (obviously) instead of the building...
http://imageshack.us/photo/my-images/265/nh47.jpg/
I just need a rule in there to say, perhaps, that only straighter edges (such as the building) should be considered...
Philip on 18 Sep 2011
In order to avoid the algorithm picking up tree edges, I have come up with the following strategy, but I need a bit of help with sorting the data from the above algorithm so that it goes to the second maximum value in 'hordiffs' if the first selection is not good.
1. I have used 'bwlabel' to uniquely label all connected edges (these are the only ones I am interested in).
2. When an edge is found from the above algorithm, I find the corresponding label for that edge.
3. If the number of edges that make up that label is less than a certain threshold then re-run the procedure.
The other problem with this is that the next run is likely to find another tree edge, and this will happen many times until eventually there are no more tree edges left to consider, so the overhead is huge. Is there a better way to do this?
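One way to get a ranked list of candidates from David's code, rather than only the single maximum, is to sort all the contrast values once and walk down the list until a candidate passes the acceptance test. This is only a sketch: `isGoodEdge` is a placeholder for whatever check is applied (e.g. the connected-edge length test described in the steps above), and `hordiffs`/`hmasksize` come from the earlier code.

```matlab
% rank all left-right contrasts in descending order (one sort, not repeated re-runs)
[vals, idx] = sort(abs(hordiffs(:)), 'descend');
[rcand, ccand] = ind2sub(size(hordiffs), idx);
for k = 1:numel(vals)
    r = rcand(k) + hmasksize;   % undo the 'valid' convolution trim
    c = ccand(k) + hmasksize;
    if isGoodEdge(r, c)         % placeholder acceptance test, to be defined
        break;                  % accept the first candidate that passes
    end
end
```

Note that neighbouring pixels of the same strong edge will appear consecutively in the ranking, so in practice one might also suppress a small neighbourhood around each rejected candidate.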
bym on 18 Sep 2011
maybe use the Hough transform to pick lines of interest and confine your search along those line(s)
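A sketch of that route using the standard Image Processing Toolbox Hough functions on a binary edge map (the filename and the threshold values here are illustrative; near-vertical lines have theta near 0 in this parameterisation):

```matlab
gray = rgb2gray(imread('nh47.jpg'));      % the photo above (filename illustrative)
bw = edge(gray, 'canny');                 % binary edge map
[H, theta, rho] = hough(bw, 'Theta', -10:0.5:10); % restrict to near-vertical lines
peaks = houghpeaks(H, 5);                 % the 5 strongest line candidates
lines = houghlines(bw, theta, rho, peaks, 'MinLength', 40);
% long, straight, near-vertical segments (building edges) survive;
% short, fragmented tree edges tend not to
```

The search for the highest-contrast edge could then be confined to the pixels along the returned line segments.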
Image Analyst on 18 Sep 2011
Your steps 1,2,3 don't make sense. Steps 1 and 2 are essentially the same - assigning labels to edges. Step 3 doesn't make sense because each set of connected pixels that makes up an edge has just one label number - you don't have multiple edges per label. Yes, there are a number of approaches that would probably be better and more robust than your ad hoc approach. Too many to list here. Perhaps this approach might work for you: http://ww1.ucmss.com/books/LFS/CSREA2006/IPC4735.pdf but I know there are many, many other methods to classify objects at a high level (building, tree, sky, etc.)
Philip on 19 Sep 2011
Thank you both for your responses. I am not familiar with the Hough transform, so I'll look into this now and see how it can be applied.
Image Analyst - Actually, when I loop through the labelled image and "find" each label number from 1:max, I can use the quantity output to tell me how many edges appear with that value. This is what I proposed for step 3. Step 2 differs from step 1 because I am finding the unique label number based on the edge that was found from David's example, whereas step 1 simply labels all edges. I admit, however, that this is an extremely naive attempt, but I have been playing around with segmentation techniques for a while with no success. The problem I have is that performing edge detection on these images still gives me too many "inside" lines, whereas I am generally only interested in the object edges. So I would still require a way to discard and accept certain edges. Can you think of any other approaches besides segmentation that might work for classifying objects?
David's approach seems to be the best, as for the 50 images I have tested it on, it has worked for 46 of them really well. The problem is simply that I would like to rank the suggestions from David's algorithm so that I can apply certain checks to the results. If it passes the checks it can be accepted, but if not, it should move to the next suggestion.
Image Analyst on 19 Sep 2011
There is no "quantity output" from regionprops. There is an "Area" and a "PixelIdxList." You must be saying that each pixel in a labeled region is a separate, individual edge, which is pretty non-standard usage of the term "edge."
I'm also not sure why David blurred in 2D and then picked out the strongest vertical edge rather than just doing a 1D convolution and finding the strongest edge. Maybe to suppress some noise that might give spurious edges?
Philip on 19 Sep 2011
I think we may be mixing words here... I do not use regionprops to do this - rather, 'bwlabel' as mentioned. As I see it, the 'num' output can be considered a quantity because it is the number of connected objects found. I then refer to each connected edge object separately, but note that some of these objects will vary in size. The object corresponding to the edge running down the side of a building is likely to be longer than that of any edge associated with trees, as trees seem to be largely made up of multiple objects, rather than a singleton.
Yes, I believe the idea of blurring in 2D is to suppress noise and to reduce spurious edges...
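The object-size idea above can be sketched directly with bwlabel: label the binary edge map, count the pixels in each connected object, and keep only the objects above a length threshold (the variable `edgeMap` and the threshold of 50 pixels are illustrative):

```matlab
[labels, num] = bwlabel(edgeMap);   % edgeMap: binary edge image; num = object count
counts = histc(labels(:), 1:num);   % pixels per connected edge object
keep = find(counts >= 50);          % keep only long objects (threshold illustrative)
longEdges = ismember(labels, keep); % mask of building-like edges only
```

Restricting the contrast search to `longEdges` would then discard the short, fragmented tree objects in one pass rather than re-running the procedure repeatedly.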
David Young on 22 Sep 2011
@Image Analyst: You're right - I needn't have done 2D blurring - 1D would have been sufficient for finding vertical edges. However, if edges at other orientations were needed, I guess the 2D blurred image might be useful. (The reason for blurring at all is to try to distinguish between "texture" and "object boundary" in the hope that the former is characterised by a smaller spatial scale.)
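For the vertical-edge-only case, the change is small: replace the square Gaussian mask with a 1-D horizontal one (a sketch, reusing `masksize` and `sigma` from the code above).

```matlab
mask1d = fspecial('gaussian', [1 masksize], sigma);  % 1-D horizontal Gaussian
imsmooth = conv2(double(im), mask1d, 'valid');       % smooths along rows only
% the rest of the earlier code (hordiffs, the max search) is unchanged,
% except that only the column index needs the hmasksize correction
```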
David Young on 22 Sep 2011
@Philip: I support the suggestion of having a look at the Hough transform. It's a very useful idea to know about, at least.


srinivasan on 9 Jan 2014


How can I compare the extracted features with the image in order to perform object detection?
