How do I compare two images?

Hi,
I have two images (attached: A6(alg).jpg).
One was segmented manually and the other with an automated method. Both images segment the same interface, but the automated one is cropped for faster computation.
I am looking for a method to compare the two images so I can estimate the accuracy of the automated segmentation. How can I compare them? For example, how can I tell whether the red pixel locations in both images are the same, and if not, how do I quantify their differences?
Any ideas?
Thank you

Answers (2)

Walter Roberson on 18 Mar 2019

1 vote

If the two images are at the same scale, use xcorr2 to find where the second image best fits into the first; after that you can do whatever comparisons you need using indexing.
If the images are not at the same scale (e.g., the second one looks like it might be higher resolution), you would need to do image registration to find the best match.
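The xcorr2 approach might be sketched as follows, assuming both images are at the same scale and converted to 2-D grayscale first (fullImg and cropImg are illustrative variable names):

```matlab
% Assumes fullImg and cropImg are images at the same scale.
full2 = im2double(rgb2gray(fullImg));   % skip rgb2gray if already 2-D
crop2 = im2double(rgb2gray(cropImg));

% Cross-correlate; the peak marks where the crop best fits the full image.
c = xcorr2(full2 - mean(full2(:)), crop2 - mean(crop2(:)));
[~, idx] = max(c(:));
[peakRow, peakCol] = ind2sub(size(c), idx);

% Top-left corner of the matched region in the full image.
rowOffset = peakRow - size(crop2, 1) + 1;
colOffset = peakCol - size(crop2, 2) + 1;

% Extract the matching region for pixel-wise comparison by indexing.
matched = full2(rowOffset : rowOffset + size(crop2,1) - 1, ...
                colOffset : colOffset + size(crop2,2) - 1);
```

Subtracting the mean before correlating reduces the bias toward bright regions; normxcorr2 is often a more robust alternative when intensities differ between the two images.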

5 comments

@Walter. xcorr2 seems promising, but I have hit a few hurdles.
First of all, the cropped image does indeed have a different resolution. How can I use image registration to get the difference, in pixels, of the red contour?
When I use the xcorr2 command I get the error:
Error using conv2
N-D arrays are not supported.
Error in xcorr2 (line 26)
c = conv2(a, rot90(conj(b),2));
I call it as
c = xcorr2(a,b)
where a and b are the two images respectively.
Walter Roberson on 19 Mar 2019
You probably need to crop down to the central area and then use rgb2gray to get a 2-D array that can be put through xcorr2. However, I do not think xcorr2 is suitable for operating on images at different scales.
Image Analyst on 19 Mar 2019
I also don't think xcorr2() is needed. You know where your automated algorithm cropped the image, so you can simply use that information to extract the same rectangular ROI from your original, full-sized image, and THEN do the comparison. However, I recommend using the method I mentioned rather than trying to find the red lines in each of the two images. I mean, you already have the coordinates of the red lines anyway, since you used something like plot() to put them up there. You just need to subtract the cropping offset from the first one. You can use the (x,y) coordinates of the red lines with poly2mask() to create a binary image from which you compute the Sørensen–Dice similarity index. I think I have a demo if you want it.
Stelios Fanourakis on 19 Mar 2019
Yes please, send me the demo.
@Walter. Since the two images I need to compare are not of the same dimensions, I used imregister:
imshowpair(a,e)
[optimizer,metric] = imregconfig('multimodal');
movingRegisteredDefault = imregister(a,e,'affine',optimizer,metric);
imshowpair(a,e) works fine, but imregister does not. I get the error:
Error using imregtform>parseInputs (line 268)
The value of 'MovingImage' is invalid. All dimensions of the moving image should be greater than 4.
Error in imregtform (line 124)
parsedInputs = parseInputs(varargin{:});
Error in imregister (line 119)
tform = imregtform(varargin{:});
What does that mean?
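The error usually means imregister received an M-by-N-by-3 RGB array and treated it as a 3-D volume whose third dimension (3) is not greater than 4. A plausible fix, following the variable names in the snippet above, is to register grayscale versions:

```matlab
% Collapse the RGB arrays to 2-D intensity images before registration.
aGray = rgb2gray(a);
eGray = rgb2gray(e);

[optimizer, metric] = imregconfig('multimodal');
movingRegistered = imregister(aGray, eGray, 'affine', optimizer, metric);
imshowpair(movingRegistered, eGray)
```

The resulting affine transform can then be applied to masks or coordinates with imwarp or transformPointsForward if the comparison needs to stay in the original color images.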


Image Analyst on 18 Mar 2019

0 votes

You have two ways of computing the segmentation. Which do YOU consider to be more accurate? If you want to compute accuracy, you must have some ground truth - some segmentation that YOU DEFINE to be the absolutely 100% correct answer. I'm assuming you think the manually traced one is the ground truth and want to see how well the automatic algorithm matches the manual one. To do that, you first need to crop out the regions so that both images have the same field of view (all corners point to the same physical points in the subject/sample in both images). Now you can crop the segmented (binary) images the same way and compare them with a similarity index, for example, the Sørensen–Dice coefficient or friends. See this link.
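A sketch of that pipeline, assuming the crop rectangle of the automated image within the manually segmented one is known (rect and the binary-image variable names are illustrative):

```matlab
% rect = [xmin ymin width height] of the automated crop within
% the full-size manually segmented image (assumed known).
manualCropped = imcrop(manualBinary, rect);

% If the automated image is at a different resolution, resample it
% onto the manual crop's grid before comparing.
autoResized = imresize(autoBinary, size(manualCropped), 'nearest');

% Similarity indices from the Image Processing Toolbox.
d = dice(logical(manualCropped), logical(autoResized));     % Sørensen–Dice
j = jaccard(logical(manualCropped), logical(autoResized));  % Jaccard
```

Nearest-neighbor interpolation keeps the resized image binary; bilinear or bicubic would introduce fractional values that are no longer a valid mask.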

4 comments

Stelios Fanourakis on 19 Mar 2019
@Image Analyst. In case the images are not the same size, what command can I apply to get around this obstacle?
Stelios Fanourakis on 19 Mar 2019 (edited: 19 Mar 2019)
Is there a way to retrieve the [x1,y1,x2,y2] coordinates of an image?
Actually, there is a problem here. The dimensions of the images were different to begin with: the automated segmented image is cropped from the original, full-size, unsegmented image, while the manually segmented image (the ground truth) comes from a shorter version of that original. Now I have to compare the auto-segmented image with the manually segmented one, and they come from images of different dimensions. How can I solve this issue?
@Image Analyst
I tried your Dice suggestion and, after converting the images to double, I got the results below. How do I interpret them as a similarity index? Does 0 mean no similarity, and anything above it some degree of similarity? What are these numbers?
Finally, in the overall images, how do I know the similarity index for only the red pixels of the contour?
>> similarity = dice(w,q);
>> disp(similarity)
0
0
0
⋮   (255 values in all: mostly 0, the rest below 0.016)
0.1467
Stelios Fanourakis on 19 Mar 2019
@Image Analyst.
As I came to realize, imregister works only on grayscale images, so it won't let me use the red channel for the contour.
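One workaround, since imregister wants a single-channel input, is to turn the red contour itself into a 2-D binary mask first and then register or compare the masks. The thresholds below are illustrative and would need tuning to the actual overlay color:

```matlab
% Isolate "red" pixels: strong red channel, weak green and blue.
R = rgbImage(:,:,1);
G = rgbImage(:,:,2);
B = rgbImage(:,:,3);
redMask = R > 150 & G < 100 & B < 100;   % tune thresholds to your overlay

% redMask is a 2-D logical array, so it can go through imregister,
% xcorr2, or dice() without the N-D / RGB errors above.
```

Building one such mask per image also answers the earlier question of restricting the similarity index to the red contour pixels only.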


Asked: 18 Mar 2019
Last comment: 19 Mar 2019
