Partially blurring an image with an averaging filter
I created this function in order to partially blur an image:
function [ T ] = floutage(I,XA,YA,XB,YB)
H = fspecial('average',[11 11]);
t = 0;
for i = XA:XB
    for j = YA:YB
        t = imfilter(I(i,j),H);
        I(i,j) = t;
    end
end
T = I;
end
and I call it in the script like this:
T1=floutage(Iref,10,10,350,350);
figure
imshow(T1);
but the result is not a blur but a black box:

Answers (2)
KALYAN ACHARJYA
on 5 Feb 2021
Edited: KALYAN ACHARJYA on 5 Feb 2021
Have you ever checked what the following line means?
H=fspecial('average',[11 11]);
It creates an 11x11 kernel with every value equal to 0.008264... Since you are averaging over 11x11 pixels, each element is 1/(11*11) = 0.008264.
You want to average 11x11 data elements, but the image data "I" is being passed one element at a time as I(i,j), which is a single pixel.
So you are convolving a single element with an 11x11 kernel. In that case imfilter pads the missing elements around I(i,j) with zeros ("0") so that the data being convolved is also 11x11. The result is nearly 0 (at most about 2.1 in this case), hence the 10-to-350 pixel range comes out as a black region.
Consider the maximum pixel value in the grayscale (uint8) case, which is 255:
>> t=imfilter(255,H)
t =
2.10743801652893
In grayscale that value is nearly black. This is why a black section appears in that part of the result image. Please resolve this issue first.
Suggestion: you may avoid the for loop here; see the blockproc function.
Also make sure the kernel is moved across the entire image (or region of interest), so that every neighborhood gets averaged, rather than filtering one pixel at a time.
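As an illustration of that suggestion, here is a minimal loop-free sketch (assuming Iref is the grayscale image from the question and reusing its 10:350 bounds): blur the whole image once with imfilter, then copy only the region of interest back into a copy of the original.
H = fspecial('average',[11 11]);
blurred = imfilter(Iref, H, 'replicate');    % blur the full image in one call
T = Iref;
T(10:350,10:350) = blurred(10:350,10:350);   % keep the blur only inside the region
imshow(T)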
Learning is more important than getting the full code
Good Luck!
Michael Stokes
on 1 Jul 2021
Hey Gahit,
From what I see, it looks like you're trying to write a function that blurs just a portion of an image. Kalyan gave a good description of your problem, which was that you were passing a single pixel to imfilter. In your case, imfilter "pads" this pixel to create an 11x11 matrix because the convolution matrix you passed in was 11x11. It accomplishes this by replacing the "missing" pixels with the value of zero (referred to as "padding"). Here's a good explanation of why we need to pad images:
https://d2l.ai/chapter_convolutional-neural-networks/padding-and-strides.html
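As a small sketch of what that zero padding does (a toy example with padarray, not part of the original code): a single pixel of value 255 padded out to 11x11 is almost entirely zeros, so its average matches the ~2.1 value Kalyan computed.
p = padarray(255, [5 5], 0, 'both');   % pad a single pixel into an 11x11 block surrounded by zeros
sum(p(:)) / numel(p)                   % 255/121 ≈ 2.1074, the near-black value from above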
Here's my own solution to your problem.
function [ T ] = floutage(I,XA,YA,XB,YB)
% Create the averaging (convolution) kernel
H = fspecial('average',[11 11]);
% Pad the image so the convolution window can extend past the original
% image boundaries; conv2 needs single/double input, so convert here
% (assigning the result back into T below restores the original class).
paddedImg = padarray(double(I), [5 5], 0, 'both');
T = I;
% Overwrite the selected portion of the output image with its convolution.
% The offsets become XA:XB+10 and YA:YB+10 because the padding shifted the
% image by 5 rows and 5 columns and we need 5 extra pixels on each side.
T(XA:XB,YA:YB) = conv2(paddedImg(XA:XB+10, YA:YB+10), H, 'valid');
end
One thing to be aware of is that when we specify 'same', conv2 effectively operates on a matrix that is larger than the one you pass in: it pads its input with zeros to handle positions where the convolution matrix would overlap the input boundary. So if we were to simply do conv2(I(XA:XB, YA:YB), H, 'same'), the input matrix I(XA:XB, YA:YB) would be padded with zeros, equivalent to padarray(I(XA:XB, YA:YB), [5 5], 0, 'both').
From your example, I think you don't want to use this zero padding. So instead of specifying 'same', we specify 'valid', so the convolution matrix won't pad the input matrix. However, this means that the output matrix will be smaller than the input matrix because we only apply the convolution matrix to the center portion of the input matrix. This means we need to pass in a larger region of the input matrix if we want the output matrix to be the same size (XA:XB, YA:YB). So, instead of passing in I(XA:XB,YA:YB), I pass in I(XA-5:XB+5,YA-5:YB+5).
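A quick toy example (my own, not part of the function above) showing the size difference between the two options:
A = magic(5);                 % 5x5 input
K = ones(3)/9;                % 3x3 averaging kernel
size(conv2(A, K, 'same'))     % 5x5: output matches the input, borders use zero padding
size(conv2(A, K, 'valid'))    % 3x3: only positions where K fits entirely inside A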
Finally, I wanted to handle the case where the values XA-5, XB+5, YA-5, or YB+5 fall outside of the input matrix I. To do this, I padded the original image with zeros (paddedImg = padarray(I, [5 5], 0, 'both')) and adjusted the offsets of the input matrix region that we passed to conv2 (paddedImg(XA:XB+10, YA:YB+10)).
This way, we use the input matrix values whenever possible and use padded zero values only when applying the convolution matrix would overlap the original input matrix boundaries.
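A quick check of that index shift, with made-up values:
Itest = magic(20);                              % toy 20x20 "image"
padded = padarray(Itest, [5 5], 0, 'both');     % 30x30 after padding
isequal(Itest(3,7), padded(3+5, 7+5))           % true: original pixel (r,c) moves to (r+5, c+5)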
Hope this helps,
Michael