- as plain uint16 values in the range 0 to 4095; the "left" (high) 4 bits are stored as 0.
- as "left-justified" uint16 values in the range 0 to 65520; the "right" (low) 4 bits are stored as 0.
- as packed streams carried in uint8 arrays: the top 8 bits of the first value are stored in the first uint8, then the bottom 4 bits of the first value together with the top 4 bits of the second value are stored in the second uint8, then the bottom 8 bits of the second value are stored in the third uint8. This pattern of 3 uint8 holding two 12-bit values repeats.
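For the packed case, the byte layout above can be sketched as follows (a minimal illustration, not the camera vendor's API; `raw` is an assumed packed uint8 stream):

```matlab
% Unpack a packed 12-bit stream: every 3 uint8 hold two 12-bit values,
% using the byte layout described above.
raw = uint8([171 205 239]);                 % example bytes 0xAB 0xCD 0xEF
b1 = uint16(raw(1:3:end));
b2 = uint16(raw(2:3:end));
b3 = uint16(raw(3:3:end));
v1 = bitshift(b1, 4) + bitshift(b2, -4);    % top 8 bits + top 4 bits of byte 2
v2 = bitshift(bitand(b2, 15), 8) + b3;      % low 4 bits of byte 2 + byte 3
vals = reshape([v1; v2], 1, []);            % vals is [2748 3567], i.e. 0xABC 0xDEF
```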
How do I process and display a 10 and 12 bit image in matlab?
36 views (last 30 days)
SnooptheEngineer
on 11 Feb 2025
Commented: Image Analyst
on 13 Feb 2025
I am using an Allied Vision camera to take pictures of a laser. Using 8-bit mode I can see the image clearly, but using 10- and 12-bit mode I can't see the image (in the file I saved). I am assuming that's because MATLAB can only handle 8- or 16-bit data, so some scaling is needed. Here is my code for the 12-bit mode. The goal of using 12-bit and 10-bit modes is to get a larger dynamic range and finer resolution in my image, so I can identify borders and peaks better.
src = getselectedsource(v);
src.ExposureTime = 5000;
snapshot2 = getsnapshot(v);
f = figure;
ax = axes(f);
imshow(snapshot2, "Parent", ax);
imwrite(snapshot2, 'snapshot2.png') % used PNG since JPEG can't handle 16-bit images
delete(v)
clear src v
0 comments
Accepted Answer
Walter Roberson
on 11 Feb 2025
There are three primary ways of storing 12 bit images:
Now, regardless of the internal representation, the arrays returned by getsnapshot() are already unpacked -- but they might be unpacked on the right or they might be unpacked on the left.
If they are unpacked on the left (left-justified), then:
v12 = bitshift(snapshot2, -4);
imshow(v12)
clim([0 4095])
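If you are not sure which justification your camera uses, one heuristic (a sketch, assuming typical sensor output; `snapshot2` is the uint16 array returned by getsnapshot()) is:

```matlab
% Right-justified data never exceeds 4095; left-justified data has the
% low 4 bits always zero.
if max(snapshot2(:)) <= 4095
    v12 = snapshot2;                   % already right-justified, 0..4095
elseif all(bitand(snapshot2(:), 15) == 0)
    v12 = bitshift(snapshot2, -4);     % left-justified: shift down by 4 bits
else
    v12 = snapshot2;                   % genuinely 16-bit data; leave as-is
end
imshow(v12, [0 4095])
```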
More Answers (2)
Rohan Kadambi
on 11 Feb 2025
Edited: Rohan Kadambi
on 12 Feb 2025
MATLAB only has built-in numeric types for 8-, 16-, 32-, and 64-bit integers (as do most languages). Typically, when 10- or 12-bit images are encoded, they get stored in 16-bit arrays.
The simplest way of displaying the image is to cast the array to a floating-point type and rescale it as appropriate:
% Native 8-bit
I = imread("8bitimage.png");
imshow(I);
% 10-bit in 16-bit array
I = imread("10bitimage.png");
% Typo corrected by Walter
% imshow(rescale(double(I),0,2^10-1));
imshow(rescale(double(I), InputMin=0, InputMax=2^10-1));
% 12-bit in 16-bit array
I = imread("12bitimage.png");
% Typo corrected by Walter
% imshow(rescale(double(I),0,2^12-1));
imshow(rescale(double(I), InputMin=0, InputMax=2^12-1));
The call to double(I), which explicitly casts the uint16 to double, is not strictly necessary, but it is generally good practice when converting integer types.
3 comments
Walter Roberson
on 12 Feb 2025
Edited: Walter Roberson
on 12 Feb 2025
When I is encoded in 16 bits, then
rescale(double(I),0,2^12-1)
is double precision.
If the original data range is 0 to 4095 exactly then the rescale() will effectively leave the data unchanged, but double precision. imshow() will then have problems with that, as imshow() assumes that double precision data is in the range 0 to 1 and so will saturate most of the image.
If the original data range is 0 to something less than 4095 (for example 4064) then the actual range occupied by the data will be stretched to 0 to 4095, potentially ending up with values that include fractions.
If the original data range is 0 to 65520 exactly and the lower 4 bits are always zero, then rescale() will effectively divide the data by 16, but the result is double precision, with the same imshow() problems described above.
If the original data range is 0 to something less than 65520 and the lower 4 bits are always zero, then rescale() will effectively compress the data by a factor a little less than 16, potentially ending up with values that include fractions.
Solution:
If the original data range is 0 to at most 4095, then leave the data unchanged in its uint16 form: it is already scaled properly in this case.
If the original data range is 0 to at most 65520 and the lower 4 bits are always zero, then
newI = uint16(rescale(double(I), 0, 2^12-1, InputMin=0, InputMax=65520))
or, more simply,
newI = I ./ 16;
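A quick sanity check (with assumed left-justified sample values, not real camera data) shows the two forms agree; note that for uint16, `./` is integer division with rounding, which is exact here because the low 4 bits are zero:

```matlab
% 12-bit values 0, 1, 4095 stored left-justified (shifted left by 4)
I = uint16([0 16 65520]);
a = uint16(rescale(double(I), 0, 2^12-1, InputMin=0, InputMax=65520));
b = I ./ 16;
isequal(a, b)   % both are [0 1 4095]
```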
Rohan Kadambi
el 12 de Feb. de 2025
Ah, sorry for the typo; I misremembered the documentation for rescale(). The correct rescaling to double is:
I_new = rescale(double(I), InputMin=0, InputMax=2^n_bit-1)
Where n_bit is 10 or 12, depending on your bit depth. You can pass this I_new to imshow() and it should look the same.
Personally, I have no reason to work with integer types when analyzing images and prefer to switch to double to avoid accidental truncation. The discretization implicit in integer types is rarely relevant to my workflows, so mapping from Integer(0, 2^12-1) to Float(0, 1) is entirely equivalent. I think mapping from Integer(0, 2^12-1) to Integer(0, 2^16-1) is misleading because you're bringing back a discretization, but it's not the true one from the original data. If you do need to track the discretization after rescaling to Float(0, 1), it's trivial to calculate as 1/(2^n_bit), and you can use it in code as relevant.
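As a concrete sketch of that normalized-float workflow (n_bit = 12 assumed; the filename is the placeholder used earlier in the thread):

```matlab
n_bit = 12;
I = imread("12bitimage.png");            % 12-bit data stored in a uint16 array
I_float = double(I) / (2^n_bit - 1);     % map Integer(0, 2^12-1) to Float(0, 1)
imshow(I_float)                          % imshow expects doubles in [0, 1]
```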
Image Analyst
on 13 Feb 2025
Use [] to scale the actual data to whatever dynamic range your display can handle. For example:
imshow(snapshot2, [], "Parent", ax);
It doesn't matter if your data goes from 0 - 0.01 or 0 - 99999, and doesn't matter what data type the variable is -- it will handle it correctly.
2 comments
Image Analyst
on 13 Feb 2025
That should not be a problem, regardless if you're using the upper 12 bits or the lower 12 bits. If you need help with analyzing the image, like with segmentation, post your image and say what you want to measure.
If you need to measure things like length and area, it is probably not necessary to do any scaling at all. If, however, you need to measure intensity/brightness, then you will have to do a radiometric calibration. (Simply thresholding at some gray level is not measuring intensity; for that you don't need to calibrate.)

I'm thinking of cases like needing to measure the color of your regions to compare them with the CIELAB color values from a spectrophotometer, or needing to convert your pixel gray levels from an x-ray into some ground-truth intensity like mass attenuation coefficient in units of layers of aluminum. Or you want to linearize your camera (remove the gamma that cameras usually apply) so that twice the gray level means twice the optical intensity.

A normal photo is not linear unless you specifically told your camera to use a gamma of 1 (machine vision cameras can do that). So in a regular snapshot, a gray level of 50 is not twice the physical brightness of a gray level of 25, and a part of your scene with a gray level of 200 is not twice as bright as a part with a gray level of 100. This is not very intuitive to most people, but if you need an explanation see https://en.wikipedia.org/wiki/Gamma_correction or ask me.
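As a hedged sketch of that linearization step (the gamma value 2.2 is an assumption; check your camera's actual setting):

```matlab
g = 2.2;                                  % assumed encoding gamma
I_norm = double(I) / double(max(I(:)));   % normalize gray levels to [0, 1]
I_linear = I_norm .^ g;                   % now twice the value means twice the intensity
imshow(I_linear)
```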