Just to reiterate the problem I want to solve. See the calibration formula for reference. I have three 3D arrays: a white reference image of size (100x1024x448), a dark reference image of size (100x1024x448), and image data of size (1100x1024x448). I want to use these arrays to get the calibrated image, denoted by R in the formula.
Proximal Hyperspectral data calibration using white and dark reference images.
Billy Ram on 23 Nov 2021
Commented: Image Analyst on 1 Dec 2021
I am using a proximal hyperspectral (HSI) sensor, the Specim FX10, with a spectral range of 400-780 nm. These HSI images need to be calibrated with white and dark reference images before further analysis. The reference images are collected every time alongside each individual data image.
The following is the screenshot of the data collection folder:
The formula used to calibrate the image is R = (S - D) / (W - D), where S is the raw data image, D the dark reference, and W the white reference.
The camera collects the dark reference by simply closing the shutter; for the white reference, it captures a small block of Teflon. The scanning area (height) for both reference images is very small compared to the data image, and that, I think, is the problem I am having: when I try to do any analysis using these values, the array sizes don't match (100x1024x448 vs. 1100x1024x448) [height x width x bands].
Difference between the sizes of the dark, white, and raw data arrays:
This is the code I am running to get the above values:
I am new to MATLAB and learning how to complete my analysis. Can anyone point out what I am missing, or how I should approach this problem? Any additional resources for proximal HSI data analysis would also be really helpful. Thanks!
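A minimal sketch of the size mismatch described above, assuming the three cubes have been loaded into variables named white, dark, and data (those names, and the zero-filled placeholders standing in for the real loaded data, are assumptions, not from the original post):

```matlab
% Hypothetical placeholders for the loaded hyperspectral cubes,
% with the sizes reported in the question [height x width x bands].
white = zeros(100, 1024, 448);   % white reference
dark  = zeros(100, 1024, 448);   % dark reference
data  = zeros(1100, 1024, 448);  % raw data image

% Element-wise arithmetic such as data - dark errors out here,
% because the first (height) dimensions differ: 1100 vs. 100.
size(data)
size(dark)
```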
Image Analyst on 30 Nov 2021
I don't think you want W and D to be images. You need them to be scalars so the image size doesn't matter. For each wavelength you just need to get the W and D values, like by taking the mean of the whole image assuming the teflon takes up the whole field of view. So knowing that, you will know the spectral responsivity of your camera/lens system. And therefore the size of the reference image doesn't matter. Though I don't know why the white and dark images would have a different size than the actual test images. And I don't know what "100x1024x448" means. Which is rows, which is columns, and which is wavelengths (or number of images)?
The only reason that W and D should be images instead of scalars is in situations where the spectral reflectance varies over the image. But that should not be the case if you have a uniform, homogeneous block of Teflon filling the field of view.
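Following that suggestion, here is a sketch of per-band calibration, assuming the cubes are ordered [height x width x bands] and stored in variables named white, dark, and data (all variable names are assumptions). Averaging each reference over its spatial dimensions leaves one scalar per band, so the different heights no longer matter:

```matlab
% Collapse each reference cube to one scalar per band by averaging
% over the spatial dimensions (height and width). Cast to double
% first so the subtraction below cannot clip at zero.
W = mean(mean(double(white), 1), 2);   % size 1 x 1 x 448
D = mean(mean(double(dark),  1), 2);   % size 1 x 1 x 448

% Implicit expansion broadcasts the 1x1x448 references across the
% 1100x1024x448 data cube, giving one reflectance value per voxel.
R = (double(data) - D) ./ (W - D);
```

Implicit expansion of compatibly sized arrays requires MATLAB R2016b or later; on older releases, bsxfun or repmat would be needed instead.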
Image Analyst on 1 Dec 2021
If they're already double, then fine. I'm just saying that if you try to subtract uint8 numbers, the result will clip to 0 and won't give you the correct difference when that difference would be negative. Observe:
v1 = uint8(9);
v2 = uint8(150);
result = v1 - v2   % uint8 arithmetic saturates: displays 0, not -141
% Now the same subtraction in double
v1 = 9;
v2 = 150;
result = v1 - v2   % displays -141, the correct difference
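A minimal fix, assuming the image cubes are stored as unsigned integers, is to cast to double before subtracting:

```matlab
v1 = uint8(9);
v2 = uint8(150);
% Casting first avoids the saturating integer subtraction.
result = double(v1) - double(v2)   % -141, as expected
```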