Finding non-linear gamma model's parameters using lsqnonlin

I am going to do a gamma correction on some images. According to these equations, I have to find the unknown parameters using the lsqnonlin function.
For this problem I have a test target like this:
I know the RGB values of every color square as (R,G,B)_in, and I captured this color palette with my camera, so I have another set of RGB values, (R,G,B)_out'. I want to estimate A and k with lsqnonlin.
After estimating A and k, I have to estimate the gamma value from equation (3), but I don't know how to use lsqnonlin to solve my problem.
The initial values are:
Any help will be appreciated. Thanks :)
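The question's equations aren't reproduced in the thread, but for reference, a generic lsqnonlin fit has this shape. This is only a sketch with made-up names: `model` stands in for whichever of your equations you're fitting, `rgbIn`/`rgbOut` for your reference and captured values, and `params0` for your initial values.

```matlab
% Hypothetical sketch of an lsqnonlin fit (requires the Optimization Toolbox).
% rgbIn and rgbOut are Nx3 matrices of reference and captured RGB values;
% model(p, x) returns the model's predicted output for parameter vector p.
model = @(p, x) p(1) * x + p(2);                         % stand-in for your actual equations
resid = @(p) reshape(model(p, rgbIn) - rgbOut, [], 1);   % residual as a column vector
params0 = [1; 0];                                         % your initial guesses
params = lsqnonlin(resid, params0);                       % minimizes sum of squared residuals
```

lsqnonlin only needs a function handle that returns the vector of residuals and a starting point; it squares and sums the residuals internally, so don't square them yourself.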

2 comments

Please fix your question - I can't really read it.
Sorry for my bad writing. It is now edited.


Answers (2)

Image Analyst on 31 Dec 2014
I understand the equations - they're simplified versions of what I've used before. But I really don't think you need to do any of that, at least not in RGB color space. What you really should be doing is RGB to LAB calibration. But first let's talk about what you think you want to do.
So, what is your RGBout? It's the sRGB values - the nominal values supplied for the ColorChecker chart by the manufacturer, x-rite. By the way, you can get the chart directly from x-rite - one of the big 4 manufacturers of spectrophotometers: http://xritephoto.com/ph_product_overview.aspx?ID=1192. Those are probably the best reference RGB values you have, unless you're trying to match some other camera that you're defining as the master, gold-standard camera. Anyway, the x-rite values are sRGB values, and as you can see on the Wikipedia page for sRGB, http://en.wikipedia.org/wiki/SRGB, there is not a single gamma for sRGB - the overall gamma is roughly 2.2, with a 2.4 exponent in the non-linear section. So the reference values you want to match are already non-linear.
Now, if you're going to use your second set of equations to map a set of RGB values (Rout', etc.) into non-linear RGB (Rout) using the gamma formula, then Rout' should be non-linear and Rout should be linear - which they're not, because they already have a gamma built in if you use the ColorChecker sRGB values as the reference. So what do you get if you do what you showed? You get a gamma for mapping one non-linear set of RGB into another non-linear set of RGB. If your camera was set up with a gamma close to 2.4 (say 2.3), then the gamma you'll get maps a curve of gamma 2.3 into a curve of gamma about 2.4. That's hardly any change at all, so the gamma you'll get out is close to 1. With a gamma close to 1 you'll think your camera is linear - which is very deceptive, because it was actually 2.3, but you misread the equations and got out wrong values. You'll think "great, I'm almost linear, just what a CCD should be," but then you'll plot the gray-level intensity of the gray chips vs. the Y value of the chips (also supplied by x-rite) and notice that it's not linear - there's a gamma curve of around 2.3 due to your camera. Then you'll scratch your head saying "how can that be when I just showed the gamma is 1?" It's because you did not map your camera's RGB (with its gamma of 2.3) into linear values; you mapped them into non-linear values close to what you already have, except maybe for a brightness offset.
But what if you're using a good scientific/industrial camera where you can turn the gamma off (set gamma = 1) and get linear RGB values out? Well then your equations would tell you what gamma you'd need to map your linear RGB values into sRGB values. What good is that? None, really. It would merely tell you what Wikipedia did - that the gamma of sRGB is about 2.4. But so what? How does that help you? It doesn't.
OK, so what a complicated mess. I hope you followed it but I wouldn't be surprised if you didn't because it IS complicated and it takes people years to get a good feeling on this color science topic. Even I often get confused.
So, how do you avoid this problem? Well, it's an unnecessary problem that you don't even need to worry about, so don't. What you need to do is characterize your camera so that whatever image it produces, you map it into XYZ color space. If you're using a good camera that you can tell to be linear, then you can map the RGB into XYZ. The XYZ values are supplied by x-rite and you can take them as the "true" values. The great thing is that XYZ is almost linear compared to RGB. In other words, X is pretty much like red, Y is like green, and Z is like blue, so you can get a fairly linear mapping of RGB into XYZ. This is good because you don't need wild, crazy, higher-order curves that can throw off the estimate in between your training colors (which are the 24 chip colors).
So then you map RGB into XYZ with a fairly smooth polynomial, but with some higher order terms and cross terms, like X = function of R, G, B, R^2, G^2, B^2, R*G, R*B, and G*B plus an offset term. Then you use least squares to solve for the coefficients mapping all of those terms into an estimated X. Then you do the same for Y and Z. Then you use the analytical equations http://www.easyrgb.com/index.php?X=MATH&H=07#text7 to go from XYZ into LAB. NOW THIS IS WHAT YOU WANT! You have a calibrated system that is independent of exposure level, gamma, etc. You can take a picture of your scene with a color checker chart in it with any camera under any lighting conditions/colors (within reason) and get the same calibrated LAB values out. Now that you have that you just carry our your segmentation algorithm in LAB color space.
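That least-squares fit is just a linear solve once you build the design matrix of polynomial terms. A sketch with made-up variable names: `rgbChips` would be your 24 measured (linear) camera RGB triplets and `xyzRef` the 24 XYZ reference values from the x-rite data sheet.

```matlab
% Hypothetical sketch: fit the RGB -> XYZ polynomial by least squares.
% rgbChips is 24x3 measured linear RGB in [0,1]; xyzRef is 24x3 reference XYZ.
R = rgbChips(:,1); G = rgbChips(:,2); B = rgbChips(:,3);
% Design matrix: linear, squared, and cross terms, plus an offset column.
M = [R, G, B, R.^2, G.^2, B.^2, R.*G, R.*B, G.*B, ones(size(R))];
% Backslash solves all three least-squares problems (X, Y, Z) at once.
coeffs = M \ xyzRef;          % 10x3 matrix of fitted coefficients
xyzEst = M * coeffs;          % calibrated XYZ estimates for the 24 chips
```

Because the model is linear in its coefficients, plain backslash (mldivide) is enough here - no lsqnonlin needed for this stage.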
OK, I'm sure I've lost you by now. It's hard to impart years of color science training and practice into an Answers message. I suggest you read Charles Poynton's Color FAQ and Gamma FAQ here: http://www.poynton.com/

6 comments

:)) I am more confused now! Let me put the relevant part of the article here:
In my opinion, the authors don't really have a good understanding of color science. Of course, neither do I - after 37 years of working in image processing and color science, I feel that what I know compared to what there is to know could fit on a gnat's eyelash. But of course any professor will tell you the same thing - the more you learn and know, the more you realize how little you really know. Even the top color scientists in the world admit it's a complicated subject that they still don't fully understand, and that's why there's constant research in the field.
First of all (though a minor point), the ColorChecker chart is sold by Edmund Scientific, just as lots of other stores sell it, but the manufacturer is x-rite. And x-rite (not Edmunds) supplies the colorimetric values. Those values are given on the x-rite site here: http://xritephoto.com/documents/literature/en/ColorData-1p_EN.pdf You can see that the RGB values are "sRGB" values. Now, from the Wikipedia page I gave you, it says that sRGB has a gamma: "Unlike most other RGB color spaces, the sRGB gamma cannot be expressed as a single numerical value. The overall gamma is approximately 2.2, consisting of a linear (gamma 1.0) section near black, and a non-linear section elsewhere involving a 2.4 exponent and a gamma (slope of log output versus log input) changing from 1.0 through about 2.3."
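That piecewise behavior is easy to see by plotting the standard sRGB transfer function itself - a sketch, using the commonly published breakpoint and coefficients:

```matlab
% Hypothetical sketch of the sRGB encoding function the quote describes:
% a linear (gamma 1.0) segment near black, a 2.4-exponent curve elsewhere.
encode = @(c) (c <= 0.0031308) .* (12.92 * c) + ...
              (c >  0.0031308) .* (1.055 * c.^(1/2.4) - 0.055);
c = linspace(0, 1, 256);
plot(c, encode(c));
xlabel('linear intensity'); ylabel('sRGB-encoded value');
title('sRGB transfer function: no single gamma fits the whole curve');
```

The effective gamma (the local log-log slope) changes along this curve, which is exactly why you can't summarize sRGB with one exponent.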
One thing the authors said is true: most cameras have a gamma. However, CCD sensors are linear, and if they were using a good camera they could turn off the gamma to get the raw, linear CCD values, not gamma-altered ones. But they didn't - they're either using a cheap camera with no control over gamma, or they chose to use the non-linear, gamma-altered RGB values because they give a more pleasing image to look at on a monitor, even though they're not the best for image analysis (linear RGB would be).
But look at their equation. They're assuming that if they put in linear reference RGB values and apply that cross-channel linear equation, they'll be able to get the gamma-altered values (which are the values they get from their camera). So they have this model, obtained from linear least squares, that gives the [a] and [k] coefficients. Then they're trying to say that the model is really a gamma equation, and so they use non-linear least squares to estimate gamma (the 3 gammas). But this whole process assumes that the input reference RGB values are linear. They're not - they're the sRGB values they got from the chart spec sheet, which are non-linear; in fact, they already have a gamma applied. So you're mapping reference gamma-altered values into estimates of your gamma-altered values. Assuming the camera has a gamma close to 2.4, this transform will be pretty linear and won't change the values much. Besides, I don't even think this is what you want, even if they did start with linear values. You don't really want to know how to input reference values and get your values (what good is that to you?) - you want the inverse of that. What you'd want is how to input your gamma-altered values into some transform and get estimates of linear reference RGB values. You want to know how to "fix" your values. Right? Don't you want to know how to fix your messed-up values?
So now we get back to what I recommended above, and I've run this (in person) past 6 of the world's top color scientists who all concur. By the way, one of them Professor Stephen Westland literally wrote the book on color science ("Computational Color Science using MATLAB") and has a color toolbox for MATLAB available on the File Exchange: http://www.mathworks.com/matlabcentral/fileexchange/40640-computational-colour-science-using-matlab-2e
You can either map your actual RGB into sRGB using an RGB-to-RGB transform and then use analytical formulas given here for going from sRGB to XYZ and from XYZ to LAB, or just go directly from your actual RGB to XYZ, skipping the RGB-to-sRGB step altogether. We recommend the latter approach.
Chances are you don't want to do your calibrated color imaging in sRGB space anyway. It's hard to segment most things in RGB space, whereas it's much easier in an alternate colorspace like HSV or LCH or LAB. Those color spaces have the advantage that they're standards. The ColorChecker chart has intrinsic LAB values for each chip that don't depend on illumination level, but if you take a picture of that chart with your camera and get one set of RGB values, then take another picture at half the exposure time, your RGB values will be different. You can get almost whatever RGB values you want. So now you'd have to transform your arbitrary, exposure-dependent RGB values into "standard" sRGB so you can do color segmentation in sRGB space. But I'm saying no, don't do that. If you're going to transform your RGB values (which you should), then go into calibrated LAB colorspace, where you can finish your segmentation more easily. And if you do that, it doesn't matter what the gamma is, since it's already taken into account by the RGB-to-XYZ transform you derive.
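For the final XYZ-to-LAB step, the analytical formulas (as given on pages like easyrgb) look roughly like this. A sketch only: `xyz` is assumed to be an N-by-3 matrix of calibrated XYZ values scaled so the white point Y is 100, and a D65 illuminant is assumed.

```matlab
% Hypothetical sketch: convert calibrated XYZ to CIE LAB, D65 white assumed.
whiteXYZ = [95.047, 100.0, 108.883];                 % D65 reference white
% Standard CIE f(t): cube root above the threshold, linear segment below it.
f = @(t) (t >  (6/29)^3) .* t.^(1/3) + ...
         (t <= (6/29)^3) .* (t / (3*(6/29)^2) + 4/29);
n = size(xyz, 1);
xyzNorm = xyz ./ repmat(whiteXYZ, n, 1);             % normalize by the white point
L = 116 * f(xyzNorm(:,2)) - 16;
a = 500 * (f(xyzNorm(:,1)) - f(xyzNorm(:,2)));
b = 200 * (f(xyzNorm(:,2)) - f(xyzNorm(:,3)));
lab = [L, a, b];                                      % Nx3 calibrated LAB values
```

If you have the Image Processing Toolbox, xyz2lab/rgb2lab do this conversion for you; the point of the fitted polynomial stage is that your XYZ input is already calibrated to the chart.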
So, ta-da, there you go. Sorry, I know I've probably confused you even more, and anyone else who attempted to follow me, but color is a complicated subject.
+2 !
I don't do photomicroscopy (never have, so I have no personal experience), but if I were asked to help design a study using routine histological staining (such as hematoxylin-eosin, fluorescent tagging, and others), I now know to first choose a high-quality linear CCD camera and switch off the gamma.
Three questions (that I can think of) remain:
1. What would I want to specify about the colour capabilities of the camera? (In particular, what would I want it to be able to produce, such as RGB, HSV, LAB and others?)
2. What would I then choose as the best colour analysis techniques so that I could get the best colour definitions and colour resolution? (I know spatial resolution is a different subject entirely.) This would be prior to segmentation and other analyses, but with them obviously in mind.
3. What would be the best way to set up, calibrate, and test the imaging system before collecting actual data?
I'm asking this rhetorically (my ignorance is obvious), but since it's best to design the details of a study (including the statistical design) before collecting the data rather than after, it is an important consideration.
Thanks a lot for your really good advice. Of course there is no doubt about your knowledge and experience, and I would accept skipping the article's algorithm and following your suggested guidelines. But I have to implement what the article said first, then fix the article's problems and propose my own method, because I must compare the article's results with my own.
Thanks a lot, again and again, for your concern.
Can you include Image Analyst’s approach as part of your method? It would seem to me that correcting misconceptions about the appropriate instrumentation and analysis would go far in advancing photomicroscopic diagnosis.
I speak from some experience. The group I was with was one of the first two (in the early 1990s, the other was Pfürtscheller’s group) to discover that it was possible to determine the task a person was performing by analysing the person’s EEG activation patterns. (In our group, it was my idea!) Don’t be reticent in advancing new approaches.
Well go ahead, be my guest. But I warn you that the colors they use in their paper and claim to be the official RGB values of the chips are not what is supplied. Just look at the link I gave you and see for example that the official yellow is (231,199, 31), but in their paper they give (255,217,0). Hmmmmm.... Makes you wonder where they got their values. Perhaps they are not the sRGB values but they undid the sRGB transfer function to linearize it - if they did so, they didn't mention it.
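One way to test that guess is to undo the sRGB transfer function yourself and see whether the paper's numbers fall out. A sketch, using the standard piecewise sRGB linearization (the breakpoint and coefficients are the published sRGB constants):

```matlab
% Hypothetical check: linearize the official yellow chip (231,199,31) with the
% inverse sRGB transfer function and compare against the paper's (255,217,0).
srgb = [231, 199, 31] / 255;                 % normalize 8-bit values to [0,1]
linRGB = zeros(size(srgb));
for i = 1:numel(srgb)
    if srgb(i) <= 0.04045
        linRGB(i) = srgb(i) / 12.92;         % linear segment near black
    else
        linRGB(i) = ((srgb(i) + 0.055) / 1.055) ^ 2.4;
    end
end
round(linRGB * 255)                          % linearized 8-bit values to compare
```

If the paper's values don't match this linearization either, that's more evidence they used some other, undocumented source for their reference colors.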
Anyway, have a crack at their Appendix A if you want. What you'll end up with is some altered RGB image different than your original one. But then what? You're still going to have to do color classification or segmentation. Anyway, good luck with this learning adventure. I hope by going through it you will start to realize what I said. Don't feel bad - it's very complicated, I know from experience of learning it myself and teaching it to hundreds of students.


Image Analyst on 30 Dec 2014
As you might guess from my icon to the left, I do color-calibrated image analysis all the time. You can do color standardization, where you match your colors to some "gold standard" image (RGB-to-RGB conversion). Or you can do calibrated color analysis, where you convert your colors to CIE LAB values (RGB-to-LAB conversion). Or you can do both if you need to (which is only required in certain situations). RGB-to-LAB is really best because you're mapping the RGB to known standards, whereas with RGB-to-RGB you're just mapping the RGB to some other RGB that was arbitrary to begin with. If you don't understand, ask.
If you want the gamma you can get that, but why do you want the gamma? Just to characterize the camera? Or do you really want to measure something in the image, like color, in which case you don't really need the gamma?

1 comment

Afsaneh on 31 Dec 2014 (edited 31 Dec 2014)
I am working on cancer detection, and I need pictures with exact RGB values, so gamma correction is important as a pre-processing step.


Asked: 30 Dec 2014
Commented: 1 Jan 2015
