1. ## Gamma Correction

I have read the Understanding Gamma Correction article in the Cambridge in Colour tutorials and other similar articles, but I am slightly confused.

I understand the principle and benefits of encoding the image in a colour space, e.g. Adobe RGB (1998), as it encodes the tones with a gamma of 1/2.2, thereby pushing tones down from the highlights towards the shadows (thereby representing human vision, creating a perceptually uniform image, assisting in using curves, etc.).

But, once you view this image on a monitor calibrated to a gamma of 2.2, it cancels out the benefit of the colour space encoding: the system gamma is 1.0 and you are back to viewing an image with a linear tonal relationship, i.e. half the tones in the top exposure zone, etc.

So you are losing the benefit. You will have a small number of tones in the shadows, and you will see banding if you apply significant curves, etc.

Am I missing something?

Ian.

2. ## Re: Gamma Correction

Hi Ian, I think you've got it mostly right, but perhaps the confusion is just with the implications. The net result of any optical system (camera->image->monitor) should have a gamma that cancels out to 1.0 overall -- if faithful reproduction of the original scene is the goal. The banding/posterization benefit still holds though, because the bits were still *stored* non-linearly (even if the end result has a gamma of 1.0). In other words, even though the image is displayed with a linear gamma, the bits are still perceptually (not linearly) distributed.

Clear as mud? This is probably one of the most confusing concepts in digital imaging, but thankfully, it's not one we have to worry about anywhere near as much as we used to.

3. ## Re: Gamma Correction

I am struggling with the same conclusion as Ian. I have poked round the net quite a bit and seem to have missed the answer too. How can gamma 1 (linearity), representing the original scene, be maintained if the tonal values have been shifted in a non-linear fashion to match our gammified vision? The tutorial here does not seem to explain how you do not end up back where you started (i.e. a linear, what-the-camera-sees image). Is the key in the reference to gamma = 1 (i.e. the original scene as we saw it) being the result of the two gamma corrections not cancelling each other out, but producing a net effect of gamma 1 with the tonal shifting intact?

4. ## Re: Gamma Correction

I can try to reduce a degree of confusion here... Because I reduced it for myself just recently, by reading this tutorial: Understanding Gamma Correction
Originally Posted by ian moore
... pushing tones down to the shadows from the highlights (therefore representing human vision,and creating a perceptually uniform image,assisting in using curves,etc).
I think the above statement is the source of the confusion, actually. Gamma transformation is not about pushing tones up or down. It's about pushing the bits [representing the tones] up or down. Gamma transfer is a technique to preserve the tones as much as possible!.. And the whole idea has to do with a limited number of bits.

Side note: if we were to use 32 bits to represent each color value, we would not need gamma at all.

And because the number of bits we'd like to use is small (say, 8), we want to use them as efficiently as possible. What that means is that we want to use more bits to represent dark tones and (naturally) fewer bits to represent light tones. (The reason for wanting that is that our eyes can distinguish variations in dark tones better than in light tones, as we know.) To put it another way: we want to be more careful (more precise) with dark tones than with light tones.

And with all that in mind: gamma transfer is not intended to alter the image in any way. It only alters the representation of different tones inside the computer. HOW it alters the representation (just the technique of how to convert 256 gradations of color into 256 different numbers represented by 8 bits) is described by gamma values (1/2.2, 2.2 etc).

And, obviously, IF the transfer was applied, we want the reverse transfer to be applied before the image is shown on a monitor or in print. OK, maybe it's not so obvious. Let's see an example.
Take a pixel, recorded by the sensor as about half-way (50%) between black and white, i.e. value 128 [on a scale from 0 to 255].
In RAW (discarding less significant bits) I'll have number 128 actually recorded.
In print (or on the monitor) I'll want, again, 128. Agreed?
BUT - when I start processing it in the computer, I'll apply gamma first, and my pixel will be converted into a value which is roughly 75% of the way between 0 and 255, something like 192 (see the first graph here).

Side note: Notice how we used 75% of all available 8-bit values to represent darker 50% of all luminance gradations. By looking at the same graph we can also see how 50% of all 8-bit values represent darker 20% (or so) of all luminance gradations, etc.

Now, this value - 192 - will be carried throughout all my [post-RAW] post-processing: JPEG encoding/decoding, storing on disk/card, processing in PS or whatever, etc., etc... all the way till it needs to be shown back to me - on a printer or a monitor. And that's when it needs to be converted back to 128 (as we agreed above) [through the "reverse" gamma transformation].
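The round trip above can be put into numbers. Here is a minimal Python sketch, assuming a pure 2.2 power curve (the real sRGB curve adds a small linear toe near black, so exact values differ slightly from these):

```python
# Round trip of a linear mid-grey pixel through gamma encoding and back,
# assuming a pure 2.2 power curve on a 0-255 scale.

def gamma_encode(value, gamma=2.2):
    """Map a linear 0-255 value to its gamma-encoded 0-255 value."""
    return round(255 * (value / 255) ** (1 / gamma))

def gamma_decode(value, gamma=2.2):
    """Map a gamma-encoded 0-255 value back to a linear 0-255 value."""
    return round(255 * (value / 255) ** gamma)

encoded = gamma_encode(128)      # linear 128 is stored as 186
decoded = gamma_decode(encoded)  # and comes back as 127 (rounding loses 1)
print(encoded, decoded)
```

Note that with a pure 2.2 curve the encoded value comes out nearer 186 than the 192 quoted above, and 8-bit rounding makes the round trip off by one; the principle is the same either way.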

I hope this clarifies some things to some people...

5. ## Re: Gamma Correction

Vladimir,
Your response is very helpful, but here is where I get lost. What follows
is totally for illustration only.

1. Without a camera, humans have a gammified vision of a scene (non-linear)?
2. Put a camera between our eyes and the same scene, and the camera captures
a raw linear depiction of the scene, which is not what we would see (or is it?).
3. If we display the converted raw digital capture (no gamma compression applied via a colour space)
back to ourselves via the LCD on the back of the camera (now digital-to-analog converted, with no
gamma expansion applied by the camera), our gammified view provided by our eyes should make
the scene look OK?
4. The only gamma involved in this totally hypothetical procedure is our vision?

I have another question or two, perhaps, but I had better stop there if the hypothetical above is incorrect.
Any help much appreciated and thanks for stepping up.

Tim

6. ## Re: Gamma Correction

1. Without a camera, humans have a gammified vision of a scene (non-linear)?
Yes

2. Put a camera between our eyes and the same scene, and the camera captures
a raw linear depiction of the scene, which is not what we would see (or is it?).
"The camera captures a raw linear depiction of the scene" and it IS what we would see.

3. If we display the converted raw digital capture (no gamma compression applied via a colour space)
back to ourselves via the LCD on the back of the camera (now digital-to-analog converted, with no
gamma expansion applied by the camera), our gammified view provided by our eyes should make
the scene look OK?
One thing that's good to know about cameras is that you do not see the RAW image on your LCD (or in the histogram, for that matter). What you see on the LCD is the gamma-transformed (1/2.2), JPEG-encoded, then JPEG-decoded, then gamma-transformed (2.2) image. But(!) IF LCDs were to display RAW directly, AND they had a linear gamma, then yes, what you say would be correct.

4. The only gamma involved in this totally hypothetical procedure is our vision?
Yes

I have another question or two, perhaps, but I had better stop there if the hypothetical above is incorrect.
Any help much appreciated and thanks for stepping up.
You are welcome. This is a good discussion. It bothers me a little that you never mentioned "bits" in your questions... while it's all about bits, as I mentioned above. You don't like bits?

7. ## Re: Gamma Correction

Correct me if I'm wrong, but isn't it more about how the bits are interpreted, rather than actually pushing them around?
As in: I have a single-pixel image, and that pixel's value is 128. When gamma is applied, that value 128 hasn't changed, but when the image is rendered it's interpreted as something closer to 190-something, which would "stretch" any values below 128 across that 0-to-190-something range?

8. ## Re: Gamma Correction

Originally Posted by blakemcguire
Correct me if I'm wrong, but isn't it more about how the bits are interpreted, rather than actually pushing them around?
As in: I have a single-pixel image, and that pixel's value is 128. When gamma is applied, that value 128 hasn't changed, but when the image is rendered it's interpreted as something closer to 190-something, which would "stretch" any values below 128 across that 0-to-190-something range?
Correcting...

"Pushing bits around" is not exactly what happens, of course... It's more like pushing 8-bit (or 16-bit) values around to encode tones more efficiently.

If I have a single pixel in a RAW file, and that pixel's value is 128, then when gamma is applied, that value 128 changes and becomes 190-something (the pixel in the file doesn't change, but if you are going to produce a JPEG out of it, you'll encode 190-something). When the image is rendered (i.e. displayed or printed), it's known that 190-something represents an initial value of 128, and that's what is printed or displayed.

On the other hand, if I have a single pixel in a JPEG file, and that pixel's value is 128, then it's assumed that it's already gamma-encoded, and what will be displayed/printed is 50-something (roughly 20% of the distance from 0 to 255).
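To put a number on the JPEG case, here is a small sketch under the same pure-2.2 assumption (the exact sRGB curve would give a slightly different value):

```python
# A JPEG value is assumed to be gamma-encoded already, so the display
# converts it back to linear light before showing it.

def decode_to_linear(v, gamma=2.2):
    """Interpret a gamma-encoded 0-255 value as a linear 0-255 light level."""
    return round(255 * (v / 255) ** gamma)

# A stored JPEG value of 128 corresponds to linear value 56,
# i.e. about 22% of full brightness (the "20% or so" mentioned above).
print(decode_to_linear(128))
```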

9. ## Re: Gamma Correction

Thanks for replying. So, if you will humour me, I would like to carry on a 'bit' if you don't mind. I thought I had better mention the word to put your mind at ease.

My first attempt was just to confirm that, with gamma removed totally from the system, putting a camera between our eyes and the scene and displaying the result (say, a JPEG with no gamma) on the back of the camera, our 'gammified' eyes would see the scene correctly. I hope I got that right. If not, please say so. Now, on to gamma.

Is the reason for the gamma 1/2.2 application at the input stage that all devices with digital-to-analog components (CRTs, LCDs, cell phones, etc.) and printers have gamma 2.2 as part of their operating environment? I understand the history of the implementation of gamma correction (a signal quality issue), but it seems like it is all hardware that is driving this gamma thrashing. Perhaps a carryover from the early broadcast days and the coincidence of the display characteristics of the CRTs used at that time? Also, if we apply gamma 2.2, does it not undo all the luminance bit tweaking generated by the application of gamma 1/2.2 at the input stage? If so, then our vision must be applying some sort of gamma correction (including the bit-tweaking effect of gamma 1/2.2) to the image on the back of the camera to compensate for the now-linear image being displayed.

I know some are probably saying "who cares, just get out and shoot", and they probably have a point. I sure hope there are some others out there struggling with this who might get some benefit out of it.

Any help greatly appreciated.

Thks

Tim

10. ## Re: Gamma Correction

Originally Posted by Timv
I would like to carry on a 'bit' if you don't mind. I thought I better mention the word to put your mind at ease.
Thank you, sir! Appreciate your concern, sir...

...with gamma removed totally out of the system ... our 'gammified' eyes would see the scene
correctly. I hope I got that right. If not please say so.
What are you saying, mister?? Are you proposing removing gamma and not using it at all?? Are you crazy!?

Just kidding...

Seriously, though - for starters - like you mention, it's so embedded into everything already, that we should really take it as a "given".

But on the other hand, I think it has every right to exist, and it would have to be invented now if it hadn't been invented a long time ago. The reason is that while 8 bits is a very convenient quantity for storing an image, it's quite insufficient for any image manipulation, and since gamma makes for better utilization of those 8 bits, it's a really good idea to have it.

And as for the question "why 2.2?" - I think I read somewhere that it's close to the actual gamma of a statistically average human eye. That's why. The fact that a CRT's gamma is close to that is, I think, just accidental (or serendipitous, if you will).

The conversion itself is a really cheap and easy operation in SW or HW - it's just a 256-row table look-up. (They don't actually raise numbers to the power of 1/2.2, etc.) So it's easy for computers to do; it's just hard for people to understand. But so are sooo many other things in today's technology, which we don't even try to understand...
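The table look-up can be sketched like this (again assuming a pure 2.2 power curve; a real pipeline would bake the exact sRGB curve into the tables instead):

```python
# Build both 256-entry look-up tables once; after that, every per-pixel
# conversion is just an array index, with no power function involved.

GAMMA = 2.2
ENCODE_LUT = [round(255 * (i / 255) ** (1 / GAMMA)) for i in range(256)]
DECODE_LUT = [round(255 * (i / 255) ** GAMMA) for i in range(256)]

def encode_pixel(v):
    """Gamma-encode one 0-255 pixel value via the precomputed table."""
    return ENCODE_LUT[v]

print(encode_pixel(128))  # same 186 as the power formula would give
```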

And to the idea of "who cares, just get out and shoot" I say: who cares about shooting photos, when we can have so much fun discussing gamma!

11. ## Re: Gamma Correction

See, I thought I had it more or less understood :P

Oh well, at least it still makes sense.

12. ## Re: Gamma Correction

Thks for the patience Vladimir. I will spend a little more time reading
your comments and I am sure it will start to take hold.

Cheers

Tim

13. ## Re: Gamma Correction

I agree with what Vladimir and others are saying.

The aim of the camera/computer/display end-to-end system is to reproduce what we see (before we start considering artistic adjustment of the image). So the end-to-end gamma should be 1 (i.e. linear). Otherwise, what's displayed won't look the same as the original scene.

However, because our eyes aren't linear, if you have linear encoding, the step change between adjacent levels "looks" bigger to our eyes at low levels than at high levels. This doesn't matter with 16 bit encoding as even at low levels the steps are too small to see. So Lightroom uses 16 bit linear encoding internally, for example.

With 8 bit encoding (e.g. jpeg), if linear encoding is used, we can see the steps at low level. Gamma encoding means the step size is made smaller at low levels (and bigger at high levels), but to our eyes the steps are perceptually more even, and we don't see any steps. The gamma correction needs to be removed by the time the image data is output from the display or the image won't look the same as the original scene.
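The step-size argument above can be checked numerically. A rough Python sketch (pure 2.2 power curve assumed, rather than the exact sRGB curve):

```python
# Compare the jump in linear light between adjacent 8-bit codes near black:
# gamma-encoded codes vs. plain linear 8-bit codes.

GAMMA = 2.2

def linear_light(code, gamma=GAMMA):
    """Linear light level (0.0-1.0) that a gamma-encoded 8-bit code represents."""
    return (code / 255) ** gamma

# With gamma encoding, the first steps above black are tiny in linear light...
gamma_step = linear_light(11) - linear_light(10)

# ...while plain linear 8-bit coding has the same step of 1/255 everywhere.
linear_step = 1 / 255

print(gamma_step < linear_step)  # prints True: shadows get much finer steps
```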

Quite separately - many monitors are non-linear (light output is not proportional to signal input). A monitor profile can contain gamma correction to counter this. When you calibrate a monitor (either with a colorimeter or by eye) the effect is to create a gamma correction equal and opposite to the non linearity in the monitor.

14. ## Re: Gamma Correction

Vladimir,

I re read everything last night and wish to make a comment and if you would share your thoughts that would be great.

I have included the following quote from the gamma explanation on this site, which I found a little confusing when first trying to understand gamma: "A gamma encoded image has to have gamma correction applied when it is viewed ...". For ease of understanding, if "encoded" is being used for one half of the flow, why not use "decoded" to describe the other side instead of the generic term "gamma correction"? "Decoded" might be a little more specific, as in gamma 2.2.

The whole gamma concept seems to apply a certain transformation going from analog to digital, and then offset or reverse this transformation when going from digital to analog for our non-linear viewing on whatever device. But here is the part I have missed (I hope): they are not mirror procedures, because gamma correction 1/2.2 (encoding) also includes a "redistribution of tonal levels closer to how our eyes perceive them", and gamma 2.2 does not undo this adjustment at the point of eventual display. If it did, then we would have gained nothing. If this is in fact what happens, I don't recall ever reading about this subtle but perhaps (if true) important distinction, or maybe it has been described in a way I have not picked up on. Maybe it is obvious, but I try not to make assumptions. If this is not a possibility, then I am beyond help.

Thks

Tim

15. ## Re: Gamma Correction

But here is the part I have missed (I hope): they are not mirror procedures, because gamma correction 1/2.2 (encoding) also includes a "redistribution of tonal levels closer to how our eyes perceive them", and gamma 2.2 does not undo this adjustment at the point of eventual display.
Yes, it does!

The whole point is to undo the adjustment at the point of display (or before) so that end-to-end the result is linear. You might like to see my post earlier this afternoon.

The purpose of gamma encoding is absolutely not to make the image look different. We want the image to look the same as the original scene. It is to make use of a more optimal encoding, to avoid digitisation effects (where you can see the steps between adjacent levels) and possibly to reduce perceptual quantisation noise.

We may also need some form of gamma correction in the display driver (typically by means of a monitor profile) to correct non-linearity in the display.

But the end-to-end result - from relative numbers of photons into the camera to photons out of the monitor - should be linear. If twice the number of photons hit the sensor from one part of an image compared to another part, then twice the number of photons should be emitted by the monitor for the first part of the image compared to the second part.

No overall non-linearity, gamma correction or redistribution in the function of photons in to photons out.

Forgive me if I've misunderstood your point

16. ## Re: Gamma Correction

Thks for the response Simon. Are you and McQ's post above saying the same thing ?

17. ## Re: Gamma Correction

Originally Posted by Timv
Thks for the response Simon. Are you and McQ's post above saying the same thing ?
Yes, I think so.

18. ## Re: Gamma Correction

Originally Posted by Timv
Thks for the response Simon. Are you and McQ's post above saying the same thing ?
Yep, and I am saying the same thing too.
I was just trying to make the details easier to understand. Still not sure if I've succeeded.

19. ## Re: Gamma Correction

vladimir, I should have said: I think I'm saying the same as you too!

20. ## Re: Gamma Correction

I've only just caught up with this thread, but could I add some comments that may or may not help clarify things?

Many moons ago I was involved in the transmission of analogue television signals over long distance trunk routes.

With this technology, a concept of pre-emphasis and de-emphasis was used to improve signal-to-noise performance. It involved boosting the high-frequency components of the TV signal before transmission and then applying an opposite de-emphasis curve at the other end.

This compensated for the fact that the transmission path and equipment added higher levels of noise at the higher frequencies.

I think you can draw an analogy between this and the concept of gamma encoding. It's not analogue noise but digital "noise" or more correctly "quantisation noise".

By utilising a proportionately higher number of the available bit levels for the darker end, where our eyes are more sensitive to level differences, the quantisation noise is reduced before JPEG compression. As a result, when viewing the image on screen, the image should look a bit cleaner, despite the fact that some additional quantisation noise is introduced by the gamma correction itself.

As has been mentioned above, if 16 or 32 bits were used rather than 8, then gamma encoding would be of little relevance.
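Dave's quantisation-noise point can be illustrated with a small sketch (again assuming a pure 2.2 power curve, and a hypothetical deep-shadow light level picked just for illustration):

```python
# Store one dark linear-light level in 8 bits two ways - directly, and via
# gamma encoding - then compare the round-trip quantisation error.

GAMMA = 2.2
dark = 0.003  # a deep-shadow linear light level on a 0.0-1.0 scale

# Plain linear 8-bit storage: quantize the linear value directly.
linear_stored = round(dark * 255)
linear_recovered = linear_stored / 255

# Gamma-encoded 8-bit storage: encode, quantize, decode.
gamma_stored = round((dark ** (1 / GAMMA)) * 255)
gamma_recovered = (gamma_stored / 255) ** GAMMA

linear_error = abs(linear_recovered - dark)
gamma_error = abs(gamma_recovered - dark)
print(gamma_error < linear_error)  # prints True: less shadow error with gamma
```

The same comparison run at a bright level would favour linear storage slightly, which is exactly the trade being made: precision is moved from the highlights, where our eyes don't miss it, to the shadows, where they do.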

Just my thoughts on the subject for what they are worth.

Cheers Dave

