ajohnw, thanks for the link to Wikipedia. That article contains a sentence that isn't quite right, and feeds a lot of the misunderstanding:
"Gamma encoding of images is required to compensate for properties of human vision, hence to maximize the use of the bits or bandwidth relative to how humans perceive light and color."

(My emphasis). Some people latch on only to the first part of that sentence, which I've highlighted. It's wrong, anyway. Gamma encoding is not required, but it's useful when encoding in 8 bits or less. It's largely irrelevant in 16 bits.
Gamma encoding does not "compensate for the properties of human vision"; it simply encodes images in a way that allows fewer bits to be used for a given signal-to-noise ratio. It's a piece of engineering expediency. A tone curve is applied before encoding, and that tone curve has to be removed (by applying the opposite tone function) before display.
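To put some numbers on the "fewer bits for a given signal-to-noise ratio" point, here is a quick sketch of my own (the gamma of 2.2 and the test luminances are arbitrary illustrative choices, not anything from the article). It quantizes a linear ramp to 8 bits once directly and once through a tone curve, then compares the round-trip error in the shadows:

```python
import numpy as np

gamma = 2.2
luminance = np.linspace(0.001, 1.0, 10000)  # normalized linear scene luminance

# Linear 8-bit encoding: quantize the luminance directly.
linear_codes = np.round(luminance * 255) / 255

# Gamma 8-bit encoding: apply the tone curve, quantize, then remove the
# curve again (the "opposite tone function") to get back to linear values.
gamma_codes = np.round(luminance ** (1 / gamma) * 255) / 255
gamma_decoded = gamma_codes ** gamma

# Relative round-trip error in the darkest tenth of the range.
shadows = luminance < 0.1
err_linear = np.abs(linear_codes[shadows] - luminance[shadows]) / luminance[shadows]
err_gamma = np.abs(gamma_decoded[shadows] - luminance[shadows]) / luminance[shadows]

print(f"max relative error in shadows, linear 8-bit: {err_linear.max():.1%}")
print(f"max relative error in shadows, gamma 8-bit:  {err_gamma.max():.1%}")
```

The linear encoding wastes codes on the highlights and leaves the shadows coarsely quantized; the gamma encoding spreads the same 256 codes so the relative error stays small everywhere, which is the whole engineering justification.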
A second purpose for using a tone curve (not usually exactly a gamma function) is to compensate for non-linear response in output devices (monitors).
These two unrelated purposes get confused because the tone curve applied before encoding is usually a gamma curve (though not with sRGB), and the response of CRT monitors is (quite coincidentally) often approximately a gamma curve.
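On the "though not with sRGB" point: the published sRGB curve is piecewise rather than a pure power law, a short linear segment near black joined to a 1/2.4 power function, which together behave roughly like an overall gamma of about 2.2. A minimal sketch of the standard formulas:

```python
def srgb_encode(c_linear: float) -> float:
    """sRGB transfer function: linear segment near black, then a 1/2.4 power."""
    if c_linear <= 0.0031308:
        return 12.92 * c_linear
    return 1.055 * c_linear ** (1 / 2.4) - 0.055


def srgb_decode(c_encoded: float) -> float:
    """Inverse of srgb_encode, recovering linear light."""
    if c_encoded <= 0.04045:
        return c_encoded / 12.92
    return ((c_encoded + 0.055) / 1.055) ** 2.4
```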
Both these functions are techy, engineering functions. Ideally they would be completely hidden from photographers, and we shouldn't even need to be aware of them. In fact, with colour management, we don't need to know anything about it. Colour management deals with it, and we can forget all about it.
PS - I've just edited that sentence on the Wikipedia page, so it now reads:
"Gamma encoding of images is used to optimise the usage of bits when encoding an image, or bandwidth used to transport an image, by taking advantage of the non-linear manner in which humans perceive light and color."

Let's see if that change sticks. I think it's more accurate.

No mention of max brightness, as that is supposed to look after itself; hence the max output in the plots is always 1. The brightness control is a separate entity, set to suit. It's also about obtaining good tonal characteristics when images are viewed, so I think gamma would be applied whatever bit depth was used, within reason. It has also been used to "improve" images.
With 16-bit channels, though, they might be a bit more like real stops.
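I read the "real stops" remark as being about how many code values land in each photographic stop. Here is a rough count of my own (top 8 stops only, gamma 2.2 chosen arbitrarily), not anything from the thread:

```python
gamma = 2.2

def codes_per_stop(bits, encode):
    """Count distinct code values in each of the top 8 stops below white."""
    levels = 2 ** bits
    counts = []
    upper = 1.0
    for _ in range(8):
        lower = upper / 2  # one stop down = half the linear luminance
        hi = round(encode(upper) * (levels - 1))
        lo = round(encode(lower) * (levels - 1))
        counts.append(hi - lo)
        upper = lower
    return counts

print("8-bit linear:   ", codes_per_stop(8, lambda x: x))
print("8-bit gamma 2.2:", codes_per_stop(8, lambda x: x ** (1 / gamma)))
print("16-bit linear:  ", codes_per_stop(16, lambda x: x))
```

Linear 8-bit squeezes the deep shadows into a handful of codes, gamma-encoded 8-bit spreads them far more evenly, and 16-bit linear still halves the codes per stop but has plenty to spare even eight stops down, which is why the tone curve matters so much less there.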
Please see the attached GIF.
I stated: "the human eye can distinguish a difference in Lightness (between two adjacent areas) only when the differences is 1% or more." The correct statement (from accepted authorities) refers to a difference in Luminance (the physical intensity) of 1% as the threshold. As a result the data highlighted in the spreadsheet (gif) is incorrect. The actual point occurs in the progression of normalized Luminance not included in this data. Nevertheless the important point is that such a threshold does exist and I will revisit that when I come to the translation of sensor data to digital (file) data.
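For what it's worth, here is a quick check of where that threshold falls on a plain linear 8-bit scale (my own sketch; the 1% figure is the one quoted above, and the spreadsheet values themselves are not reproduced here):

```python
# For linear 8-bit code values, find where the step to the next code value
# stops being a potentially visible (>= 1%) jump in luminance.
threshold = 0.01

for code in range(1, 255):
    relative_step = 1 / code  # (next level - this level) / this level
    if relative_step < threshold:
        print(f"below code {code}, adjacent linear levels differ by more than 1%")
        print(f"from code {code} upward, the steps fall below the 1% threshold")
        break
```

In other words, on a linear scale the lower part of the code range takes steps well above the 1% threshold (risking visible banding), while the upper part takes steps below it, which again points back to why a tone curve is applied before quantizing.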
