1. ## Re: Gamma revisited

Originally Posted by TonyW
Mike, you say that the Lightness is an exponential function of Luminance (intensity).
I thought Poynton's gammaFAQ made it quite clear:

Human vision has a nonlinear perceptual response to brightness: a source having a luminance only 18% of a reference luminance appears about half as bright. The perceptual response to luminance is called Lightness and is defined by the CIE as a modified cube root of luminance:

L* = 116(Y/Yn)^(1/3)-16 for Y/Yn > 0.008856

where Yn is the luminance of the white reference.
Since we're talking CIE here, I wonder why Lightness is apparently being redefined below? I may have missed something - unusual for me
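For the curious, the quoted CIE formula is easy to check numerically. A quick Python sketch (the helper name is my own, not from the FAQ):

```python
# Quick numeric check of the CIE Lightness formula quoted above:
# L* = 116*(Y/Yn)**(1/3) - 16   for Y/Yn > 0.008856

def lightness(Y, Yn=1.0):
    """CIE L* for relative luminance Y/Yn (cube-root branch only)."""
    ratio = Y / Yn
    if ratio <= 0.008856:
        raise ValueError("below the cube-root branch of the CIE definition")
    return 116.0 * ratio ** (1.0 / 3.0) - 16.0

# An 18% grey - Poynton's "appears about half as bright" claim:
print(round(lightness(0.18), 1))  # ~49.5, i.e. roughly 50 on the 0-100 L* scale
print(lightness(1.0))             # 100.0 for the white reference
```

Which does bear out the "18% looks about half as bright" statement.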

It is more like a logarithmic function than an exponential one, but in any case why do you then use the power function relationship instead? Are you just bowing to convention? I know the use of the power function is the usual one, but the power chosen is, from what I have seen quoted, usually the gamma of 1/2.2. Why have you chosen a power of 1/3? Do we know why 1/2.2 is usually used?

A logarithmic relationship is actually quite common. When we talk about the number of stops of aperture, that is actually a logarithmic function of aperture. If we go from f/2 to f/4, for example, we are decreasing the amount of light by a factor of 4, but we count it as two stops. Then if we go 2 stops further we get an aperture of f/8 and reduce the light again by a factor of 4.
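That relationship can be put in a few lines of Python (the function name is illustrative):

```python
import math

# Stops as a base-2 logarithm of the light ratio.
# Going from f/2 to f/4 quarters the light, since the aperture
# area scales with 1/N^2 for f-number N.
def stops_between(f1, f2):
    """Number of stops when changing aperture from f/f1 to f/f2."""
    light_ratio = (f1 / f2) ** 2   # relative amount of light transmitted
    return -math.log2(light_ratio)

print(stops_between(2, 4))  # 2.0 stops (light reduced by a factor of 4)
print(stops_between(4, 8))  # 2.0 more stops
```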

As a pedant from way back, I would say that "oft quoted" is correct without the apostrophe and without the hyphen, although I realise that the hyphen is commonly used. "Oft" is simply an archaic synonym for "often".

2. ## Re: Gamma revisited

Ted, I just read the gammaFAQ. It gives the definition of lightness in terms of the 1/3 power, as you point out. If this is meant to represent the way humans react to luminance, it is inconsistent to say that the perceived lightness is an exponential function of Luminance.

My comment about the relation between stops and aperture was irrelevant I must admit. It certainly was not meant to be a different definition of Lightness.

What I understand from Poynton's description is that gamma correction is not to do with how light intensity is perceived but with how it is related to the behaviour of display devices, which is not quite how I understood it from what I had read here. I am still looking forward to a good clear explanation, but I am rapidly coming to the conclusion that if I ignore all this discussion I can still take photographs, view them, and get them printed well enough. My serious problems are with aesthetics rather than technicalities.

3. ## Re: Gamma revisited

Originally Posted by TonyW
Mike, you say that the Lightness is an exponential function of Luminance (intensity). It is more like a logarithmic function than an exponential one, but in any case why do you then use the power function relationship instead? Are you just bowing to convention? I know the use of the power function is the usual one, but the power chosen is, from what I have seen quoted, usually the gamma of 1/2.2. Why have you chosen a power of 1/3? Do we know why 1/2.2 is usually used?

As a pedant from way back, I would say that "oft quoted" is correct without the apostrophe and without the hyphen, although I realise that the hyphen is commonly used. "Oft" is simply an archaic synonym for "often".
Tony, you are quite right. It should properly be called a logarithmic function. Euler's number is nowhere to be found. The exponent (1/3) as well as the rest of the equation for Lightness (L star) is lifted directly from the Poynton paper which in turn references the CIE.

If you read a bit further in the Poynton paper you will find:
Through an amazing coincidence, vision’s response to intensity is effectively the inverse of a CRT’s nonlinearity. If you apply a transfer function to code a signal to take advantage of the properties of lightness perception – a function similar to the L* function – the coding will be inverted by a CRT.
I love xPat's signature: "Pedantry is not a felony."

4. ## Re: Gamma revisited

Part #2

In this installment I would like to review the mechanics of getting data from the camera's sensor to a digital (raw) file. Please consider that what I write is based on my understanding of the process derived from what I have read (and hopefully understood). It is not meant to be scientifically correct in every detail. That notwithstanding, if there are any serious flaws in my reasoning, I shall appreciate any corrections you may offer.

The sensor is composed of individual pixels (or wells) which, upon exposure to light, collect photons. The number of photons is a function of the light intensity (luminance) reaching each pixel. This number of photons is first translated to a voltage which is then passed to an analog to digital converter.

If one considers the various voltages at each of the pixels in an image collectively they range, on an analog scale, from "zero" (actually the lowest voltage above the noise level) to the maximum that can be generated by a full pixel (well). Since this is analog data, the number of discrete voltages that can be detected and recorded is limited only by the sensitivity (precision) of the voltage detector circuitry. I assume this number is very large (i.e. the precision quite high) based on the fact that most cameras use a 12 bit (or 14 bit) pipeline for quantization of the analog data for digital storage.

Quantization is the process of taking values that are continuous (analog voltages) and condensing the data to an integral number of units (compartments or bins) each of which contains a specified range of analog values. This process is essential to storage of analog values in a digital file. It is exactly the same process used to produce a histogram. An example follows:

Suppose we have a sensor whose pixels can generate (all) voltages in the range of Vmin to Vmax. In this example we are limited to 8 bits per pixel to store the data. That means we have a total of 256 discrete (digital integer) values which can be written to the file. Each of these values can be thought of as an interval (bin) which encompasses a range of voltages and condenses that range to a single (8 bit) integer.

Further (for this example only) the size (width) of all voltage intervals must be equal. This is equivalent to stating that the analog to digital conversion must be linear (which, in fact, is the case for most digital cameras). To elaborate: each interval will have a width of Vmax/256. Each pixel having a voltage between Vmin and Vmax/256 is assigned a digital value of zero (bin #0). All voltages greater than Vmax/256 but less than or equal to 2 x (Vmax/256) are assigned a digital value of 1 (bin #1). And so on up to voltages greater than 255 x (Vmax/256) but less than or equal to Vmax, which are assigned the digital value of 255 (bin #255).
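The binning described above can be sketched as follows (Python; a simplified model using 256 equal-width bins, ignoring Vmin and all real ADC error sources):

```python
def quantize(v, vmax, bits=8):
    """Map an analog voltage in (0, vmax] onto 2**bits integer bins (linear ADC sketch)."""
    levels = 2 ** bits
    code = int(v / vmax * levels)      # linear binning: equal-width intervals
    return min(code, levels - 1)       # clamp so v == vmax lands in the top bin

# Two easily distinguishable analog voltages fall into the same 8-bit bin:
print(quantize(0.0040, 1.0))  # bin 1
print(quantize(0.0060, 1.0))  # bin 1 as well - the difference is discarded
```

The same function with `bits=12` gives the 0-4095 case discussed later in the thread.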

All of this is illustrated on Sheet #2 of the linked spreadsheet.

https://www.dropbox.com/s/vf6a948dsn...%231.xlsx?dl=0

Examine the yellow highlighted intervals in columns A & B. This tells us that all pixels generating a voltage greater than 0.0039 Vmax but less than or equal to 0.0078 Vmax are assigned the digital value 1 for storage. Now if we assume a reasonable degree of precision of the voltage detector (at least 0.0001 Vmax), then we are pooling several easily distinguishable analog voltages (say 0.0040, 0.0041, ... 0.0078 Vmax), and their associated Luminances, all together with the same 8 bit digital value. To belabor the point: pixel A with a voltage of 0.004 Vmax would be indistinguishable from pixel B with a voltage of 0.006 Vmax once their data had been digitally encoded. Plainly we are discarding valuable information here.

Let me return to a point I tried (not exactly correctly) to make in Part #1. It is generally accepted that the human eye can distinguish a difference in Luminance (between two adjacent areas) only when the difference is 1% or more.
To emphasize this I show in column C the percent increase in voltage (luminance) at each interval relative to the immediately preceding interval. Note that this difference does not drop below 1% until a digital value of 106 is reached. Thereafter all the differences between digital values are less than 1%. Beyond that point we are recording distinctions finer than the eye can use.
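Column C's calculation can be approximated with a simple endpoint ratio (Python sketch; note that with this simple definition the crossover lands near code 100, while the linked spreadsheet, which takes the interval boundaries slightly differently, reports 106):

```python
# Relative step size between adjacent codes in a linear 8-bit encoding.
# With equal-width bins, going from code k to code k+1 raises the
# represented voltage by a factor of (k+1)/k, i.e. a (100/k) percent step.
def percent_step(k):
    return 100.0 / k

# First code where the step to the next code falls below the ~1% threshold:
first = next(k for k in range(1, 256) if percent_step(k) < 1.0)
print(first)  # 101 with this simple endpoint ratio
```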

Next chapter, gamma and more bits to the rescue.

5. ## Re: Gamma revisited

I think you have that correct, Mike, but to clarify: a voltage in a pixel is quantified digitally to some precision, neglecting any sources of inaccuracy - a perfect a to d, in other words. Where specs are available, all sources of error will be specified separately; +/- 1 bit is often there as one source of error. Then there are linearity and quantifying errors.

As far as gamma discussions are concerned, grey scale should be fine. Colour adds other complications, even on Ted's Foveon sensor, but a grey-scale image would be processed in much the same way as far as tone remapping goes.

These values are then scaled to account for the colour of the filter over them. The camera profile does this, also accounting for the apparent colour temperature when the shot was taken. They may also be "scaled" prior to this happening via a curve of arbitrary shape; this seems to be a Nikon thing. They are then displayed at some maximum luminance level on a monitor for processing, but the bit precision from the basic data is still retained. The excess precision is effectively fractional bits of the display bit depth.

They might also be converted to a jpg in the camera. This involves passing the values through a tone curve that remaps them. The same thing happens when images are displayed via a raw converter. This area touches on why I mentioned some luminance level. There isn't much info about this. Adobe, for instance, state that tone mapping is used and that there is no need for users to worry about the detail. They mean tone mapping literally - a tone is mapped to another.

Typically cameras of late will map 9 stops into a jpg by compressing the dark and highlight end tonal range IF NEEDED, as some shots do not need that range and so would sit in the linear part of the curve used between the compressed areas, or go partly into them. This is how raw converters function as well. I do know of one that has an option not to do this. The images appear to be very dark, but the bit precision in them is still present so they can be adjusted manually.

This seems to be how PP packages keep the data, but they also manage the colour gamuts that are used. Often the colour gamut used internally is much larger than the one being displayed.

Gamma is added when the image is displayed. It seems there is some 32-bit floating point image format where it's encoded directly into the image - mentioned in the wiki link if I remember correctly. This is probably because the format has sufficient accuracy to do this well, whereas 8 bit may not - hence higher end monitors processing the image at a greater bit depth to avoid rounding errors. Another interesting area, as an analogue display can do that without much of a problem because it doesn't have step values. To compete, digital displays should be capable of processing with at least 10-bit precision for 8-bit output, but it seems few in common use do.
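A quick way to see the 8-bit rounding problem John describes (Python sketch; illustrative gamma of 2.2, plain rounding, no dithering):

```python
# Why applying a gamma curve at 8-bit precision loses levels:
# the encode/decode round trip through 8-bit codes collapses distinct values.
GAMMA = 2.2

encode = [round(255 * (v / 255) ** (1 / GAMMA)) for v in range(256)]  # 8-bit gamma LUT
decode = [round(255 * (c / 255) ** GAMMA) for c in range(256)]        # 8-bit inverse LUT

round_trip = [decode[encode[v]] for v in range(256)]
print(len(set(round_trip)))  # fewer than 256 distinct output levels survive
```

Processing the intermediate values at 10 bits or more is exactly what avoids this collapse.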

John
-

6. ## Re: Gamma revisited

The light response of the human eye is a power function and not a logarithmic one. If someone wants a nice anecdotal run-down of how logs became associated, try this.

http://www.skyandtelescope.com/astro...nitude-system/

Somehow I can't see the real behaviour being read across to EVs, as film and sensors are a different kettle of fish. There seem to be two variants of the eye's power curve as well. Doubt if anyone wants to read the lot; they might get bored before they got to this bit. The laws can be found. The word conjecture may crop up, as no one so far can measure what each person is actually seeing. They can only compare people's views on the subject when they are actually looking at the aspect being questioned.

It is important to note that the scale is logarithmic: the relative brightness of two objects is determined by the difference of their magnitudes. For example, a difference of 3.2 means that one object is about 19 times as bright as the other, because Pogson's Ratio raised to the power 3.2 is approximately 19.05. A common misconception is that the logarithmic nature of the scale is because the human eye itself has a logarithmic response. In Pogson's time this was thought to be true (see Weber-Fechner law), but it is now believed that the response is a power law (see Stevens' power law).[11]
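The arithmetic in that quoted passage checks out (a quick Python verification):

```python
# Pogson's ratio: 5 magnitudes = a factor of exactly 100 in brightness,
# so one magnitude corresponds to 100 ** (1/5) ~ 2.512.
pogson = 100 ** (1 / 5)
print(round(pogson, 3))         # 2.512
print(round(pogson ** 3.2, 2))  # 19.05 - a 3.2-magnitude difference
```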
-
John
-

7. ## Re: Gamma revisited

Originally Posted by mikesan
Part #2
Next chapter, gamma and more bits to the rescue.
Sorry, Mike, I'm going to take a general exception to the terminology used in the description from sensor to file. Firstly, it is clearly not written by an electronics engineer nor by an optical physicist - and I mean no disrespect to you or your worthy task. I could go through the whole and make suggested changes, but that would be best not done here because of "circling sharks". PM me if you would like to correspond.

As an example. I'll comment on the following:

The sensor is composed of individual pixels (or wells) which, upon exposure to light, collect photons.[1]

The number of photons is a function of the light intensity (luminance) reaching each pixel.[2]

This number of photons is first translated to a voltage which is then passed to an analog to digital converter.[3]
[1] In the literature, "pixel" is oft (!) reserved for graphic output devices - "sensel" is more often used for sensor elements - and "well" is not an alternative for "pixel" in any case. The so-called well is only part of a photodiode, properly called the depletion zone, which is the area that collects photons near the p/n junction.

[3] The captured photons are first converted (not "translated") to electrons, not a voltage. The electrons move in the semiconductor material due to bias, and moving electrons are in fact an electrical current. Only when this current flows into the capacitance of the p/n junction is a charge created: Q = current x time (yep - exposure time). That charge results in a voltage across the capacitor: V = Q/C. There's much more, but I'll stop there.
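For a feel of the magnitudes involved, here is a back-of-envelope version of that charge-to-voltage step (Python; the electron count and the 30 fF sense-node capacitance are made-up illustrative values, not from any particular sensor):

```python
# Back-of-envelope charge-to-voltage conversion: Q = n * e, V = Q / C.
E = 1.602e-19                 # electron charge, coulombs

n_electrons = 20_000          # photo-electrons captured during the exposure (assumed)
well_capacitance = 30e-15     # 30 fF sense-node capacitance (assumed)

charge = n_electrons * E                # Q = n * e (equivalently Q = current x time)
voltage = charge / well_capacitance     # V = Q / C
print(round(voltage * 1000, 1))         # ~106.8, i.e. roughly 0.1 V of signal
```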

My thought is, if the scope of part II is from the sensor to image file, it might be better to shorten the light-to-voltage bit down to one very general sentence.

As to the description of what I take to be analog-to-digital conversion (it doesn't actually say up front), are you aware that the ADC receives a very high frequency voltage waveform, not a continuous voltage?

You might want to consider using a 12-bit (0-4095) example for the quantization - 8-bit ADCs in digital cameras are quite rare and people might confuse your description for one about color bit depth.

As to what appears to be an attempt to encompass all cameras in one 'pipeline' description: while laudable, that is a mighty task indeed! Might be better to use "a typical camera" or something.

Let's not mention Sigma . .

8. ## Re: Gamma revisited

I tried to find a page from a decent source that elaborated on why a gamma of 2.2 is used. I do know there is one, after a fashion, as it mentions different ideal values of gamma according to ambient light levels, but I can't find it. This paper illustrates a point - ambient surroundings, their reflectance and the levels of light they receive - and it mentions a whole lot more. One of the problems with gamma and colour is that it's a very specialised area that has its own way of expressing things.

http://www.w3.org/Graphics/Color/sRGB.html

Part of the confusion in this is that PC's were in a right mess initially. There is plenty of old info on that site indicating that as well.

As a result of the mess, some images can contain their own gamma information. It still works, as this page shows. Well, it does on my set-up

http://www.w3.org/Graphics/PNG/all_seven.html

This one shows the problems with 8-bit LUTs. Despite this, it isn't difficult to calibrate a monitor to very low, virtually imperceptible levels of colour error.

http://www.graphics.cornell.edu/~wes...mma/gamma.html

John
-

9. ## Re: Gamma revisited

As Ted says, it's good (if not always easy) to get the details right. I mean absolutely no criticism of Mike nor of his goal of creating a technologically sound explanation of gamma and its use in the digital image capture-processing-output workflow.

Photographers don't need to understand much technology to cope with gamma (and colour management more or less obviates the need to get one's hands dirty anyway, as gamma is handled transparently). However, I'm fully with Mike that for a scientist or engineer (I'm one - though not a specialist in optics) it's interesting to get into the technical detail.

The difficulty is that many of the apparently-scientific references on gamma are not quite right, so I reckon you'll probably end up doing quite a bit of reading in order to distinguish the right from the not-quite-right.

Interesting that the Poynton FAQ you referenced earlier is quite old (1998), and so explains output gamma in terms of CRTs. The fact that the CRT's power-law response was inherently roughly the inverse of the perceptual response of our eyes meant that a perceptually-based curve applied during encoding killed two birds with one stone. It matched the perceptual response of our eyes for optimal use of bits when encoding, and roughly preconditioned the signal for output to a CRT. That dual purpose (and the fact that some explanations rather merge the two) leads to some of the major misunderstanding of gamma correction, in my (inexpert) view.

10. ## Re: Gamma revisited

Originally Posted by xpatUSA
Sorry, Mike, I'm going to take a general exception to the terminology used in the description from sensor to file. Firstly, it is clearly not written by an electronics engineer nor by an optical physicist - and I mean no disrespect to you or your worthy task. I could go through the whole and make suggested changes, but that would be best not done here because of "circling sharks". PM me if you would like to correspond.
Ted, I very much appreciate your interest and remarks. Clearly I am neither an electronics engineer nor an optical physicist. I am thus handicapped by the limited knowledge I have.

[1] In the literature, "pixel" is oft (!) reserved for graphic output devices - "sensel" is more often used for sensor elements - and "well" is not an alternative for "pixel" in any case. The so-called well is only part of a photodiode, properly called the depletion zone, which is the area that collects photons near the p/n junction.

[3] The captured photons are first converted (not "translated") to electrons, not a voltage. The electrons move in the semiconductor material due to bias, and moving electrons are in fact an electrical current. Only when this current flows into the capacitance of the p/n junction is a charge created: Q = current x time (yep - exposure time). That charge results in a voltage across the capacitor: V = Q/C. There's much more, but I'll stop there.
From my perusal of the literature, the terms "pixel" and "sensel" are often used interchangeably. In any case it is far above my pay grade to attempt a description of the various elements of a photodiode or their respective functions. I know I have indulged in a lot of generalization in my narrative, but I hope that none has seriously broken the chain of logic which I set out to construct.
Again, I am grateful to you for filling in the details, some of which I was already aware and many of which I have never considered.

My thought is, if the scope of part II is from the sensor to image file, it might be better to shorten the light-to-voltage bit down to one very general sentence.

As to the description of what I take to be analog-to-digital conversion (it doesn't actually say up front), are you aware that the ADC receives a very high frequency voltage waveform, not a continuous voltage?

You might want to consider using a 12-bit (0-4095) example for the quantization - 8-bit ADCs in digital cameras are quite rare and people might confuse your description for one about color bit depth.

As to what appears to be an attempt to encompass all cameras in one 'pipeline' description: while laudable, that is a might task indeed! Might be better to use "a typical camera" or something.

Let's not mention Sigma . .
The basic thrust of Part 2 was to describe and emphasize the process of quantization as it applies to the A/D conversion and how it can affect the loss of information in the transition. I hope that this point gets across. In the next episode I will discuss the effect of bit depth on the quantization process. I hope you will comment.

11. ## Re: Gamma revisited

Originally Posted by mikesan
Ted, I very much appreciate your interest and remarks. Clearly I am neither an electronics engineer nor an optical physicist. I am thus handicapped by the limited knowledge I have.

The basic thrust of Part 2 was to describe and emphasize the process of quantization as it applies to the A/D conversion and how it can affect the loss of information in the transition. I hope that this point gets across. In the next episode I will discuss the effect of bit depth on the quantization process. I hope you will comment.
Yes, I understood the description and its purpose but, generally in electronics, the process is called "analog to digital conversion", not quantization, and the chips themselves are always called "analog to digital converters", not "quantizers". That word may well have a proper place in Quantum Physics, perhaps explaining why a photon whacking a nucleus cannot result in 1.27845 electrons. (You may guess from this that I regard quantization as an ugly word which should be kept out of photographic literature at all costs )

Still you did mention that the article is for your benefit, not ours, and simply put up for comment - so if you like it, by all means use it, eh?

12. ## Re: Gamma revisited

Originally Posted by xpatUSA
Yes, I understood the description and its purpose but, generally in electronics, the process is called "analog to digital conversion", not quantization, and the chips themselves are always called "analog to digital converters", not "quantizers". That word may well have a proper place in Quantum Physics, perhaps explaining why a photon whacking a nucleus cannot result in 1.27845 electrons. (You may guess from this that I regard quantization as an ugly word which should be kept out of photographic literature at all costs )

Still you did mention that the article is for your benefit, not ours, and simply put up for comment - so if you like it, by all means use it, eh?

I did not mean to suggest the term "quantization" as a replacement for (or being synonymous with) "analog to digital conversion". However (to my mind) quantization is an integral step in understanding A/D conversion. I do not know at what stage it occurs between reading the voltages from the chip and the output of the A/D converter (does it occur on the chip itself?). Perhaps you can elaborate on that and on what word you would prefer to describe the process.

I will say that I conceptualized the process long before I had a good term to describe it. Then, at some point, somewhere in cyberspace, I ran across the following:

Quantization (often called quantizing, a perfectly-good synonym) refers to the process of taking a quantity that is continuous (that is, can take on any value we can imagine, albeit perhaps over a finite range) and restating it as if it were composed of an integral number of some unit (that unit being considered a quantum, thus the name).
Sorry I cannot recall the source; with apologies to the author and all Quantum Physicists.

13. ## Re: Gamma revisited

Originally Posted by xpatUSA
Yes, I understood the description and its purpose but, generally in electronics, the process is called "analog to digital conversion", not quantization, and the chips themselves are always called "analog to digital converters", not "quantizers". That word may well have a proper place in Quantum Physics, perhaps explaining why a photon whacking a nucleus cannot result in 1.27845 electrons. (You may guess from this that I regard quantization as an ugly word which should be kept out of photographic literature at all costs )

Still you did mention that the article is for your benefit, not ours, and simply put up for comment - so if you like it, by all means use it, eh?
Sorry Ted but I can't agree with you there. In my experience in the world of digital transmission, terms such as quantisation and quantisation noise are commonly used in relation to A/D conversion.

Dave

14. ## Re: Gamma revisited

Originally Posted by mikesan

I did not mean to suggest the term "quantization" as a replacement for (or being synonymous with) "analog to digital conversion". However (to my mind) quantization is an integral step [for me] in understanding A/D conversion.
OK

I do not know at what stage it occurs, between reading the voltages from the chip to the output of the A/D converter. (does it occur on the chip itself?)
If "it" is still quantization per se, I cannot say. Perhaps Dave can help with that?

Perhaps you can elaborate on that and what word you would prefer to describe the process.
Stretching my memory a bit but, IIRC, the input voltage is scaled and applied to a comparator. The ADC output is actually converted to an analog voltage and applied to the other comparator input as negative feedback. Successive approximation is applied until the output of the comparator says its inputs are equal, at which point the ADC output is valid.

That is grossly simplified but may still pertain in principle. One of my cameras uses the Analog Devices AD9235 and I've put the data sheet here for your reading pleasure, enjoy.
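The successive-approximation loop described above can be modelled in a few lines (Python; a toy model of the feedback loop only, not of any particular chip's architecture):

```python
# Toy successive-approximation ADC: the candidate code is fed back through
# an internal DAC and compared against the input, one bit at a time,
# from the most significant bit down.
def sar_adc(v_in, v_ref=1.0, bits=8):
    """Return the digital code for v_in using successive approximation."""
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)                 # tentatively set this bit
        dac_out = trial / (1 << bits) * v_ref     # internal DAC output for the trial code
        if dac_out <= v_in:                       # comparator decision
            code = trial                          # keep the bit
    return code

print(sar_adc(0.5))    # 128 - half of full scale
print(sar_adc(0.25))   # 64
```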

15. ## Re: Gamma revisited

Originally Posted by xpatUSA
OK

Stretching my memory a bit but, IIRC, the input voltage is scaled and applied to a comparator. The ADC output is actually converted to an analog voltage and applied to the other comparator input as negative feedback. Successive approximation is applied until the output of the comparator says its inputs are equal, at which point the ADC output is valid.

That is grossly simplified but may still pertain in principle. One of my cameras uses the Analog Devices AD9235 and I've put the data sheet here for your reading pleasure, enjoy.
Oversimplified perhaps, but it is still over my head. Thanks anyhow.

16. ## Re: Gamma revisited

Ted you may have a different interpretation of the word quantisation to me but for me, in this context, it is simply the process of dividing up an analogue voltage range into a series of small ranges and assigning a digital value to each range. The number of steps is determined by the bit depth of the A/D converter.

Mike it all happens as part of the operation of the A/D converter. Modern CMOS sensors have the A/D converter built in to the same chip as the sensels. In fact, the latest sensor chips have one A/D converter for each column of sensels to facilitate faster read-out. The outputs from the multiple A/D converters are then multiplexed together into one data stream.

But we don't need to know the intimate details of how A/D converters work ! (And I couldn't help with that anyway)

Dave

17. ## Re: Gamma revisited

I don't think much of this area has anything to do with gamma, and too much detail is dangerous. For instance, I am not going to try to check whether Ted has described CCD operation correctly. CMOS generally uses a different principle, and his sensor uses an entirely different principle again - irrelevant as far as gamma is concerned. The post was too verbose; that is why I posted my comment tongue in cheek, going a little too terse on purpose.

ie
I think you have that correct Mike but to clarify a voltage in a pixel is quantified digitally to some precision neglecting any sources of inaccuracy - a perfect a to d in other words. Where specs are available all sources of error will be specified separately. +/- 1 bit is often there as one source of errors. Then there are linearity and quantifying errors.
It was incorrect of me to use "a to d", as abbreviations can cause problems for people who don't know what they mean. If a reader wants to know more about these, or about how the pixels generate a voltage in response to light, they need to research that separately. They will find different approaches can be used. The only thing in this area of importance to gamma is that it's a digital signal, a number, so some brief elaboration on that should be added - an introduction to a subject that will come up later, so some understanding is required. This approach will also cover Ted's sensor.

Some comments about a/d errors may be in order, but again they don't really have any direct bearing on gamma, so all that is needed is a comment about the fact that they will be there. People are inclined to assume a perfect world, and in real terms, in many areas, it isn't.

John
-

18. ## Re: Gamma revisited

As John says, I think the first two parts of Mike's tutorial go into a lot of depth which might be beyond what is strictly essential to an understanding of gamma. I certainly wouldn't discourage Mike from a more wide-ranging view, which could be extremely interesting and valuable.

When Mike gets to the parts more directly about gamma curves, my suggestion would be to make sure the different uses of gamma curves in photography are clearly distinguished, which include:
1. To provide perceptually-uniform coding (to minimise the number of bits used for a given noise level)
2. To compensate for the non-linear response of monitors
3. To apply a small contrast boost in terms of a "viewing gamma" to compensate for:
• The lower contrast of the monitor or print, compared to the original scene
• The different viewing conditions of the monitor or print, compared to the original scene

Thanks to John for the link to the sRGB paper (http://www.w3.org/Graphics/Color/sRGB.html) which describes the "viewing gamma" very well.

With modern systems (especially colour-managed systems), the tone curve applied for encoding efficiency, the tone curve applied to compensate for a non-linear output device, and any tone curve for a contrast boost are treated as separate processes. Historically, electronics were analogue, and each processing step was a major cost and could introduce additional noise. Even in the early days of digital, extra processing steps were avoided where possible. As a result, with careful choice of gamma, the TV systems (for which gamma curves were probably first used) could serve all three purposes with one judiciously chosen tone curve. But they are quite separate purposes.
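The separation of those purposes can be illustrated with a minimal sketch (Python; illustrative exponents, not taken from any particular standard):

```python
# Three conceptually separate operations in a modern pipeline.

def encode_perceptual(linear, g=1 / 2.2):
    """Perceptually-motivated encoding curve applied when storing the image."""
    return linear ** g

def compensate_display(signal, display_gamma=2.2):
    """What a power-law display itself does to the signal it receives."""
    return signal ** display_gamma

def viewing_boost(linear, boost=1.125):
    """Small end-to-end contrast boost for dim viewing surroundings (illustrative)."""
    return linear ** boost

# With no viewing boost, encoding and display exactly cancel,
# leaving the displayed luminance proportional to the scene luminance:
scene = 0.18
shown = compensate_display(encode_perceptual(scene))
print(round(shown, 3))  # 0.18 - the encode and display curves cancel
```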

19. ## Re: Gamma revisited

Originally Posted by ajohnw
I don't think much of this area has anything to do with gamma and too much detail is dangerous. For instance I have a feeling that I am not going to try and check that Ted has described ccd operation. CMOS generally uses a different principle. His sensor uses an entirely different principle. Again irrelevant as far as gamma is concerned. The post was too verbose that is why I posted my comment tongue in cheek going a little to terse on purpose.
Off-topic, but just to be clear John, I was not referring to any particular type of sensor. They all use p/n junctions, so my description of light-to-voltage conversion is quite general and applies to CCD, CMOS, and even Foveon layers - they all use entirely the same principle.

If a reader wants to know more about these or how the pixels [sensels] generate a voltage in response to light they need to research that separately. They will find different approaches can be used.
What different approaches, other than photon capture in a depletion zone causing electrons to charge a capacitance, are used in digital cameras?

20. ## Re: Gamma revisited

Maybe I should apologise, as I wouldn't want to discourage either. I do have a bit of an attitude about any form of documentation sticking to the point and not straying from it. It comes from having read way too much of it; content compared with word count has deteriorated remarkably over the years. In fact in some areas, when buying books for instance, the older the better, though the going is likely to be tougher.

When people mention analogue and digital there is a tendency to assume digital has to be better. Both have their advantages and disadvantages, and perhaps the major advantage in the digital area is that software can be applied to the numbers once the basic signal has been converted. Digital has noise problems when the measurements are made. It also has inherent measuring errors and the bit depth aspect, which can be another problem. In some ways these combine to give the same sorts of problems that analogue has. The other aspect is the feasibility of accomplishing something in one domain or the other - a rather grey area that usually should come down to value judgements but sometimes doesn't.

John
-
