
Thread: Gamma revisited

  1. #81
    dje
    Join Date: May 2011
    Location: Brisbane Australia
    Posts: 4,636
    Real Name: Dave Ellis

    Re: Gamma revisited

    Quote Originally Posted by mikesan
    Dave,
    Many thanks for your note (post #69). It is very informative, but I think it goes a bit beyond what I have so far digested.
    I was particularly interested in your case B. I am guessing that this pathway describes what occurs when writing a tiff or a jpeg file (or what occurs in the output of a raw converter). My understanding is that the linear digital data (pretty much just as it comes off the A/D converter) is written directly to the RAW file without any non-linear manipulation. If this is not the case then my logic is in big trouble. I have concentrated my attention on what happens to the raw data up to (but not including) the point where it is used to create a viewable image. May I ask how you have generated the data depicted in the graph?

    Simon & John,
    Very much appreciate your input to the thread. My grasp of the subject is still insufficient to make any useful comments or even formulate some intelligent questions. That will come, I hope.
    Mike, I generated the curves in an Excel spreadsheet. You can access it here; it's not dissimilar to yours.

    Yes, my Case B would be for the output of a raw converter or for conversion to jpeg in camera. The raw data is linear as produced in camera; however, some manufacturers such as Nikon apply some compression to it before export (which can include a non-linear curve). As far as I know, in such cases the raw file is decompressed back to a linear state when it is imported into a raw converter. The compression is quite separate from gamma encoding and is used to create smaller files.

    Dave

  2. #82

    Re: Gamma revisited

    Quote Originally Posted by Simon Garrett
    If you mean that tone mapping is to do with bit depth, then to my understanding not directly, but maybe we're talking at cross purposes. I think we've covered all the ground, so I'll leave it at that.
    Yes, we certainly have, and much of it is not relevant. When gamma correction was first used, no one had ever heard of bit depth; there were no digital devices of any kind.

  3. #83
    dje
    Join Date: May 2011
    Location: Brisbane Australia
    Posts: 4,636
    Real Name: Dave Ellis

    Re: Gamma revisited

    Quote Originally Posted by Simon Garrett
    Dave, interesting analysis. There's another step, I think. What happens in cameras (when creating jpegs) is as in your case B: A/D conversion and then application of a tone curve. However, the captured digital data is 12-14 bits (for typical current sensors). The tone curve is applied, and then the data is reduced to 8 bits.

    This is the extra step: reduction from 14 bits to 8 after the tone curve is applied. As a result, the tone curve gives a more perceptually even distribution of steps. In terms of improving perceptual quality (reducing quantisation noise at the black end), there would be little point in applying the tone curve if the conversion to 8 bits were done before the curve was applied.
    Simon

    I think I've got to the bottom of this, and I think you are correct. I plotted out an 8-bit gamma-encoded curve using Excel and then did a bit-depth reduction to 5 bits, using simple rounding to get the new integer values. The 8-bit data is shown by the red markers and the 5-bit data by the blue. You can see from this that the 5-bit data effectively has more samples at the bottom end of the range than at the top. A bit reduction of linear data is shown in the second curve, and this shows uniformly spaced samples.

    Hope this is of interest to you too, Mike. Basically, to summarise: when you apply a gamma curve digitally to a 14-bit raw file, the sample points are still uniformly distributed. It's only when you reduce the bit depth to 8 for saving as a jpeg that the samples become biased towards the bottom end of the range.
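
    For anyone who would rather reproduce this in Python than Excel, here is a minimal sketch of the same exercise (counting how many of the 32 codes land in the darker half of the range is just one way of showing the bias):

    import numpy as np

    # Linear input levels, normalised to 0..1
    x = np.arange(256) / 255.0

    # 8-bit gamma-encoded curve (exponent 1/2.2)
    enc8 = np.round(255 * x ** (1 / 2.2)).astype(int)

    # Reduce to 5 bits by simple rounding
    enc5 = np.round(enc8 * 31 / 255).astype(int)

    # The same 5-bit reduction applied to linear 8-bit data, for comparison
    lin5 = np.round(np.arange(256) * 31 / 255).astype(int)

    # The gamma-encoded data devotes far more of the 32 available codes
    # to the darker half of the original range than the linear data does
    dark = x < 0.5
    print(np.unique(enc5[dark]).size)   # about 24 codes
    print(np.unique(lin5[dark]).size)   # 16 codes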

    Dave

    [Attached charts: the 8-bit gamma-encoded curve with its 5-bit reduction (red and blue markers), and the same 5-bit reduction applied to linear data]
    Last edited by dje; 24th January 2015 at 07:51 AM.

  4. #84

    Join Date: May 2012
    Location: Southern California
    Posts: 79
    Real Name: Mike

    Re: Gamma revisited

    [tic] To all my fans who are sitting on the edge of their seats in anticipation of Part 4 of my saga:[/tic]**

    I have not given up, but I realized that I needed a pause to consider what Dave (posts #69 & 83) and Simon (post #70) have written. Also I have been spending several hours each day watching the Australian Open. I would love to visit Melbourne.

    Looking a bit ahead, in my next episode I intend to explore how increased bit depth is used to produce a linear quantized digital output that does not lose valuable information (i.e. one that retains the granularity needed for a perceptually uniform result). It is clear that I must eventually contend with the question of how 12-14 bit data is then reduced to 8 bits for further processing without losing the enhanced precision.

    Dave suggests that this is achieved by applying a gamma adjustment to the linearly acquired digital data. From studying his spreadsheet (post #69) the math is clear, but the logic escapes me, particularly the switch of gamma from 2.2 (for analog values) to 1/2.2 (to obtain digital values). I need some help.

    Let me explain my thought process for converting analog (voltage) data to digital.

    1. The bit depth of the A/D converter determines the maximum number of intervals (i) into which the analog data will be quantized. Vmax is the maximum voltage that can be produced by the sensel (full well).

    a) If quantization is to be linear, the width of each interval will be the same (Vmax/i).
    b) For non-linear quantization, the application of gamma creates intervals of unequal size.

    The voltage bounds for each interval are thus determined (in the A/D converter). Each interval is identified by a digital integer (0 to i).

    2. For each sensel, read voltage from chip and determine into which interval it falls. Write the corresponding integer to the file.

    One then ends up with a file containing only integer values. I am assuming this is what occurs in case B, the first block of Dave's flow diagram. The second block (gamma conversion of the digital data) must (I guess) occur in software, after the digital data have been written to the file. It may be assumed that the software "knows" the width of the voltage intervals used to create the data and thus the voltage boundaries of the linear quantization. What happens next? How is gamma applied to this digital data?
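
    To make steps 1 and 2 concrete, here is a small sketch of the linear path only, before any gamma is involved (the 5-bit depth and 1.0 V full-well voltage are just example assumptions):

    # Example assumptions: a 5-bit A/D converter and a 1.0 V full-well voltage
    i = 2 ** 5 - 1          # highest interval number (31)
    v_max = 1.0
    width = v_max / i       # step 1a: equal interval width for linear quantization

    def linear_code(voltage):
        # Step 2: find which interval the sensel voltage falls into
        return min(int(voltage / width), i)

    print(linear_code(0.40 * v_max))   # -> 12
    print(linear_code(0.99 * v_max))   # -> 30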

    Based on the Excel file which Dave kindly supplied, the linear voltage boundaries (normalized, 0 - 1) are then adjusted using a gamma of 1/2.2. This produces a new set of voltage boundaries, but what data is included in these intervals and how is it sorted? The analog data (voltage read from the chip) is no longer available. You can see I'm a bit lost. Perhaps I might find some enlightenment if Dave would supply the Excel spreadsheet used to construct the graphs in post #83, but I am not optimistic.

    To help myself I have copied the Excel spreadsheet from Dave's post #69 and added sheet #2. You can find it here:

    https://www.dropbox.com/s/ec3sbz0vfz..._mod.xlsx?dl=0

    Column A: Integer values for 5 bit data

    Column B: Voltage bounds for linear quantization (as %Vmax)

    Column C: Voltage bounds for gamma (2.2) adjusted linear data (as %Vmax). This is the same data listed by Dave in sheet #1, col. B; except he has presented it as a factor of 31.

    Column D: Quantization interval (bin) width for the gamma (2.2) adjusted bounds. I note that while the width resulting from linear quantization is a constant 3.2% Vmax, here the intervals begin much narrower and gradually increase, only exceeding the linear intervals at bin #17. To me this is the most important feature of gamma adjustment: it increases the precision (finer granularity) with which the analog data is captured at the lower end of the voltage range, where the eye is more sensitive to smaller relative changes in luminance (perceptual uniformity). Note that this effect is only realized with gamma values > 1.

    Column E: Voltage bounds for gamma (1/2.2) adjusted linear data (%Vmax). Dave makes the same calculation (sheet 1, col C) but expresses the result as a factor of 31, rounded to an integer. I have no clue of the underlying logic. Clearly this is where I get lost.

    Column F: For comparison to col. D, I list the "quantization interval widths" calculated from the bounds given in col. E. I note that the first interval is very wide and thereafter the intervals decrease gradually. Seems that this is a consequence of gamma < 1.

    Just for fun I have graphed the quantization interval widths (as %Vmax) for both conversions. See Chart 1 of the spreadsheet. I can draw no conclusion from this data. Clearly I am now a bit lost and need some help.
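
    For readers following along without the spreadsheet, here is a quick Python check of Columns B-F as described above (normalised bounds, 5-bit codes 0-31); it reproduces both the bin #17 crossover and the very wide first interval:

    import numpy as np

    k = np.arange(32)
    linear_bounds = k / 31                      # Column B
    gamma22_bounds = (k / 31) ** 2.2            # Column C
    gamma_inv_bounds = (k / 31) ** (1 / 2.2)    # Column E

    width_d = np.diff(gamma22_bounds) * 100     # Column D, as % Vmax
    width_f = np.diff(gamma_inv_bounds) * 100   # Column F, as % Vmax

    # First gamma-2.2 bin to exceed the constant linear width of about 3.2% Vmax
    print(np.argmax(width_d > 100 / 31) + 1)    # -> 17

    # The first gamma-1/2.2 interval is very wide, then the widths shrink
    print(round(width_f[0], 1))                 # -> 21.0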

    **[tic] = Tongue in cheek. For the benefit of any to whom that was not obvious.

  5. #85

    Re: Gamma revisited

    Gamma is the exponent of the luminance transfer function of the output device. Input to the transfer function is reduced by taking the corresponding root. This is done to accommodate dynamic range and the response of the eye. Bit depth schmidt depth.
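
    As a minimal numerical sketch of that round trip (using 2.2 as the example exponent):

    gamma = 2.2       # example exponent of the display's luminance transfer function
    linear = 0.18     # example relative scene luminance (a mid grey)

    encoded = linear ** (1 / gamma)   # input "reduced by taking the corresponding root"
    displayed = encoded ** gamma      # the display's transfer function undoes it

    print(round(encoded, 3))     # 0.459 -- dark tones get a larger share of the signal range
    print(round(displayed, 3))   # 0.18  -- the round trip restores the original luminance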

  6. #86
    dje
    Join Date: May 2011
    Location: Brisbane Australia
    Posts: 4,636
    Real Name: Dave Ellis

    Re: Gamma revisited

    Mike, hope you are enjoying the tennis; much more relaxing than trying to figure out gamma!

    To answer your specific questions regarding my first spreadsheet:

    For case A, i.e. gamma applied before the A/D, the values in column A are the input values to the A/D converter and the values in column B are the original signal input values to the gamma encoder. Col B uses an inverse of the gamma encoding formula because it is an inverse calculation, i.e. calculating input from output. So Col B is the original signal input value you would need to get outputs from the gamma encoder equal to the values in Col A. The blue curve is a plot of Col A (vertical) against Col B (horizontal). The output of the A/D converter will be a quantized or rounded version of its input (but rounding is not necessary in my calcs because I've used integer input values anyway).

    For case B, i.e. gamma applied digitally, the values in Col A are the original input signal (which is the same as the A/D converter output except that it is analogue rather than digital). The values in Col C are the A/D output values passed through the gamma encoding function and rounded to the nearest integer value. The red curve is a plot of Col C (vertical) vs Col A (horizontal).
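
    For anyone who finds formulas easier to follow than prose, here is a short Python rendering of those two columns; it assumes the 0-31 scaling Mike described in post #84 and a gamma of 2.2:

    import numpy as np

    col_a = np.arange(32)     # Col A: the values 0..31

    # Case A (Col B): the inverse calculation -- the signal level that would have to
    # enter the gamma encoder to produce each Col A value at its output
    col_b = 31 * (col_a / 31) ** 2.2

    # Case B (Col C): gamma encoding applied digitally to the A/D output values,
    # rounded to the nearest integer
    col_c = np.round(31 * (col_a / 31) ** (1 / 2.2))

    print(col_b[:5])   # roughly [0, 0.016, 0.075, 0.182, 0.343] -- tiny steps at the dark end
    print(col_c[:5])   # [ 0.  7.  9. 11. 12.] -- a big first step, then progressively finer ones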

    Hope this helps to get a better understanding of what I've done. I think it best to leave it at that for now until you've had a chance to digest this. Could I also suggest that most readers have probably tuned out now and maybe if you need further clarification it might be better done by personal message.

    Dave

  7. #87

    Join Date: May 2012
    Location: Southern California
    Posts: 79
    Real Name: Mike

    Re: Gamma revisited

    Quote Originally Posted by dje
    Hope this helps to get a better understanding of what I've done. I think it best to leave it at that for now until you've had a chance to digest this. Could I also suggest that most readers have probably tuned out now and maybe if you need further clarification it might be better done by personal message.

    Dave
    Dave,
    Many thanks. I think you are correct; the general interest has vanished. I shall accept your generous offer to discuss via PM.

    Mike.

