# Thread: Color channels, 8, 12, 14, 16 bits

1. ## Color channels, 8, 12, 14, 16 bits

Good day everybody:
I keep hearing about color channels, but I do not understand them well.
I hear about 8, 12, 14 bits and lately heard about 16 bits (if ever exists).
Any clarification about what these are and the difference between sRGB and RGB and RAW and their relation to printing.
How does Photoshop deal with each of them, and where and when can I see the difference?
Many thanks in advance for everyone who contributes in explaining this issue.
Hafedh

2. ## Re: Color channels, 8, 12, 14, 16 bits

Bit depth just refers to the precision of the binary number used to represent the colour data, and it has no relation to the colour space that is being used. When we refer to 8 bits per channel, we mean that each colour channel can represent 256 different values of each colour - red, green and blue; 256 x 256 x 256 = 16.8 million distinct colours. We have to be careful because this is sometimes referred to as 24-bit colour (i.e. 8 x 3).

The same logic continues. So 16-bit colour is where each channel can represent 65,536 individual shades; 65,536 x 65,536 x 65,536 = over 281 trillion distinct colours. Again, this is sometimes referred to as 48-bit colour.
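For anyone who wants to check these numbers, here is a short Python sketch of the arithmetic (nothing camera-specific, just the powers of two):

```python
# Levels per channel and total distinct colours for the bit depths in this thread.
for bits in (8, 12, 14, 16):
    levels = 2 ** bits       # shades each channel can hold
    total = levels ** 3      # distinct (R, G, B) combinations
    print(f"{bits}-bit/channel: {levels} levels per channel, {total:,} colours")
```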

The next part of the question deals with the degree of precision with which your camera records data. JPEGs are 8 bits per channel and, depending on your camera, RAW data can be either 12-bit (usually amateur DSLRs), 14-bit (usually professional DSLRs) or 16-bit (some medium format cameras). Computers work with 16-bit words, so even 12-bit and 14-bit values are stored as 16-bit values, padded with leading "0"s.
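As a sketch of that padding: a 12-bit sample dropped into a 16-bit container keeps its value unchanged, with the top four bits left as zeros (the sample value here is arbitrary):

```python
sample = 0b101101001110        # an arbitrary 12-bit sensor value (2894)
assert sample < 2 ** 12        # it fits in 12 bits
stored = sample                # stored in a 16-bit word: same value, high bits zero
print(f"{stored:016b}")        # 0000101101001110
```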

RAW is just the way your camera's sensor and processing engine store the data about the image; there is no colour space associated with RAW data.

RGB is a colour model: essentially a mathematical way of describing any colour as a tuple of three values (R,G,B). A colour space (each of which covers a range of colours called its gamut) pins those values down to specific real-world colours. There are three common colour spaces that Photoshop uses. sRGB is an old Microsoft / HP construct that is the basis of a lot of web-based functions; it is a subset of all visible colours. A newer standard, AdobeRGB, is also used in Photoshop and was developed by Adobe to better represent colours that can be displayed on a computer screen and printed on a printer; it has a broader range than sRGB and displays a more vivid colour range, especially in the greens. ProPhoto was developed by Kodak and covers an even wider range of colours (especially the vivid greens and blues). If you had a computer screen that could accurately display all of these colours, a value of (0,255,0) - i.e. green, assuming we are representing the maximum value as an 8-bit data element - would look different in all three colour spaces. This is why Photoshop needs to know which colour space you are using.

Here is a diagram of the three colour spaces and the visible light representation:

http://en.wikipedia.org/wiki/File:Colorspace.png

3. ## Re: Color channels, 8, 12, 14, 16 bits

Thank you very much dear Manfred. An excellent explanation that I understand very well, except for one thing: as I understood it, you mentioned that RAW data can be either 12 or 14 bits, but later on you mentioned that RAW has no color space associated with it. I think you mean that color space is different from color data. If I am right, what is the difference between the two?
I feel a bit confused.
Also, how about TIFF files: what color space do they carry, since you mentioned that JPEG carries 8 bits?
Thanks again.

4. ## Re: Color channels, 8, 12, 14, 16 bits

Originally Posted by mahfoudhhi
~ except for one thing: as I understood it, you mentioned that RAW data can be either 12 or 14 bits, but later on you mentioned that RAW has no color space associated with it. I think you mean that color space is different from color data. If I am right, what is the difference between the two?
I feel a bit confused.
Also, how about TIFF files: what color space do they carry, since you mentioned that JPEG carries 8 bits?
Hi Hafedh,

I think the confusion is indicated in how you have written the last sentence (highlighted bold); it is a lot to take in when you're new to this, and even now I 'check my facts' before replying to queries like this. (I don't want to make it worse!)

To clarify:
color-space is not a number of bits (e.g. 8 or 16).

So an 8 bit jpg (or a 16 bit Tiff) can be one of these color-spaces; Pro-Photo, or aRGB (Adobe RGB), or sRGB.

Color-space can be thought of as a scaling factor, what does that mean?
This is how I think of it, rather than being factually 'correct', in the purest scientific terms.

Take a look at this diagram which I have borrowed from this Photozone article because it allows me to say what I want to - but ignore all the numbers and scales, all I am concerned with is the triangles drawn and the "D65" points in the middle.

The centre "D65" points refer to a given white balance (6,500 Kelvin), so anything there, whether it be white, grey or black, has no colour. Colours are represented in the image by moving away from the D65 point towards one of the three corners of the triangle (or in between any two).

Looking at the chart on the left: if your camera is set to sRGB and you shoot the most saturated green leaf you can find, the camera's jpg will scale the 256 levels of the 8 bits along a straight line between the D65 point and the green point of the triangle above it.

Looking at the chart on the right: if your camera is set to AdobeRGB ("aRGB") and you shoot the same saturated green leaf, the camera's jpg will scale the 256 levels of the 8 bits between the D65 point and the green point of the triangle above it, which is considerably further away, so the 'steps' between each of the 256 levels are further apart.

So the green recorded in the jpg is nearer reality with aRGB. But if I were to publish a jpg on the web using anything other than sRGB, it is very likely that half the web viewers in the world would see my pictures with desaturated colours, because many web browsers/operating systems (incorrectly) assume the image is scaled to sRGB and display it according to that scale, which dilutes the saturation of the image (and causes a slight rotation that gives a hue change).

Unless you are shooting professionally - and I am not - I like to 'keep things simple' and do everything in sRGB - with the monitors and printers I use, I am never going to miss those 'extra' colours at the extremities of the colour triangle Manfred linked to.

Sure, I could shoot aRGB and convert before publishing, but since I don't own an expensive enough monitor or printer to display much of it, what's the point? (for amateur/hobby use)

Everything I said above about "the camera's jpg" actually applies whenever you make the jpg/Tif from the RAW.

Since I shoot RAW whenever possible, I am not really limited by the color-space the camera is set to; I could process the RAW images in aRGB or Pro-photo when in Photoshop and remember to convert to sRGB when I make the jpg for web use - but I like "simple", so I stick to sRGB everywhere.

Hope that helps,

5. ## Re: Color channels, 8, 12, 14, 16 bits

Hafedh - Dave has gone through a great technical explanation; but let me try a bit of an analogy to try to clarify things a bit.

Think of a road map. By itself it is quite useless, because you see lines and points but you really don't know what the distances are. If you then look at the map scale, either drawn or sometimes given in words (1 cm = 1 km, or 1 cm = 50 km), it tells you how to interpret the data on the map into something meaningful. You can now tell how far apart two places on the map are.

In some ways, you can look at the RAW file as the lines on the map, and the colour space as the scale that tells you how to interpret the data. The map and the scale are not directly related to one another, but without the two together the map is useless.

A RAW file is just pure data out of your camera. When you run it through a RAW converter, you determine how the RAW data should be interpreted by locking down the white balance, etc.; you have in fact told your computer software how to interpret that data. A colour space tells your (and other people's) computers how to interpret the data when your image is opened.

6. ## Re: Color channels, 8, 12, 14, 16 bits

Originally Posted by mahfoudhhi
Thank you very much dear Manfred. An excellent explanation that I understand very well, except for one thing: as I understood it, you mentioned that RAW data can be either 12 or 14 bits, but later on you mentioned that RAW has no color space associated with it. I think you mean that color space is different from color data. If I am right, what is the difference between the two?
I feel a bit confused.
Also, how about TIFF files: what color space do they carry, since you mentioned that JPEG carries 8 bits?
Thanks again.
Hafedh
Cheers
Tim

7. ## Re: Color channels, 8, 12, 14, 16 bits

One other thing to remember, Hafedh, is that with a raw file the pixel values don't even have a colour. This is because there is a colour mosaic filter in front of the sensor which lets only one of the primary colour components (red, blue or green) through to each pixel. The other two colour components have to be estimated from surrounding pixel values in software (de-mosaicing).
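As a very crude sketch of that estimation (the layout and numbers are invented, and real de-mosaicing algorithms are far more sophisticated), a green photosite might borrow its missing red and blue from neighbouring photosites:

```python
# Raw counts around one green photosite: red neighbours to the left/right,
# blue neighbours above/below (layout invented for illustration).
red_left, red_right = 1540, 1560
blue_above, blue_below = 900, 880

estimated_red = (red_left + red_right) // 2      # simple average of the red neighbours
estimated_blue = (blue_above + blue_below) // 2  # same for blue

print(estimated_red, estimated_blue)  # 1550 890
```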

Dave

8. ## Re: Color channels, 8, 12, 14, 16 bits

I am grateful to you Dave Ellis, Dave Humphries, Tim, and Manfred.
Thank you very much for your ample time that you spent to guide me through. It is kind of you.
I am still a bit confused, maybe because my reading comprehension is not good enough.
Can I ask you for a favor again?
I need the following:
* The difference between color space and color channel.
* What does RAW have: color space, color channels, or just data? And why do we say that RAW has 12 or 14 bits?
* What is the difference between RAW 12 bits and 14 bits?
* What contains bits: color space, color channel, or RAW?
* What does a bit affect: colors, or post processing? And if RAW does not have a color space, what does this mean?
I will be thankful if I get answers to these questions, but I do not promise that I will not come up with more questions after your answers!!
Please accept my apology for my (not smart) questions, and thank you in advance.

9. ## Re: Color channels, 8, 12, 14, 16 bits

Originally Posted by mahfoudhhi
* difference between color space and color channel.
A colour space is, for all intents and purposes, a standard way of describing colours. A colour channel is one of the primary additive colours: red, green or blue. All visible light can be described using the contribution of each of the three additive primary colours. Zero values in all three channels produce black, while equal maximum values in all three channels produce white.

The problem is that without a standard, there is no way of determining how to represent these colours in a consistent manner. If I told you that something has a temperature of 300, you wouldn’t know how hot or cold the object is. If I were to quantify the temperature scale as Kelvin, °C or °F, you could now place this value against a known standard.

We have exactly the same issue in describing a colour; as any additive colour can be described in terms of a red value, a green value and a blue value. Once we attach a colour space to these values, your computer will know how to interpret this data.

This is actually quite important to get accurate colours. If you work in the AdobeRGB colour space or the ProPhoto colour space and display your image on the internet or send the image off to most commercial photo printers, your image will look “muddy” because they use the sRGB colour space and will not interpret your image data correctly.

Originally Posted by mahfoudhhi
*What does RAW have, color space or color channels or just data? and why do we say that RAW has 12 or 14 bits?
RAW has data only and that data can be turned into an image if you know how to decode the data in a RAW file. Each camera manufacturer (with a limited number of exceptions) has a different way of storing this information.

Bit depth – digital cameras represent data in a series of 1s and 0s. For instance, for each colour channel;

12-bit black has the following colour channel data:

Red value = 000000000000
Green value = 000000000000
Blue value = 000000000000

14-bit black has the following colour channel data.

Red value = 00000000000000
Green value = 00000000000000
Blue value = 00000000000000

In other words, even though the two values of black have exactly the same colour (or technically lack of colour); the extra two bits of data allow colours that lie between black and white to be described at a finer level of detail in 14-bit data than with 12-bit data.

The bit depth you get from your camera is dependent on your camera; the manufacturer has chosen to present data in configurations of 12, 14 and, in the case of some professional medium format cameras, 16 bits.
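The difference those two extra bits make can be sketched numerically: over the same tonal range, a 14-bit file has four times as many distinct steps as a 12-bit file, so each step is four times finer.

```python
levels_12 = 2 ** 12   # 4096 distinct values per channel
levels_14 = 2 ** 14   # 16384 distinct values per channel

# Size of the smallest recordable tonal difference on a normalised 0.0-1.0 scale.
step_12 = 1.0 / (levels_12 - 1)
step_14 = 1.0 / (levels_14 - 1)

print(levels_14 // levels_12)  # 4 - four 14-bit steps fit inside one 12-bit step
```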

Originally Posted by mahfoudhhi
* what contains bits? color space? or color channel? or RAW?
Computer data contains bits. Of your three examples, only RAW data contains bits. Neither a colour space nor a colour channel has bits per se.

Values in a colour channel or a colour space can be represented by an integer (numbers without decimals) or real numbers (numbers with decimal places).

Originally Posted by mahfoudhhi
* what contains bits? color space? or color channel? or RAW?
A bit is the smallest unit of data that a digital computer can manipulate; no more, no less. A modern digital camera is a digital computer that can take pictures, and it represents everything it captures as a series of values that are either “1” or “0”.

Originally Posted by mahfoudhhi
*what does bit affect? colors? or post processing? and if RAW does not have color space, what does this mean?
A bit does not affect anything. It is just the smallest piece of data that your camera or computer can handle.

RAW data is just data. It represents an image (and some other incidental data as well) that a computer can turn into an image using a piece of software called a RAW converter. Before we can generate an image out of the RAW data, we have to tell the converter how to interpret the data that the camera has recorded into an image.

Fortunately for us, the companies that produce the RAW converters update them for every new camera model that has been released so that we get an image that looks correct. Simply put, comparing the sensor data for each data point in the RAW data versus what a known colour test target should look like allows the software developer to convert the RAW data into colour values.

The only piece of information the RAW converter is missing is how to interpret a neutral colour rendering, i.e. setting the white point. RAW converters generally default to the white balance our camera was set to as the starting point. Once that is done, the PP software “bakes” this information into our data and we can start editing our image. The converter also has other features built in to let us tweak other parameters like the white point and black point, etc.

This is also the point where it will assign the colour space that we will be using. Again, it will default to the colour space set in our camera, but this does not prevent us from selecting a different one.
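The white-balance step above can be pictured as per-channel gain multipliers, chosen so that a neutral grey patch comes out with equal channel values (the gains and raw counts below are invented purely for illustration):

```python
wb_gains = {"R": 2.0, "G": 1.0, "B": 1.5}   # hypothetical per-channel gains
raw_grey = {"R": 105, "G": 210, "B": 140}   # a grey patch as the sensor saw it

# Scale each channel by its gain; a neutral patch becomes neutral.
balanced = {ch: raw_grey[ch] * wb_gains[ch] for ch in raw_grey}
print(balanced)  # {'R': 210.0, 'G': 210.0, 'B': 210.0}
```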

10. ## Re: Color channels, 8, 12, 14, 16 bits

In addition to all the above, we should remember (and I'm sure most of us do) that computers work in things called bytes (8-bit), words (16-bit), long-words (32-bit) and so on (64-bit). What that means is that computers themselves, including editors and/or RAW converters, do not process 10-bit, 12-bit or 14-bit data as they do their excellent work. So data from your RAW file is converted to fit into 16-, 32- or 64-bit values. There are several ways to do that, not really relevant to this discussion. The terms 12-bit and 14-bit usually refer only to the resolution of the analog-to-digital conversion process inside the camera.

Hoping not to confuse anyone . . .

11. ## Re: Color channels, 8, 12, 14, 16 bits

Originally Posted by xpatUSA
The terms 12-bit and 14-bit usually refer only to the resolution of the analog to digital conversion process inside the camera.
I'm not sure that is true. The A/D conversion is on a sensor-by-sensor basis. But, AIUI, Bayer pattern chips use multiple sensors per pixel channel. If that is a correct understanding, then I expect that the 12-bit or 14-bit "channel" may be a composite entity derived from multiple sensors' data. But I don't know.

12. ## Re: Color channels, 8, 12, 14, 16 bits

Thank you very much everybody for your great efforts.
I really appreciate it.

13. ## Re: Color channels, 8, 12, 14, 16 bits

Good day everybody again:
What does it mean that RAW does not have a color space? And how does it affect the picture positively (not to have a color space)?
I mean: the color space affects the colors of the picture, especially for printing or when browsing a picture on a wide gamut monitor, but why is it better to have a RAW picture in terms of colors, apart from the benefits of RAW for post processing?
Also, does anybody know if Apple provides wide gamut monitors for the Mac (I have an iMac, and wish to buy another one which is wide gamut, if this is what it is called)?

14. ## Re: Color channels, 8, 12, 14, 16 bits

Originally Posted by mahfoudhhi
What does it mean that RAW does not have a color space? and how does it affect the picture positively (not to have color space)?
Hi Hafedh,

This is how I understand it:

By the time you come to "use" a picture (e.g. to print or display on-line), it will be in a different format (jpg, tif, etc.) and these formats will have a color space.

The advantage of shooting RAW is that you can decide which color space to use later (when you Post Process) - i.e. when you know what you're going to use the image for.

If you shoot jpg, the color space (and much else besides) is "baked in" to the image and subsequently, this cannot (sensibly) be changed, particularly if the smallest color space (sRGB) was chosen in camera.

Best regards,

15. ## Re: Color channels, 8, 12, 14, 16 bits

Hi again Hafedh - colour space is about how a specific colour is displayed, whereas the RAW data is about the quantity of colour data. As per the various previous postings, colours on a screen are represented by three numbers that give the R, G and B values. If you look at the two colour spaces that Dave Humphries posted above in #4, look at the green point in the sRGB and AdobeRGB colour spaces; both are represented by the same (0,255,0) colour values, even though the colours are different. This is what a colour space does: it defines how the colours should be interpreted, in a device-independent way.

This is what wide gamut is all about; AdobeRGB and other colour spaces like ProPhoto are capable of representing more shades of colour than sRGB. The next issue is that the various devices have different capabilities of producing colours. Take a look at the printer colour space in: https://www.cambridgeincolour.com/tu...obeRGB1998.htm

A normal screen (24-bit; 8 bits/channel) can reproduce the sRGB colour space. A wide gamut monitor (30-bit, 10 bits per channel or better) can reproduce most of the AdobeRGB colour space. Up until a year or two ago Apple was shipping TN monitors; I think they have gone to IPS units now. All of the major brands do make wide gamut monitors (I use a Dell unit). I'm not aware of any units that handle the ProPhoto colour space. Printers, by the way, have a much narrower colour range than even a low-end screen: a typical TN (twisted nematic) screen can reproduce 16 million distinct colours, while a printer does a few hundred thousand. If you are looking for a wide-gamut screen, look for one that uses in-plane switching (IPS) technology rather than TN. An IPS screen will be more expensive than a comparably sized TN one.

Finally, getting to RAW versus jpg: RAW (depending on the camera you use) will normally be 12-bit/channel or 14-bit/channel colour (36-bit and 42-bit colour respectively), while a jpeg is 8-bit/channel colour (24-bit colour). The extra depth just gives more subtlety in the colours that can be reproduced. You might not notice this in a straight-out-of-the-camera image, but if you do a lot of post-processing to an image, you may find that the colours "block up" and the resultant image can look really bad.
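That "blocking up" can be sketched as follows (a deliberately extreme, invented edit): brighten the darkest quarter of the tonal range until it fills the whole range, then count how many distinct output levels survive. At 8 bits only 64 levels cover the entire range, which shows as banding; at 16 bits there are still 16,384.

```python
def surviving_levels(levels):
    """Map the darkest quarter of the range onto the full range and
    count the distinct output values that remain."""
    quarter = levels // 4
    out = {round(v * (levels - 1) / (quarter - 1)) for v in range(quarter)}
    return len(out)

print(surviving_levels(256), surviving_levels(65536))  # 64 16384
```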

16. ## Re: Color channels, 8, 12, 14, 16 bits

Originally Posted by mahfoudhhi
What does it mean that RAW does not have a color space? and how does it affect the picture positively (not to have color space)?
It might help to think of the RAW data as having a color model, not a color space. For a normal image, the raw data is in the form of R, G1, G2, B (two greens to match the Bayer color filter). If we now think of the two greens being averaged to form Ga, then the color model is simplified to R, Ga, B. Very much like RGB, but it is still only a model. There are also camera sensors that have other colors; I believe one has R, G, B and Clear. My Sigma produces Bluish, Greenish and orangy-Red: 3 channels, not 4. In all of these models, the scales are not defined. Pictures produced directly from these color-modeled data usually look a bit odd, because the scale values should really be converted to ones that, for example, a color monitor can show correctly. This is why conversion is necessary, and that conversion has to be from a color model into a color space.
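The R, G1, G2, B to R, Ga, B simplification can be sketched with made-up sensor counts for one Bayer cell:

```python
r, g1, g2, b = 1812, 2050, 2038, 990   # invented raw counts for one 2x2 Bayer cell
ga = (g1 + g2) // 2                    # average the two green photosites
pixel = (r, ga, b)                     # simplified colour-model triple
print(pixel)  # (1812, 2044, 990)
```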

but why is it better to have RAW picture in terms of colors?
The gamut of colors that can be captured by the sensor is much wider than sRGB, Adobe RGB or even ProPhoto. By starting with the raw data, colors may not be lost where they would have been in JPEG outputs, even with Adobe RGB selected in-camera.

Hope this helps,

17. ## Re: Color channels, 8, 12, 14, 16 bits

Originally Posted by mahfoudhhi
Good day everybody again:
What does it mean that RAW does not have a color space? and how does it affect the picture positively (not to have color space)?
I mean: the color space affects the colors of the pictures specially for printing or when browsing a picture on a wide gamut monitor, but why is it better to have RAW picture in terms of colors? away from the benefits of RAW for post processing?
also, does anybody know if apple provides wide gamut monitors from Mac (I have iMac, and wish to buy another one which is wide Gamut, if this is what it is called)?
Hi Hafedh

RAW numbers are nothing more than the digital output from the sensors. If we simplify how a sensor works, you can think of it as consisting of a big set of pixels, each of which consists of three light collectors. Each collector is under a filter that lets through either red, green or blue light. So the electronic data, per pixel, consists of three numbers, depending on how much light went to each collector.

The first important thing here is that the electronics does not 'know' what colour filter was over the collector; the number depends only on the intensity of the light that hit the collector, not on its colour.
This set of number triples is the RAW sensor data. Depending on the bit depth of the sensor (12-bit or 14-bit) there is a potentially vast number of these triples.

The number triples present in RAW data conform to an RGB colour model, because the members of each triple represent an amount of red light intensity, an amount of green light intensity and an amount of blue.

The RAW data has to be calibrated against a standard in order to assign a particular colour to each number triple. Simply put, that's done by photographing a standard colour chart and correlating the numbers associated with each colour (and interpolating the colours that aren't on the chart). But the numbers are not standard - they're particular to the camera sensor that recorded them. The camera has its own 'colour profile': the relationship between its number triples and observable colours.
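That calibration typically ends up as something like a 3x3 matrix mapping the camera's raw triples into a standard space. The matrix and input below are invented purely for illustration (real profile matrices are measured per sensor); note how rows summing to 1.0 keep neutral grey neutral, and how a saturated raw colour can land outside the destination space and has to be clipped to the displayable range:

```python
# Hypothetical camera-to-sRGB matrix; each row sums to 1.0 so grey stays grey.
matrix = [
    [ 1.80, -0.60, -0.20],
    [-0.30,  1.50, -0.20],
    [ 0.05, -0.45,  1.40],
]
cam_rgb = [0.20, 0.55, 0.30]   # an invented, fairly saturated raw triple

srgb = [sum(m * c for m, c in zip(row, cam_rgb)) for row in matrix]
# The red component comes out negative: this colour falls outside the
# destination gamut, so it gets clipped to the displayable 0.0-1.0 range.
clipped = [min(max(v, 0.0), 1.0) for v in srgb]
```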

The second important thing is that the colours recordable in these RAW triples cover an enormous gamut - larger than most (any?) output device can reproduce. Also, output devices have their own relationships between the colours they can reproduce and the digital numbers that drive them (their individual 'colour profiles'). For this reason, when preparing an image file for output it needs to be 'trimmed down' to a set of standard colours and converted to a standard set of numbers that manufacturers can use as intermediate data to pass an image from one device (which has its colour profile) to another device (which has its own, different colour profile).

This shows the difference between an iMac monitor and my printer.

The standard sets of colour numbers (called colour spaces), such as sRGB, AdobeRGB and ProPhotoRGB, represent smaller colour gamuts (fewer colours, if you like) than the gamut of the camera, and so compromises are made about which colours are 'clipped' (abandoned).

To come back to your original question: the reason why RAW is best to work with is that you know you are working with all the colours the camera recorded. You need only make the compromise of throwing colour information away at the stage of sending it to output (monitor or print), when you know the gamut of the output device: where will it make a visual difference? Where will it make no difference?

As to your question whether Apple makes a wide-gamut monitor: as far as I know, the answer is no. Most Apple displays have a gamut that's about the same as sRGB (just like monitors from most companies).

FWIW I have abandoned my iMac monitor for photo work and connected an NEC wide gamut monitor to my Mac. The NEC monitor is capable of displaying most of the Adobe RGB colour space. This gives me a much better ability to compare screen colours and print colours because my printer's gamut is also close to Adobe RGB.

Does this help?

Cheers

Tim

18. ## Re: Color channels, 8, 12, 14, 16 bits

Originally Posted by Macmahon
Hi Hafedh

RAW numbers are nothing more than the digital output from the sensors. If we simplify how a sensor works, you can think of it as consisting of a big set of pixels, each of which consists of three light collectors. Each collector is under a filter that lets through either red, green or blue light. So the electronic data, per pixel, consists of three numbers, depending on how much light went to each collector.

The first important thing here is that the electronics does not 'know' what colour filter was over the collector; the number depends only on the intensity of the light that hit the collector, not on its colour.
This set of number triples is the RAW sensor data. Depending on the bit depth of the sensor (12-bit or 14-bit) there is a potentially vast number of these triples.

Tim
Hi Tim,

I just put a post before yours where I carefully explained to Hafedh that raw data for most sensors consists of four numbers per image pixel. Now here we come saying there are three and calling raw data "triples". While that was probably intended as a simplification, it could confuse Hafedh no end, IMHO.

Panasonic Raw Data, from "RawDigger", a fine application, IMO:

Four channels, not three. Q.E.D.

19. ## Re: Color channels, 8, 12, 14, 16 bits

Sorry Ted. Yes, intended as a simplification:

Human colour response to green does require twice as many green 'collectors' but, as you noted, the numbers are combined into what you called R, Ga, B as part of the demosaicing process. I thought that was a step too far for the OP at this stage.

To all intents and purposes, we can think of the RAW data the computer works with as consisting of number triples, as long as we're working within an RGB model.

Cheers

Tim

PS: I'm certainly aware of the potential for confusion in all this stuff. Separating the wheat from the chaff, and the good wheat from the junk, is a job all of its own!

20. ## Re: Color channels, 8, 12, 14, 16 bits

Ted - no confusion whatsoever. There may be twice as many green photosites as there are red or blue in a Bayer array, and this is certainly the way the RAW data is saved. But once the demosaicing step turns the data into an image, we are dealing with three colour channels, with each pixel made up of a red, green and blue value.

I can't seem to recall any ability to manipulate two separate green channels in post....

