I've been toying with this idea in my head, but I wanted to get others' perspectives to see if my thought process is correct.
I have a DSLR (5D Mk II) with a 21-megapixel sensor that has a Bayer filter on top. That means there's a loss of image resolution from the interpolation/demosaicing process, especially compared with multi-shot digital backs (which record all three color values at each pixel site).
So my thought was to halve the linear resolution (a quarter of the pixel count) and use the raw data from the sensor to assign the RGB values directly, one 2x2 Bayer quad per output pixel. Would that give me "full" color resolution at that smaller size? And does anyone know how I would go about converting the recorded "red", "green", and "blue" values to a proper colorspace? There's also the problem that each 2x2 quad contains two green samples for every one red and one blue, so I'm not sure how best to handle that extra green data.
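To make the binning idea concrete, here's a minimal sketch of what I mean, assuming an RGGB mosaic layout and a raw array that's already black-level corrected (the function name and layout are my assumptions; actual 5D Mk II raw files would need a raw decoder and the camera's real CFA pattern). The two greens in each quad are simply averaged:

```python
import numpy as np

def bin_bayer_rggb(raw):
    """Bin a Bayer mosaic (assumed RGGB) into a half-resolution RGB image.

    raw: 2-D array of linear sensor values, black level already subtracted.
    Returns an (H/2, W/2, 3) array; the two green sites in each 2x2 quad
    are averaged into one green value.
    """
    r  = raw[0::2, 0::2]   # top-left site of each 2x2 quad (red)
    g1 = raw[0::2, 1::2]   # top-right site (green)
    g2 = raw[1::2, 0::2]   # bottom-left site (green)
    b  = raw[1::2, 1::2]   # bottom-right site (blue)
    g = (g1.astype(np.float64) + g2) / 2.0   # average the two greens
    return np.dstack([r, g, b])
```

Note this still leaves the colorspace question open: the result is in the camera's native sensor space, and getting to sRGB would still require white balance and a camera-to-XYZ matrix.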
It would also be interesting to try this out at high f-numbers, where the lens becomes diffraction limited. I figured that if I stop down to around f/16-f/22, the Airy disk would grow large enough to cover a 2x2 block of pixel sites on the sensor, which would in effect fake four separate sensels sampling the same point. Does that logic pan out?
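A quick back-of-the-envelope check of that logic, using the standard first-minimum Airy disk diameter d = 2.44 * lambda * N. The 5D Mk II pixel pitch of about 6.4 µm and the 550 nm (green) wavelength are my assumptions:

```python
wavelength = 550e-9    # green light, metres (assumption)
pixel_pitch = 6.4e-6   # approx. 5D Mk II pixel pitch, metres (assumption)

for f_number in (8, 11, 16, 22):
    airy_diameter = 2.44 * wavelength * f_number   # first-minimum diameter
    px_across = airy_diameter / pixel_pitch
    print(f"f/{f_number}: {airy_diameter * 1e6:.1f} um ~ {px_across:.1f} px across")
```

By this estimate the disk spans roughly 3.4 pixels across at f/16 and 4.6 at f/22, so it does blanket a 2x2 quad in that range; though since the disk is centered differently on each site, the four sensels still don't see identical light.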