I'm interested in comparing lens MTF-50 resolution figures with sensor resolution. It would be interesting to know at which point (in terms of choosing a lens or aperture) a lens would be sharp enough to outresolve the sensor.
MTF-50 figures are given in LW/PH. For instance 2000 LW/PH means that 2000 line widths are resolved across the vertical length of the picture with a 50% loss of relative contrast. Because one looks at a pattern of alternating white and black lines, one can speak of 1000 lines (say black lines on a white background) or 2000 line widths.
Now we can take the vertical height of the picture to be the height of the sensor, say 15.7 mm (ignoring unused pixels). This lets us relate the LW/PH resolution to the number of vertical pixels.
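To make the unit conversions concrete, here is a rough Python sketch of the arithmetic I have in mind (the 15.7 mm height and the 2008-pixel count are just my example numbers):

```python
# Rough sketch of the unit conversions I am using (numbers are just examples).
sensor_height_mm = 15.7      # assumed picture height (APS-C style sensor)
vertical_pixels  = 2008      # usable pixel rows across that height

def lw_ph_to_lp_per_mm(lw_ph, height_mm):
    """LW/PH -> line pairs per millimetre (2 line widths = 1 line pair)."""
    return lw_ph / 2.0 / height_mm

def lw_ph_to_lw_per_pixel(lw_ph, pixels):
    """LW/PH -> line widths falling on each pixel row."""
    return lw_ph / pixels

print(lw_ph_to_lp_per_mm(2000, sensor_height_mm))    # ~63.7 lp/mm
print(lw_ph_to_lw_per_pixel(2000, vertical_pixels))  # ~1.0 line widths per pixel
```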
I believe that in order to make a relevant comparison one has to do the following: find out how many line widths per millimetre cause a 50% loss of relative contrast when viewed through a grid whose pitch matches the sensor's vertical pixel resolution. Is it as simple as saying that 4/3 (≈1.33) times the number of line widths causes that contrast loss? If I look at 4/3 N line widths through an N line width grid, then each grid cell averages over 1 + 1/3 line widths, so the contrast between adjacent grid cells should be 1/3 : 2/3 = 0.5 (50%).
So am I right that 2008 vertical pixels correspond to roughly 2677 LW/PH (2008 × 4/3)?
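To sanity-check that factor I put together a very crude 1-D simulation: a perfect black/white bar target box-averaged by a row of 100% fill-factor pixels, reporting the best-case residual modulation over a range of alignments. The pixel model, the phase handling and the contrast metric are all my own assumptions, so this is a toy model of my hand-waving rather than a proper MTF measurement:

```python
import numpy as np

def bar_target_modulation(lw_per_pixel, n_pixels=300, oversample=120):
    """Residual modulation of a black/white bar target after box-averaging
    by square pixels, taking the best case over several target alignments.

    lw_per_pixel: line widths falling on each pixel
    (1.0 = one line width per pixel, 2.0 = a full black+white pair per pixel).
    """
    x = np.linspace(0.0, n_pixels, n_pixels * oversample, endpoint=False)
    best = 0.0
    for phase in np.linspace(0.0, 1.0, 24, endpoint=False):
        # Ideal bar target: 0 = black line width, 1 = white line width.
        target = np.floor(x * lw_per_pixel + phase) % 2
        # Each pixel integrates (averages) the light falling on it.
        pixels = target.reshape(n_pixels, oversample).mean(axis=1)
        best = max(best, pixels.max() - pixels.min())  # target modulation is 1.0
    return best

for k in (1.0, 4 / 3, 2.0):
    print(f"{k:.2f} line widths per pixel -> modulation ~ {bar_target_modulation(k):.2f}")
```

Under these assumptions the best-case modulation comes out at about 1.0, 0.5 and 0.0 for 1, 4/3 and 2 line widths per pixel respectively, which seems to match my hand-waving above, but I am not sure this toy model corresponds to how MTF-50 is actually measured on real targets.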
This assumes a monochrome image. Due to the Bayer array structure of the sensor, the colour resolution will be reduced. The minimum should be ~1339 LW/PH, but in reality it should be somewhere between these two values, as interpolation can recover information for, say, image positions occupied by red pixels by using information from adjacent green pixels. This will lead to real resolution gains as long as the light that hit the red pixel had some green component.
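Spelling out that arithmetic (with my own worst-case assumption that only every second row of the Bayer pattern carries a given colour):

```python
vertical_pixels = 2008
mono_lw_ph = vertical_pixels * 4 / 3          # ~2677 LW/PH, monochrome estimate from above
# Worst case for one colour channel: same-colour photosites only on every second row.
colour_lw_ph = (vertical_pixels / 2) * 4 / 3  # ~1339 LW/PH
print(round(mono_lw_ph), round(colour_lw_ph))
```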
Is there an error in my reasoning, and is it meaningful/reasonable to use such an assumed sensor MTF-50 figure to reason about a sensor being outresolved by a lens?
In the above I omitted the anti-aliasing filter. What could be considered a very aggressive anti-aliasing filter? One that spreads the light meant for one pixel so that the adjacent pixels are each half covered?
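One way I could imagine modelling that, purely as my own reading of "adjacent pixels covered half" (real birefringent AA filters are no doubt specified differently), is to spread each point of light over a box two pixels wide before the pixels integrate it, so that light from the middle of a pixel ends up half on that pixel and a quarter on each neighbour:

```python
import numpy as np

def bar_target_modulation_aa(lw_per_pixel, aa=True, n_pixels=120, oversample=60):
    """Same toy model as above, with an optional crude anti-aliasing pre-blur:
    each point of light is spread over a box two pixels wide before the pixels
    average it (my reading of 'adjacent pixels covered half')."""
    x = np.linspace(0.0, n_pixels, n_pixels * oversample, endpoint=False)
    best = 0.0
    for phase in np.linspace(0.0, 1.0, 24, endpoint=False):
        target = np.floor(x * lw_per_pixel + phase) % 2
        if aa:
            kernel = np.ones(2 * oversample) / (2 * oversample)  # 2-pixel-wide box
            target = np.convolve(target, kernel, mode="same")
        pixels = target.reshape(n_pixels, oversample).mean(axis=1)
        core = pixels[2:-2]  # drop edge pixels distorted by the blur's zero padding
        best = max(best, core.max() - core.min())
    return best

for k in (0.5, 1.0, 4 / 3):
    print(k, "no AA:", round(bar_target_modulation_aa(k, aa=False), 2),
             "AA:", round(bar_target_modulation_aa(k, aa=True), 2))
```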
Also, I'm wondering how to correctly interpret the PH (picture height) in LW/PH. It seems that one cannot simply assume, say, 15.7 mm PH for a measurement that was taken on a sensor with 24 mm height. The only way I can see for PH to be independent of sensor size is if it refers to the image circle of the lens rather than to the sensor the measurement was taken on. Does that mean that one has to convert MTF-50 resolution figures given for full-frame lenses in order to understand their resolution on an APS-C sensor?
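If PH really does refer to the height of the frame the measurement was taken on, then the conversion I have in mind is just a rescaling by the ratio of picture heights (assuming the lens resolves the same lp/mm in the central crop as over the full frame, which may be optimistic):

```python
def rescale_lw_ph(lw_ph_measured, ph_measured_mm, ph_target_mm):
    """Rescale an LW/PH figure from one picture height to another, assuming the
    lens resolves the same number of line pairs per mm over the shared area."""
    lp_per_mm = lw_ph_measured / 2.0 / ph_measured_mm   # LW/PH -> lp/mm
    return lp_per_mm * 2.0 * ph_target_mm               # lp/mm -> LW/PH on the crop

# Example: a lens measured at 2000 LW/PH on a 24 mm high full-frame sensor,
# viewed on a 15.7 mm high APS-C sensor (numbers are just examples):
print(rescale_lw_ph(2000, 24.0, 15.7))   # ~1308 LW/PH
```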
EDIT: I have edited the above because I realised I made a hash of the multiplication factor: it should be 4/3 (≈1.33) instead of 3×. At 2× the contrast is already completely gone.