True resolution of single CCD colour digital cameras?


I have recently learnt from the newsgroup rec.photo.digital that most digital cameras have a CCD pixel layout which looks like the following:

G B G B G B ....... G B
R G R G R G ....... R G
G B G B G B ....... G B
etc. etc.

Each of the above letters represents one pixel, and the letter indicates the colour of the filter used for the associated pixel (Red, Green, or Blue). Each one of the above pixels counts as one pixel in the CCD resolution specification. For example, my camera has a specified resolution of 1280 x 960 pixels, which means that it has (1280 x 960)/4 = 307200 groups of RGGB pixel clusters.
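To make the counting concrete, here is a small sketch (my own illustration, not from any camera documentation) that labels each sensor pixel with its filter colour according to the layout above and counts the quads:

# My own illustration of the mosaic described above -- rows alternate
# G B G B ... and R G R G ...
def bayer_colour(x, y):
    """Return the filter colour over the sensor pixel at column x, row y."""
    if y % 2 == 0:
        return 'G' if x % 2 == 0 else 'B'
    else:
        return 'R' if x % 2 == 0 else 'G'

width, height = 1280, 960
quads = (width * height) // 4      # 307200 two-by-two R/G/G/B groups
greens = sum(bayer_colour(x, y) == 'G'
             for y in range(height) for x in range(width))
print(quads, greens / (width * height))   # 307200 0.5 -- half the pixels are green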

I had always *assumed* that one pixel in the specification meant one RGB triad, meaning that a true colour sample could be made for each and every one of the specified pixels. This is obviously not the case. (Even if each pixel were an RGB triad, I do understand that aliasing would still occur unless an anti-aliasing filter were employed in front of the CCD.)

Other things I have learnt about the above scheme:
- The raw CCD data is converted to a "proper" RGB image, with the RGB values for each pixel obtained by applying some weighted average which incorporates the values from neighbouring pixels. This process is referred to as "demosaicing". (A rough sketch of the idea follows this list.)
- It seems likely that the luminance resolution is higher than the chrominance resolution, although I have yet to see anything definitive on this.
- It seems likely that the green pixels are often used for luminance.
- It seems pretty certain that the reason more green pixels are used is that human vision is more sensitive to green.
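To illustrate the neighbour averaging, here is a very rough sketch of a demosaicing step for a single pixel. It is purely illustrative and certainly not any manufacturer's actual algorithm; it reuses the bayer_colour() function from the earlier sketch and assumes raw is the 2D array of sensor values:

# Purely illustrative -- not any manufacturer's actual algorithm.
def demosaic_pixel(raw, x, y):
    """Estimate (R, G, B) at one pixel by averaging the values of nearby
    pixels of each filter colour (3x3 window, including the pixel itself)."""
    height, width = len(raw), len(raw[0])
    sums = {'R': [0, 0], 'G': [0, 0], 'B': [0, 0]}    # colour -> [total, count]
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height:
                c = bayer_colour(nx, ny)
                sums[c][0] += raw[ny][nx]
                sums[c][1] += 1
    return tuple(sums[c][0] / sums[c][1] for c in ('R', 'G', 'B'))

Real cameras presumably do something more sophisticated than a plain 3x3 average, but the principle of borrowing colour information from neighbouring pixels is the same.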

I have so far been unable to locate a *really* thorough description of this. I think that the imaging-resource web site would be a very good place for such a description. Naturally I am keen for replies to be submitted in this forum too. (it is also currently being discussed in newsgroups rec.photo.digital and sci.image.processing)

The specific questions I have are:
- Is it possible to specify a true chrominance resolution for the above scheme, in terms of the number of chrominance samples in the array?
- Same question as the previous one, but for luminance.
- Is the *perceived* resolution perhaps higher? If so, how/why?

Please note that even after discovering the above, my 1 megapixel camera still takes very good photos, and I am still happy with it. ;) Also, this method of using a single CCD in combination with a mosaic filter seems to be very common, and has been used for a long time. The resolution of other equipment is specified in the same way.

Regards, Greg

-- Greg Sullivan (gregory.sullivan@digital.com), October 06, 1999

Answers


My guess is that the chrominance resolution is about 1/3 of the true pixel resolution.

I'm pretty sure the luminance resolution would be equal to the true pixel resolution -- since the transmissivity of each of the three color filters covering the pixels is known and thus can be compensated for.

This relative lack of chrominance resolution is similar to what goes on in the current analog color TV system -- it works because the human eye has far fewer color sensors than intensity sensors, and is satisfied with less color information.
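For what it's worth, the usual weights for deriving luminance from RGB (the BT.601 weighting used in TV -- not necessarily what any given camera uses internally) give green the largest share, which fits with the mosaic devoting half its pixels to green:

# Standard BT.601 luma weighting -- not necessarily what any camera uses.
def luma(r, g, b):
    """Approximate perceived brightness from R, G, B values."""
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma(255, 255, 255))   # 255.0 for a pure white pixel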

-bruce

-- bruce komusin (bkomusin@bigfoot.com), October 06, 1999.


I think it's 1/3rd or 1/4th. The perceived pixel count is somewhat better because of interpolation. You're right that the CCDs usually don't have the component colors in the same proportions. I think a common ratio is 20% red, 20% blue, 40% green.

-- benoit (foo@bar.com), October 06, 1999.

Wow, how embarrassing, my math doesn't add up. There's another color in there that also has 20%, with green getting twice as many as the other colors.

-- benoit (foo@bar.com), October 06, 1999.

If anyone is interested, see my previous response at http://www.greenspun.com/bboard/q-and-a-fetch-msg.tcl?msg_id=00107s to an older forum question.

I'm with Greg about the quads of filtered pixels, and I think "virtual" pixels are probably most easily created by combining the inside row or column of one quad with the adjacent row or column of the next quad, depending on which direction you're going, so that two half-quad values are combined to create a virtual quad/pixel. In other words, you end up with a new virtual pixel formed from the inside strips of each pair of adjacent quads, both vertically and horizontally. You might notice that this method has the built-in "advantage"(?) of averaging the values of half of each quad to create an averaged virtual quad/pixel to toss in between them. It would seem that would help somewhat with aliasing by creating an averaged value between adjacent raw pixels. It might also help to account for the "less sharp" raw images some people complain about, but which others feel flow more smoothly from one pixel to the next, without as many obvious abrupt changes.

So far, the above explanation is the only one that seems to make sense to me in terms of pixel count, physical layout of the filter colors, speed, and simplicity. You'll also note that this method is only necessary to get the higher, touted resolution image from the sensor. The normal lower-resolution image (640x480 out of a 1280x960 virtual layout) can be plucked directly from the quads without further combination to create virtual quads. Maybe that accounts for why some people think the 640x480 images are sometimes "sharper" than the 1280x960 images produced by the same CCD array. It's probably simply because there are no virtual averaged pixels in between the "actual" ones formed from the array.
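If I had to put the idea into code (and this is only my own guess at a scheme, not anything from a real camera), it might look roughly like this, reusing the bayer_colour() function and the raw sensor array from the sketches in the original question:

# My own guess at the "virtual quad" scheme -- not a documented algorithm.
# Quads are aligned so that they start at even (x, y); edge handling omitted.
def quad_rgb(raw, x, y):
    """Average a 2x2 block starting at (x, y) into one (R, G, B) value."""
    vals = {'R': [], 'G': [], 'B': []}
    for dy in (0, 1):
        for dx in (0, 1):
            vals[bayer_colour(x + dx, y + dy)].append(raw[y + dy][x + dx])
    return tuple(sum(vals[c]) / len(vals[c]) for c in ('R', 'G', 'B'))

def virtual_quad_rgb(raw, x, y):
    """The 'virtual' pixel between two horizontally adjacent quads: the
    right-hand column of the left quad plus the left-hand column of the
    next quad, i.e. the 2x2 block shifted one column to the right."""
    return quad_rgb(raw, x + 1, y)

Because any 2x2 block of the mosaic contains one red, one blue and two green pixels, the shifted block still yields a full colour sample.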

I can only think of two ways to eliminate these effects: some sort of beam-splitting optics (prismatic?) with three separate single-color-filtered pixel arrays (a construction/alignment nightmare), or a mechanical filtering mechanism that takes three shots with the same CCD with a different colored filter over it each time. I think both of these approaches have been done before in either videocams or digicams. Seems to me that some bright fellow or lass needs to figure out how to do the mechanical trick with a very fast-acting LCD film material of some type. That would allow for a number of pixel grabs at precisely the same location (within the limits of any change produced in the refraction of the filter due to color?) and allow for full luminance information... I think... :-) Maybe it turns out that the material doesn't exist yet, or that the current method is both cheaper and just as effective? As far as I know, the human eye doesn't have multi-colored filters placed over the same sensors, so maybe the designers decided that if the single-colored-filter-per-sensor approach was good enough for the most advanced mechanism known to man, the human being, then it was good enough for them... :-) Nah, too easy.

-- Gerald Payne (gmp@francorp.francomm.com), October 07, 1999.



I seem to recall that some top-end CCD cameras do (or used to) use a beam splitter and three monochrome CCD arrays, much as the first colour plate cameras used three filters and three films.

-- Alan Gibson (Alan.Gibson@technologist.com), October 07, 1999.

If I take an image that's supposed to be 1600 by 1200, look at it in Photoshop, and start to 'zoom' in, I see what seem to be 'blocks' as the image is blown up. Does each block represent a pixel, or are they a result of Photoshop's display routines?

-- Al Pacheco (pacheca@polaroid.com), October 12, 1999.
