Low-Res Image Modes

greenspun.com : LUSENET : Imaging Resource Discussion : One Thread

Low-Res Image Modes (Followup of "PDR-M4 gives blurry image in any modes of 800x600")

I'm looking for an explanation of how a digicam takes low-res images, compared to how a low-res image is made by down-sizing a larger one. This isn't necessarily specific to the Toshiba (I'm guessing).

I have a Toshiba PDR-M4, which has a 1600x1200 CCD. The camera can also store 800x600 images in one of two ways. One is to shoot the image at 800x600 using the "half-size" setting on the camera. The other is to take a 1600x1200 image already in memory and "re-size" it to 800x600.

Initially, I only used the camera at 1600x1200, but sometimes I used the "re-size" mode to reduce file size for less-important images. For display on my monitor (1024x768) the 800x600 looked just as good as the 1600x1200, no surprise there. So, for a recent trip where I needed maximum storage, I decided to shoot all the images on the 800x600 "half-size" mode. I only wanted to view them on the monitor so I figured the low-res mode would be fine.

I was wrong. When I got back from my trip and downloaded all the images, they looked poor -- "grainy" in the way only a low-res JPG can be. No matter that these were all shot at the maximum quality level for the 800x600 mode -- they just didn't look as good as the earlier 800x600s re-sized from full-res originals. I went back and confirmed this result with an experiment on a controlled subject.

So what's going on? My guess is that there's a reasonably smart algorithm in the "re-size" feature that preserves as much of the original info as possible by averaging four pixels together. Meanwhile, the 800x600 "half-size" mode probably just skips every other pixel in the CCD and throws away the extra info. True or false?
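To make the two guesses above concrete, here is a small sketch of both downsampling strategies on a grayscale image stored as a list of rows of 0-255 values. This illustrates the general techniques being hypothesized, not Toshiba's actual firmware.

```python
# Two ways to halve an image's resolution, as guessed at above.
# Neither is known to be what the PDR-M4 actually does.

def downsample_average(img):
    """Halve each dimension by averaging every 2x2 block of pixels."""
    h, w = len(img), len(img[0])
    return [
        [
            (img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
             img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) // 4
            for x in range(w // 2)
        ]
        for y in range(h // 2)
    ]

def downsample_decimate(img):
    """Halve each dimension by keeping every other pixel and discarding the rest."""
    return [row[::2] for row in img[::2]]

# A 4x4 checkerboard of black and white pixels shows the difference:
# averaging preserves the overall brightness, decimation keeps only one phase.
checker = [[255 if (x + y) % 2 == 0 else 0 for x in range(4)] for y in range(4)]
print(downsample_average(checker))   # → [[127, 127], [127, 127]]
print(downsample_decimate(checker))  # → [[255, 255], [255, 255]]
```

On a fine checkerboard pattern, averaging yields mid-gray (the true average brightness), while decimation collapses the pattern to solid white -- exactly the kind of information loss that could make a skip-every-other-pixel mode look worse.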

Any comments?

Thanks Dave Carr

-- Dave Carr (davidlcarr@aol.com), July 06, 2000

Answers

I'm not exactly sure why what you described would happen, but I'll take a shot at it. First of all, a 1.92MP effective sensor only has about 1601x1201 single-color-filtered "pels", not 1600x1200 full 24-bit pixels. Each of those elements records color information only for the color filter that's placed above it. In the usual Bayer layout, each block of 4 has one red, one blue, and two greens (green is doubled because the eye is most sensitive to it). So once the camera combines the pels in 2x2 blocks, you really only have 800x600 actual 24-bit-color-depth pixels. (I think the two greens are averaged to provide a single 8-bit value like the other two colors -- a guess.)
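The combining step described above can be sketched like this, assuming the common Bayer layout (red and blue once per 2x2 block, green twice) and simple averaging of the two greens -- the real firmware's weighting is unknown:

```python
# A sketch of collapsing one 2x2 Bayer block into a single RGB pixel.
# Assumes a [[R, G], [G, B]] block layout and plain averaging of the
# two green pels; both are assumptions, not documented PDR-M4 behavior.

def bayer_block_to_pixel(block):
    """Combine a 2x2 Bayer block [[R, G], [G, B]] into one (R, G, B) pixel."""
    (r, g1), (g2, b) = block
    return (r, (g1 + g2) // 2, b)

print(bayer_block_to_pixel([[200, 120], [130, 40]]))  # → (200, 125, 40)
```

Applied over non-overlapping blocks, this turns a 1600x1200 mosaic of single-color pels into 800x600 full-color pixels, which is the "real" resolution being argued for here.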

It's my belief that in order to produce a 1600x1200 image, the camera uses the combining method mentioned above and then also creates intermediate pixels by sliding the 2x2 window over one pel at a time, so that each new pixel is formed from half the pels of each of the two real pixels it borrows from. Doing this in both the horizontal and vertical directions gives you one less than twice the number of "real" pixels in each dimension -- about four times as many output pixels as the sensor has real ones. So to pull this off you really need 1601x1201 pels in the sensor, which yields 800x600 real pixels from non-overlapping blocks and 1600x1200 pixels once the overlapping, interpolated ones are included. Every pixel except those in the outer rows and columns shares half its pels with each adjacent interpolated pixel.

So my guess is that in 800x600 mode you get the raw, real 800x600 pixels the sensor is capable of, and in 1600x1200 mode you get the massaged output of the sensor, with the smoothing effect of the interpolated pixels formed between each adjoining pair of real pixels. I think this interpolated output has smoother transitions from one pixel to the next, and because of that it actually looks better than the raw 800x600 mode.
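A one-dimensional sketch shows why the interpolation described above smooths things out: between each pair of real pixels, a new pixel is inserted as the average of its two neighbours, so n real pixels become 2n-1 pixels with halved step sizes. (This is a guess at the general technique, not Toshiba's actual algorithm.)

```python
# 1-D version of the interpolation idea: insert an averaged pixel
# between every pair of real pixels, giving 2n-1 output pixels.

def interpolate_row(row):
    """Turn n real pixels into 2n-1 pixels by inserting averaged neighbours."""
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)
        out.append((a + b) // 2)
    out.append(row[-1])
    return out

real = [0, 100, 200, 100]
print(interpolate_row(real))  # → [0, 50, 100, 150, 200, 150, 100]
```

The jumps between adjacent values are halved, which is exactly the "smoother transitions" effect being proposed as the reason the 1600x1200 output looks better.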

There is yet another way to create an 800x600 image: digital zoom. If things are done the way I described, digitally zoomed images should look nearly as smooth as the 1600x1200 images, because they use only 400x300 of the "actual" pixels -- formed from the 801x601 pels at the center of the sensor -- and interpolate those up to the final 800x600.

That might be a good way to test whether there is a difference. Try comparing some shots taken in 1600x1200 mode but digitally zoomed against some normal 800x600 mode shots of the same subject. If the digitally zoomed shots look smoother (better), that would explain it.

That's my guess, anyhow. Anybody else?

-- Gerald M. Payne (gmp@francomm.com), July 07, 2000.

