Geometry of Resolution: 1280 Image Resized to 640 vs. Original 640 Image -- Which Has Better Res?


Like many others here, after years of film photography, I am about to get a digicam and jump into digital photography. After studying the information on this and other excellent websites, I still have a mental stumbling block about resolution, so I'd appreciate any input on the following question.

Assume that you take two pictures with the same megapixel digicam. The subject is a tree, with a large, textured trunk and many leaves, that fills the entire image area. Pic #1 is taken at the 1280 size and pic #2 at the 640 size. After capturing the two images, you transfer them to your PC. Then you resize pic #1 down to 640. You leave pic #2 alone.

Question: If you view both images (the 1280 resized to 640 and the original 640) side by side on a high-res monitor, should there be any difference in resolution/sharpness between these two images? Would the answer be different if, instead of viewing the images, you wanted to print them? Assume that EVERYTHING ELSE is equal: same camera, same optics, lighting, exposure, same compression level, etc.

Am I correct that pic #1 (the 1280), because more pixels are devoted to CAPTURING the image, should look sharper after being scaled (not cropped) down to a smaller size (e.g., 640) than the same scene captured with fewer pixels in the first place?

Many thanks for your help.

Glenn K.

-- Glenn K. (lawman_911@hotmail.com), December 11, 1998

Answers

It is much sharper. This is easy to see with any sample image from, e.g., a D-600L. In the case of the Olympus, though, the SQ mode is murdered by high compression, so you do have to shoot the HQ shot and scale it on your PC.

-- Ben Jackson (ben@ben.com), December 13, 1998.

Glenn,

It seems to me that your question isn't so much about resolution as about image manipulation and compression. Resolution, somewhat strictly stated :), is simply how many of something there are in a given measure. In this case, 640 dots per image width vs. 1280 dots per image width. Since resolution is doubled along each axis, a 1280 x 960 pixel image contains four times as much information (pixels) as a 640 x 480 image does. It seems to me that a resized 1280-dot-wide image may or may not "look" better depending on how the program used to manipulate it performs the reduction.
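
To put rough numbers on that, here is the pixel arithmetic in a couple of lines of Python (just the two frame sizes under discussion):

    print(1280 * 960)                   # 1228800 pixels
    print(640 * 480)                    # 307200 pixels
    print((1280 * 960) // (640 * 480))  # 4, i.e. four times the information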

There are several ways this could be done, since there are 4 pixels in the original image for each single pixel needed to form the new image. A dumb routine might just pick a single pixel from each block, or average only a pair of diagonal pixels, to achieve greater speed. An intelligent routine would average the color values of all 4 pixels in each 2x2 block of the old image to create each pixel of the new image. This can look pretty good, since the tonal changes between adjacent pixels within each block are smoothed out. However, it also tends to give the final reduced image more abrupt tonal changes between adjacent pixels, or as some people might call it, more sharpness. This is really an illusion caused by the more abrupt tonal changes at object boundaries in the image, but it does look sharper. You'd probably get similar or perhaps better results using a sharpening tool in a graphics program on the higher-resolution image, since you could control exactly at what level pixels are converted to other colors to "sharpen" the edges of object boundaries while keeping a lot more unaffected pixels (INFORMATION) in your image.
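
For the curious, a minimal sketch in Python/NumPy of the two kinds of reduction described above (the random 960x1280 grayscale array is just a stand-in for a real capture):

    import numpy as np

    def downsample_pick(img):
        # Dumb routine: keep only the top-left pixel of each 2x2 block.
        return img[::2, ::2]

    def downsample_average(img):
        # Intelligent routine: average all 4 pixels of each 2x2 block.
        f = img.astype(np.float64)
        avg = (f[0::2, 0::2] + f[0::2, 1::2] +
               f[1::2, 0::2] + f[1::2, 1::2]) / 4
        return avg.astype(img.dtype)

    # Stand-in for a 1280x960 capture (NumPy shape is rows x cols).
    big = np.random.randint(0, 256, (960, 1280), dtype=np.uint8)
    print(downsample_pick(big).shape)     # (480, 640)
    print(downsample_average(big).shape)  # (480, 640)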

If you want the best image when printing, use the larger file with more information. INFORMATION is the key to resolution and better images. When your printer prints a 1280x960 image at 720 dpi at a size of 10"x8", the printer is getting about 128 (1280 pixels/10") image pixels per inch across. This works out pretty well, since it can now use about 5.625 (720/128) dots across the page to represent each pixel in the image. Your eye obliges somewhat and happily combines these dots to form an averaged tone. In actuality, since the image is 2-dimensional, you get about 33 to 34 dots on the page, in a block, to represent each pixel of the original image (5.625 across x 6 down, since 960 pixels/8" gives 120 ppi and 720/120 = 6). This is how printers with only 3 or 6 colors can seem to print thousands or millions(?) of tones.
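
Spelled out, the arithmetic above looks like this (printer resolution and print size assumed as in the example):

    printer_dpi = 720
    img_w, img_h = 1280, 960      # image pixels
    print_w, print_h = 10.0, 8.0  # print size in inches

    ppi_x = img_w / print_w       # 128 image pixels per inch across
    ppi_y = img_h / print_h       # 120 image pixels per inch down
    dots_x = printer_dpi / ppi_x  # 5.625 printer dots per pixel across
    dots_y = printer_dpi / ppi_y  # 6.0 printer dots per pixel down
    print(dots_x * dots_y)        # 33.75 printer dots per image pixel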

In fairness, some people prefer the look of a sharpened image when printed or viewed. Perhaps an image left at its original resolution and slightly sharpened would look best. It's all personal preference.

Some people think digital images look softer because more tonal values are recorded and smoother transitions occur from one dot to the next at object edges. Personally, I'm more of the mind that because there aren't enough pixels in the CCDs yet, the edges are blurred a bit: a single (averaged) pixel from the CCD is used to represent both the edge of an object and the background behind it. But that's just one man's opinions/observations. Judge for yourself.

-- Gerald M. Payne (gmp@francorp.francomm.com), December 13, 1998.


You may not be asking the right question here. Controlling the number of pixels may be less important than controlling file size, and the two are not the same thing. For web work, for example, you want the best-looking image for the least number of bytes. If that is the case, it is far (nay, dramatically) better to highly compress a 1280x960 image than to use a lightly compressed 640x480.
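
As a rough illustration, a sketch using the Python Imaging Library (the filename and quality settings are made up; actual sizes depend entirely on the image):

    import os
    from PIL import Image

    img = Image.open("capture.jpg")  # assume a 1280x960 capture

    # Option 1: keep all 1280x960 pixels, compress heavily.
    img.save("big_heavy.jpg", quality=30)

    # Option 2: throw away pixels first, compress lightly.
    img.resize((640, 480)).save("small_light.jpg", quality=90)

    for name in ("big_heavy.jpg", "small_light.jpg"):
        print(name, os.path.getsize(name), "bytes")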

-- Rick Griffen (rgriffen@vabch.com), January 14, 1999.
