Point and Shoot: reality “reproduced” or reality “rendered”?

Anhinga wing. Florida. Sony HX400V

Photo geek discussions of image quality often revolve around the presence or absence of artifacts in the image when viewed at high resolution and large sizes. The theory among serious photographers seems to be that a good digital image should have very few visible artifacts, no matter how big you blow it up.

Digital artifacts come in several flavors. One of the more obvious is color, tone, and detail smearing, often referred to as the watercolor effect. In areas of the image with very fine detail, the colors tend to run together in a muddy mix, and fine detail looks smeared, as though a wet brush had been dragged across it. This is especially evident in distant grass and foliage, and at the edges of the frame. Then there is posterization. Areas of smooth tone, like the human face, clothing, or the sky, take on a poster-like look, with visible edges between areas of tone that should blend into each other. Another is blocking, where jpeg compression creates a pattern of tiny blocks instead of smooth color gradations. This is generally accompanied by the jaggies…another jpeg compression artifact, which produces a step-like line where a smooth curve should be. Finally there is an artifact called over-sharpening, which produces hard edges and even halos (bright lines) along the edges of objects in the image, as well as contributing to the posterization effect.
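Posterization in particular is easy to demonstrate in code. The sketch below is a hypothetical illustration, not anything a camera actually runs: it quantizes a smooth 256-tone gradient down to a handful of tones, producing exactly the kind of hard steps between areas that should blend smoothly.

```python
# Posterization sketch: crushing a smooth tonal gradient down to a few
# output levels creates the visible "steps" described above. Pure
# Python, no real image data -- just a 1-D ramp of 8-bit tone values.

def posterize(values, levels):
    """Snap each 8-bit tone value (0-255) to the nearest of `levels` output tones."""
    step = 255 / (levels - 1)
    return [round(round(v / step) * step) for v in values]

smooth = list(range(256))        # a gradient with 256 distinct tones
banded = posterize(smooth, 4)    # crushed to only 4 tones

print(sorted(set(banded)))       # -> [0, 85, 170, 255]
```

A sky gradient rendered this way shows four flat bands with abrupt edges where the original had an imperceptibly smooth transition.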

In addition, small sensor cameras can suffer from mottling and color noise in smooth tone areas…especially in the sky. This produces a blotchy, freckled look where there should be a smooth expanse of blue. Color noise is especially easy to see in dark areas of the image. There might be little rods of red, green, and blue scattered in the shadows.

Generally speaking, none of these artifacts can be seen at normal viewing size or in prints under 8×10, though in an image where they are especially pronounced, the effect can be a general loss of subtlety. People might say the photographic image looks more like a painting than a photograph. Generally, though, you have to view the image blown up to nearly full resolution on a good high resolution monitor or LCD panel to see the artifacts. They can also show up clearly in large prints made from affected files.

This image from an older Point and Shoot superzoom shows most of the painterly artifacts discussed above.

It is a pretty standard criticism of Point and Shoot cameras and sensors that the images have too many artifacts. Some photographers will argue that the built-in processing engine in any camera that saves images only as jpeg files will produce an unacceptable level of artifacts…since many of them come from jpeg compression and less-than-subtle in-camera processing. This is why P&Ss that record images in the RAW format (unprocessed) are generally considered higher quality than cameras that do not.

There is a name for those photo geeks who are really hung up on the artifacts issue. They are called pixel peepers (since they blow images up until they can see the individual pixels) by those with a more relaxed attitude. Of course I am pretty sure the pixel peepers consider the rest of us to be something less than serious about our image quality.

I will admit to having gone through my pixel peeping phase. Only a few years ago, some P&S cameras had such complex and such obvious artifacts that it was very easy to be disappointed with the results for anything but casual use. The images really did look like bad paintings at anything bigger than your standard laptop screen size.

Recently though I have come to suspect that there is more to this artifacting issue than might be immediately apparent. I began to wonder if the artifacts in the best of today’s Point and Shoot cameras might be intentional…the result of the aesthetic engineers’ attempts to get the best performance out of the tiny sensors in Point and Shoot cameras.

Part of my suspicion is fueled by the undeniable fact that the image quality of Point and Shoot cameras, at least when images are viewed at reasonable sizes, has improved steadily over the past few years…yet the pixel level artifacts remain.

And part of my suspicion is fueled by the realization that all digital images are in fact closer to paintings than to conventional photographs. All digital images are renderings of reality, not reproductions.

I believe what we are observing in recent Point and Shoot camera generations is that the computing power and the sophistication of the processing engines (software) built into today’s cameras have gotten to the point where the jpeg renderings of the files for display are simply very, very good…so good they consistently fool the human eye into seeing more detail and more subtle color than is actually in the file.

For years, the stated purpose, or at least the underlying assumption, behind digital photography has been to improve the technology so that the camera can accurately capture, or record, the full range of light and dark, every subtle shade of color, and the finest detail of every texture that our eye can see in the world around us. And we have made great progress toward that goal.

However, the truth is that no matter how accurate our recording, to be of use, the data that we capture has to be displayed using a pattern of tiny glowing bits on a monitor or LCD panel, or transformed into a pattern of ink dots on paper that can be viewed by reflected light. The resolution and color depth of displays continues to improve, and printers to evolve, but we have to remember that, no matter what the camera records, we do not have an image until it is rendered for display.

And, of course, someone has to decide how the raw data is going to be translated into a file that will drive a display or printer. Most professional and many advanced amateur photographers want to be the one to make the decisions…admittedly subjective, aesthetic decisions…on how that translation is going to happen. They work with RAW files and process them at the full resolution and color depth the sensor provides, and only translate them for display or printing at the last possible moment.

But the fact is, of course, that no sensor made today can capture what the eye sees, and no display technology can display it. Therefore part of the process of translation is always to adjust the data captured to compensate for the limits of the sensor and then tailor that data to the limits of the display technology available. All with the goal, of course, of displaying what the eye saw, or at least what the mind (heart) intended.

That is what I have come to call rendering the image. All digital images today are rendered for display, in much the same way we understand that a painter renders the scene before his/her eyes or in his/her mind. We might use digital technology, but our photographs are as much paintings as the work of any impressionist, and actually use a very similar theory of imaging…breaking the image down into bits of color and pattern, and reassembling bits and patterns of color to represent what we saw. That is the essence, as I understand it, of impressionistic painting.

When the aesthetic engineers at today’s camera companies are faced with getting the most pleasing results out of a tiny sensor, they have to make decisions based on how the image will be displayed. Knowing the likely limits of resolution and size of the display, and the likelihood that the display will be digital itself, they have opted to program the camera to render the image for apparent detail and smooth tones at those sizes.

This requires a different approach to rendering than you might use in an idealized large sensor camera.

A really good painting produces the illusion of much more detail than is actually there. Walk up close to any painting and see how quickly the image dissolves into artifacts…how close do you have to be, in fact, to see the individual brush strokes and blobs of paint? Or to see that what looked like grass in all its glory was actually a swath of green paint with some clever strokes of yellow and black that tricked the eye into seeing the detail that is not there? How close do you have to be to see that the fully formed human face that you appreciated from 6 feet is actually a single brush stroke with a suggestion of eyes and mouth dabbed in?

Okay, so that is an extreme example…but I believe it captures the essence of the quality we are seeing in today’s best Point and Shoot cameras…especially in the jpeg files the cameras are designed to produce.

Perhaps the aesthetic engineers at Sony, to pick a company often criticized for their artifacty images, are not attempting to produce a smooth toned, finely detailed reproduction of the world through the lens, so much as they are attempting to render an image that, when viewed or printed at reasonable sizes, produces a satisfying impression of fine detail and smooth tone.

Admittedly, if you pixel peep, the artifacts are still visible, just as you can see the brush strokes and blobs of paint in a painting if you get too close, but with each generation of Sony Point and Shoot cameras, with increasing pixel count and processing power…as well as increased software sophistication…the rendering of reality has gotten finer, more detailed, more subtle…more satisfying.

I do not believe there is any other way to get satisfying performance out of a small sensor. We know, in selecting a Point and Shoot superzoom that we are making a compromise based on flexibility and compactness. No other camera can offer us an equivalent range in such a tiny package. That is the attraction. To get that means a small sensor…and satisfying image quality at reasonable viewing sizes from a small sensor requires an impressionistic rendering of the image. That is just a fact of life.

In fact, it is pretty miraculous, and evidence of great skill and dedication on the part of the artist-engineers, that a tiny 20mp sensor and a tiny computer in the camera can render such a high-quality image in a compressed format like jpeg, one that allows easy, fast file movement.

At the other extreme, at the true professional end of the photographic spectrum, we are seeing more and more high pixel count full frame sensor cameras…and more and more high resolution displays and printers. And ever increasing power in the desktop and laptop computers we use (even in tablets these days) to process the high resolution RAW files. That is the other way to produce satisfying renderings of reality…the only way if you are going to display images on 4K and higher resolution displays and at print sizes over, say, 24 inches. But even with the rich clean data of a big sensor, some kind of intelligent, intentional rendering of the image for display will always be required, whether it is done in-camera or after the fact.

Need visuals?

Green Heron at screen resolution. Sony HX400V

Detail at approximately 1 to 1.

Detail at approximately 4 to 1.

In the images above we have an example of pixel peeping. The top image is presented at screen resolution. By clicking on it you can view it at its full uploaded resolution of 2000×1500 pixels. You will see that at anything up to that size (and considerably larger, actually) the image looks great…excellent rendition of detail and color…certainly very satisfying. Until recently an HD computer monitor or LCD screen was 1920 pixels across, so this image would fill the screen. It would make a 10×8 inch print at 200 dpi…excellent quality.

The next image shows a small segment of the first, blown up so that you can see each pixel. That would be the equivalent of full screen view on a monitor with a resolution of 5184×3888 (twice the resolution of the highest resolution LCDs currently in production), or a print 25 inches wide. At that size the artifacts are just beginning to show. Still, from anything more than a foot away, the image on an HD screen or the 25 inch print would look amazingly detailed, smooth, and satisfying.

The final image is an even smaller segment of the first, now blown up to 4 to 1…four times full resolution. At this scale the artifacts are obvious…but it is the equivalent of a print 100 inches wide! Even at that scale, from more than 4 or 5 feet viewing distance, the image would still look almost as good as it does at screen resolution. Don’t believe me? Next time you are in an airport, take a really close look at one of those wall-sized images. 🙂
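The print-size arithmetic behind these comparisons is just pixel width divided by output resolution. A quick sketch, assuming the 200 dpi figure and the 5184×3888 (roughly 20mp) sensor dimensions used above:

```python
# Back-of-the-envelope print-size math for the pixel-peeping example.
# Width in pixels divided by print resolution (dots per inch) gives
# the print width in inches.

def print_width_inches(pixel_width, dpi=200):
    """Width of a print, in inches, at a given output resolution."""
    return pixel_width / dpi

print(print_width_inches(2000))      # the 2000 px upload -> 10.0 inches wide
print(print_width_inches(5184))      # full sensor width -> 25.92 inches
print(print_width_inches(5184 * 4))  # the 4-to-1 blow-up -> 103.68 inches
```

Which is where the "print 25 inches wide" and "print 100 inches wide" figures come from, give or take a bit of rounding.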

So, bottom line: the artifacts you see in Point and Shoot images when you pixel peep are necessary to the pleasing effect of the images viewed at normal viewing sizes. If you have opted for the convenience, the flexibility, the compactness of a Point and Shoot superzoom…just enjoy the results it is designed to produce. Do not pixel peep. The artifacts you do not see cannot hurt you…and you will get full enjoyment out of the images you bring back…images that you would be unlikely to get with any other camera!

One thought on “Point and Shoot: reality “reproduced” or reality “rendered”?”

  1. Well done. There is a lot to think about. A few years ago I worked for Pentax when the digital camera was introduced to the public. So much knowledge has expanded. For every door that science and engineering opens, there are 10 more doors behind it. I wonder what the “judges” who scream “too much artifact, I will not post your image on my site” would say to this thesis?
