Sensor Technology "Generations"

Amin

In many forums, we've come to apply the term "generation" to refer to each replacement of a sensor technology. It's a term that implies a significant leap in technology, and we've been trained by the camera manufacturers to think this is what is occurring. Obviously it is in their interest to have us believe that our current camera is hopelessly obsolete, in need of replacement by the next camera. Are they correct?

Before answering that question, one might ask, "Do I actually need a better sensor than the one I have now?" "How often do I miss a shot or fail to get the image I want because of a lack of dynamic range or poor signal-to-noise characteristics at a desired ISO?" It's the old "I never needed to use any film faster than ISO 100" argument, and there may be some validity to that. Let's assume, for the sake of this post, that I do in fact need a better sensor than the one I have right now. The next question is, "How much better is that 'next generation' sensor?"

That's where the manufacturers get tricky with us. The other day, I was listening to the podcast This Week in Tech, and they were having an interesting discussion about how, while hardware advances in computing occur exponentially (Moore's Law), software advances are less impressive. The example given was that smartphone hardware has become incredibly advanced, while smartphone operating systems remain, by comparison, immature.

As is the case with smartphones, the on-board processing chips in cameras see exponential gains in processing power over time. Those ever-more-powerful processors can be used to drive powerful in-camera software. Such software can increase the apparent dynamic range of JPEG files through "underexpose and push" processes (e.g., Canon Highlight Tone Priority, Olympus Shadow Adjustment Technology), which are sufficient to fool the dynamic range testing protocols of some of the more prominent technical review sites. That processing muscle, coupled with refined software, can also be used to apply increasingly sophisticated noise reduction and sharpening algorithms in camera, with resulting advances in signal-to-noise performance at any given ISO.
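
For the code-inclined, here's a minimal numpy sketch of the general "underexpose and push" idea. To be clear, this is a toy curve of my own invention, not Canon's or Olympus's actual algorithm; the scene values and the roll-off function are made up purely for illustration.

```python
import numpy as np

def expose(scene_luminance, exposure_stops):
    """Simulate a linear sensor: scale by exposure, clip at full well (1.0)."""
    return np.clip(scene_luminance * 2.0 ** exposure_stops, 0.0, 1.0)

def push_curve(linear, push_stops=1.0):
    """Brighten the underexposed data back up, rolling off the top end so
    the recovered highlight headroom survives. A toy curve, nothing more."""
    pushed = linear * 2.0 ** push_stops       # push everything up...
    return pushed / (1.0 + pushed - linear)   # ...then roll off highlights

# A scene whose brightest areas sit one stop above "normal" clipping.
scene = np.array([0.05, 0.18, 0.5, 1.0, 2.0])

normal = expose(scene, 0.0)                             # highlights clip
protected = push_curve(expose(scene, -1.0), push_stops=1.0)

print(normal)     # [0.05 0.18 0.5  1.   1.  ] -- 1.0 and 2.0 both clip to 1.0
print(protected)  # [~0.05 ~0.17 0.4 ~0.67 1.0] -- highlights stay distinct
```

The catch, of course, is that the pushed shadows carry correspondingly amplified noise, which is exactly why a dynamic range test that doesn't also track shadow noise can be fooled.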

To what extent do these JPEG processing advances account for the "generational" gains promised by camera companies? Here's a typical example:

Canon statement regarding the S90/G11 compared to the G10 said:
Canon’s new Dual Anti-Noise System combines a high sensitivity 10.0 Megapixel image sensor with Canon’s enhanced DIGIC 4 image processing technology to increase image quality and greatly improve noise performance by up to 2 stops.

Similar statements have been made by just about all of the camera manufacturers.

When confronted with such a claim, the first thing to consider is that this could mean just about anything. Even a poor processor with poor software can slather on noise reduction at the expense of detail. Sadly, this is often what exactly what a company means by "improved noise performance". Let's assume though, that the manufacturer is truly speaking of a significant gain in detail relative to noise. Where does such an advance occur? Is it the sensor? The processing? To take this particular Canon example, how much of the advance owes to that "high sensitivity 10.0 Megapixel image sensor" and how much to that "enhanced DIGIC 4 image processing technology"?
 
Here's the DxOMark analysis of the S90/G11 sensor's signal-to-noise performance compared to the earlier "generation" G10:

[Chart: DxOMark SNR comparison, Canon S90/G11 vs. G10]


At ISO 800, we're looking at about 1/3 of a stop of improvement for this new generation sensor, and that is typical of the advantage one gains in upgrading to the next generation of sensor. While manufacturers love to speak of 1-2 stop improvements, the improvements in the "usable" ISO range seen by those who shoot RAW almost never amount to more than half a stop.
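
If you're wondering how a figure like "1/3 stop" gets read off curves like these: take the SNR the old sensor delivers at the ISO of interest, then see how much further up the ISO scale the new sensor can go before it falls to that same SNR. Here's a quick sketch with made-up numbers standing in for the real measurements (these plots typically lose roughly 3 dB of SNR per stop of ISO):

```python
import numpy as np

# Hypothetical SNR-vs-ISO curves in the style of a DxOMark plot (18% grey
# patch, SNR in dB). These values are invented for illustration -- they are
# NOT the real G10/G11 measurements.
iso        = np.array([100, 200, 400, 800, 1600, 3200])
snr_old_db = np.array([38.0, 35.0, 32.0, 29.0, 26.0, 23.0])  # older sensor
snr_new_db = np.array([39.0, 36.0, 33.0, 30.0, 27.0, 24.0])  # newer sensor

def stops_of_advantage(target_iso):
    """How many stops higher can the new sensor's ISO go before its SNR
    drops to what the old sensor delivers at target_iso?"""
    old_snr = np.interp(np.log2(target_iso), np.log2(iso), snr_old_db)
    # SNR falls as ISO rises, so reverse the arrays to give np.interp
    # increasing x-coordinates.
    new_log_iso = np.interp(old_snr, snr_new_db[::-1], np.log2(iso)[::-1])
    return new_log_iso - np.log2(target_iso)

print(f"advantage at ISO 800: {stops_of_advantage(800):.2f} stops")  # 0.33
```

With this made-up 1 dB offset, the "new generation" buys you a third of a stop, which is about what the real curves above show.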

Let's see how Canon's APS-C signal-to-noise performance improved between 2004 and 2009, a period during which they changed sensors numerous times (DxOMark data):

[Chart: DxOMark SNR comparison, Canon 20D (2004) vs. 7D (2009)]


About half a stop of improvement at high ISO. Granted, they doubled the megapixel count over that period, but for a given print size, the noise performance barely changed over the course of five years of sensor development.
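
The "for a given print size" caveat matters because downsampling to a common output size averages noise away, and more megapixels means more averaging. DxOMark's "Print" view normalizes to an 8 MP reference; here's the arithmetic, using the cameras' nominal pixel counts:

```python
import math

# Under a simple white-noise assumption, averaging k pixels improves SNR
# by sqrt(k), i.e. 10*log10(k) dB. DxOMark's "Print" view normalizes to an
# 8 MP reference. Pixel counts below are the nominal specs.
def print_snr_gain_db(megapixels, reference_mp=8.0):
    return 10.0 * math.log10(megapixels / reference_mp)

for name, mp in [("20D (2004, 8.2 MP)", 8.2), ("7D (2009, 17.9 MP)", 17.9)]:
    print(f"{name}: {print_snr_gain_db(mp):+.1f} dB from normalization")
# 20D: +0.1 dB, 7D: +3.5 dB -- roughly a stop of "free" SNR for the 7D,
# which is how a doubled pixel count avoids a per-print noise penalty.
```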

Now let's see how four years of advances in sensor technology have enabled the latest APS-C sensors to close in on an old 35mm full-frame camera (DxOMark data):

[Chart: DxOMark SNR comparison, Canon 5D (2005) vs. 2009 APS-C cameras]


As you can see, the 2005 5D holds a 2/3 stop advantage over its 2009 competitors. While this is less than the 1-1.3 stop advantage it might be expected to have based on sensor size, consider that if sensor generations really brought advances of 1-2 stops per "generation", the old 5D classic would be left in the dust by these up-and-comers.
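
Where does that 1-1.3 stop expectation come from? At equal f-number and shutter speed, total light gathered scales with sensor area, so the expected advantage is log2 of the area ratio. A quick sketch using nominal dimensions (exact figures vary a little by model):

```python
import math

# Expected SNR advantage in stops = log2(sensor area ratio), at equal
# f-number and shutter speed. Dimensions are nominal.
full_frame  = 36.0 * 24.0    # 5D, ~864 mm^2
aps_c_canon = 22.3 * 14.9    # Canon APS-C, ~332 mm^2
aps_c_nikon = 23.6 * 15.8    # Nikon/Pentax APS-C, ~373 mm^2

for name, area in [("Canon APS-C", aps_c_canon), ("Nikon APS-C", aps_c_nikon)]:
    print(f"5D vs {name}: {math.log2(full_frame / area):.2f} stops expected")
# 5D vs Canon APS-C: 1.38 stops; 5D vs Nikon APS-C: 1.21 stops
```

So a measured gap of only 2/3 stop means the newer APS-C sensors clawed back half a stop or more through technology, consistent with gradual progress rather than 1-2 stops per "generation".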
 
Sensor technology has improved over time, and every time someone presents proof that we've reached the limits of the "laws of physics", sensor technology seems to move a little bit ahead. Over a sufficient period of time, the advances can be quite noticeable. For example, the 2009 Micro 4/3 sensors match the signal-to-noise performance of the 2003 APS-C sensors (DxOMark data):

[Chart: DxOMark SNR comparison, Olympus E-P1 (2009) vs. Canon 10D (2003)]


Though it seems modest, that is a real advance considering the relative difference in sensor size between those systems and the doubling of the megapixel count in the E-P1 relative to the 10D.
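
To put a rough number on "real advance": the Four Thirds sensor in the E-P1 starts out about 0.6 stops behind the 10D's APS-C sensor on area alone, while dividing that smaller area among nearly twice as many photosites. Nominal figures (spec sheets vary slightly):

```python
import math

# Nominal sensor dimensions and pixel counts.
area_10d = 22.7 * 15.1      # Canon 10D APS-C sensor, mm^2
area_ep1 = 17.3 * 13.0      # Olympus E-P1 Four Thirds sensor, mm^2

print(f"area deficit: {math.log2(area_10d / area_ep1):.2f} stops")  # ~0.61

# Per pixel, the gap is larger still: less area, more photosites.
pixel_10d = area_10d / 6.3e6     # 6.3 MP
pixel_ep1 = area_ep1 / 12.3e6    # 12.3 MP
print(f"per-pixel deficit: {math.log2(pixel_10d / pixel_ep1):.2f} stops")  # ~1.6
```

Matching the older APS-C camera despite those handicaps is what makes the E-P1 result a genuine, if gradual, technology gain.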

The bottom line, as I see it, is that advances in sensor technology occur gradually over time. From the standpoint of a RAW shooter, the change in sensor technology in going from one camera to its replacement is usually incremental rather than generational. Something to keep in mind the next time a manufacturer's exaggerated claims have us reaching for the credit card.
 
Really thoughtful post, Amin. I especially like the way you think your rational argument will have any impact on my GAS! ;)

As you mention, the more crucial thing to consider is the lack of standards for so many aspects of the photographic process. Whether it's VR stop gain, noise-reduction stop gain, or any of the myriad other claims made by manufacturers, it's always best to consider actual RAW or JPEG shots from shooters before making those judgments. Overzealous marketing departments are responsible for most of the wild claims. And then there is the matter of software giving images an extra boost before you even touch them: some cameras sharpen a bit more, apply lens corrections, or punch up the color. It's become quite the three-card monte game.

Then there are sensors that just flat out behave differently: CCD vs. CMOS vs. backlit vs. Foveon, etc. That's a whole 'nother article. But considering the original question, "Do I actually need a better sensor than the one I have now?", the answer is of course, YES! Perhaps it won't trounce last year's model, but the gains made in low-light capability in the DSLR arena have changed the game there incredibly. You almost never need flash anymore if you don't want it. So I believe the bar has been raised for small sensors as well. Perhaps once consumers catch on to the Megapixel Myth, and I think they've already begun to do so, we'll be able to achieve less noisy, low-light small sensors as well.
 
Thanks, Underneath. We've seen a couple of companies hold their ground or reduce megapixel counts now, e.g., LX2 to LX3 and G10 to G11. Even there, the gains in low-light performance have been quite modest. Those modest increments add up, though.
 
I think many users are catching on to the megapixel myth. Excellent write-up, Amin.

In my line of work, it's about perception. The customer doesn't really care whether it's in the hardware or in the software. They only care about the thing they're currently observing or using; whatever else is going on in the background, they want the action at hand to be the fastest and most responsive.

This is why I think camera companies are going more toward software lens corrections, improved dynamic range through software, and so on. Of course, many of these same things can be done if we capture RAW. But the majority don't use RAW; the average user shoots JPEG. The software in the camera has direct access to the RAW data, and performing advanced algorithms on the RAW image takes quality software and a fast processor.

The advantage of RAW is that advanced algorithms can be run on a PC much faster than they can be run in camera. A dual-core 2 GHz+ PC with advanced software will do a much better job of producing a cleaner image with more dynamic range than the camera can. Cameras have limited RAM, limited processor cycles, and limited algorithms. If we face facts, though, when customers have the choice between convenience and superb image quality, convenience will win most of the time, and image quality will suffer.
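
To make that concrete, here's a toy numpy/scipy comparison: a cheap one-pass box blur (the sort of per-shot budget an in-camera pipeline has) against a heavier edge-preserving median filter that a desktop CPU can easily afford. Neither is any manufacturer's actual algorithm; it's just meant to show what the extra computation buys.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
scene = np.zeros((200, 200))
scene[:, 100:] = 1.0                                 # a hard vertical edge
noisy = scene + rng.normal(0.0, 0.15, scene.shape)   # sensor-like noise

fast = ndimage.uniform_filter(noisy, size=5)  # separable box blur: very cheap
slow = ndimage.median_filter(noisy, size=5)   # median: far more compute

for name, img in [("noisy", noisy), ("box blur", fast), ("median", slow)]:
    flat_noise = img[:, :80].std()                           # lower is better
    edge_step = np.abs(np.diff(img, axis=1))[:, 99].mean()   # higher is sharper
    print(f"{name:9s} noise={flat_noise:.3f}  edge step={edge_step:.2f}")
# Both filters cut the noise by a similar amount, but the box blur smears
# the edge (step ~0.2) while the median largely keeps it (~0.9). Detail
# retention is what the extra processing buys.
```

And the camera has to do its version of this for every shot, in a fraction of a second, on a battery; the PC doesn't, which is exactly the asymmetry you describe.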
 