An A/D is, if you don't remember, an Analog to Digital converter--the part of your camera that turns your pixel signal into bits and bytes.
In a typical camera you have:
1--A sensor--for this post, one with 10,000,000 pixels (10MP) that collect the photoelectrons light knocks out of the silicon that the sensor is made from.
2--The sensor readout electronics--depending on the number of photoelectrons held in the pixel, this produces a voltage that ranges from a few hundred microvolts (no light, just noise, black-black) to 1 volt (full to the top with photoelectrons, about to saturate and bloom, white-white).
3--The sensor post amplifier--this turns on when you change your ISO gain. At the lowest ISO (50, 100, or 200, depending on what ISO number the marketing folks decide will sell the most cameras this year) there is 1:1 amplification. This is the true and only ISO value. With film, ISO differences are real. With digital it's just jack up the volume--in steps: 1:1 at ISO 100 up to 16:1 at ISO 1600.
4--The A/D--without getting into the fine points of digital arithmetic, a typical 12 bit A/D takes your post amplifier voltage and gives it a number that ranges from 0 to 4095. This number takes up (surprise, surprise) 12 bits on your memory card. Which isn't much until you remember you have to store 10,000,000 of these byte-and-a-half numbers for each uncompressed RAW on your memory card.
Over the relatively few years (7) since I bought my first digital camera, I've handled or owned cameras where the A/Ds have gone from 8 to 14 bits (x64), the megapixels from about 250,000 (cheap web cams) to 18,000,000 (my friend's 7D) (x72), and memory cards from 16,000,000 bytes (came with the first camera) to 8,000,000,000 (x500).
These improvements are not random--without all of them taken together we would be looking at serious problems in digiphoto land.
5--And finally, all the digital stuff--hardware, firmware and software--ultimately turns those A/D numbers into bright or dark pixels on your computer monitor or widescreen TV.
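To make the chain concrete, here's a minimal sketch in Python of the whole trip from photoelectrons to A/D number. The full-well capacity, saturation voltage and gains are this post's imaginary-camera values, not any real sensor's spec sheet.

    # Sketch of the analog chain: photoelectrons -> voltage -> 12 bit number.
    # All constants are this post's imaginary 10MP camera, not a real spec.
    FULL_WELL = 64000    # photoelectrons at saturation
    V_SAT = 1.0          # readout voltage at full well, in volts
    ADC_LEVELS = 4096    # a 12 bit A/D: numbers 0 to 4095

    def adc_count(electrons, iso_gain=1):
        """Turn a pixel's photoelectron count into an A/D number."""
        volts = (electrons / FULL_WELL) * V_SAT   # readout is linear
        volts = min(volts * iso_gain, V_SAT)      # post amplifier, clipped
        return min(int(volts / V_SAT * ADC_LEVELS), ADC_LEVELS - 1)

    print(adc_count(64000))              # full well at base ISO -> 4095
    print(adc_count(4000))               # dim pixel -> 256
    print(adc_count(4000, iso_gain=16))  # same pixel at ISO 1600 -> 4095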
First little secret you won't find in your camera manual: every digital sensor ever made has a RAW mode. If it didn't, camera engineers wouldn't be able to even start designing a camera. Whether or not you find RAW mode in the camera menu is another matter.
My first camera, an Olympus 3020 ($600), didn't have it in the menu. At first I didn't care, since I didn't know RAW modes existed. Then I began reverse engineering my 3020, got weird blue sky noise numbers that were too low, and went on to the forums to ask the experts what I was doing wrong. The experts had a stack of reasons why my noise readings could be too high, but no one had a convincing argument for why they were so low. Except that I wasn't using my nonexistent RAW mode, something they claimed gave accurate noise numbers.
Since buying a new and expensive camera with RAW mode to settle an Internet argument wasn't in the budget, I worked out a method of correcting my jpeg numbers. That procedure brought my measured dark shadow noise numbers much closer to theory but did nothing to explain my too-good-to-be-true blue sky noise numbers.
My reverse engineering project would have ended on that mystery if, six months later, I hadn't run across a posting about a Russian hacker who'd worked out the procedure and written the DOS program needed to unlock RAW on the 3020. With bated breath and some expectation, I redid my test images, only to get numbers that closely matched my corrected jpeg numbers.
Using RAW was not the magic solution, although it was satisfying to see my jpeg fudge factors were correct. It would still be a mystery if I hadn't discovered the 3020 sensor spec sheet, which explained all. It'll be worth another blog posting once I find the hard disk from the computer I was using then and rig it up so I can pull off my copy of the data sheet. A Japanese version might still be around, but the English version disappeared from the sensor manufacturer's (Sony) website years ago.
Back to this posting. My 12 bit A/D has 4000 levels (4096 to be exact, but let's keep the math easy). If I take a perfectly exposed shot where the pixels of the brightest highlights hold 64,000 photoelectrons--the capacity of our imaginary camera--my voltage at the A/D is 1 volt and my output fills all 4000 levels. To fit everything in, I must assign 16 photoelectrons to each level.
If I didn't have a sensor post amp, and if I upped the shutter speed to underexpose a stop (how the sensor sees ISO 200), I would end up with 1/2 volt and be using only 2000 of the 4000 A/D levels. And so on, until at ISO 1600 I have 1/16 of a volt and 256 levels.
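If you like seeing the arithmetic spelled out, here's the same bookkeeping as a little Python loop, using this post's made-up numbers:

    # Each stop of underexposure halves the A/D voltage, and with it
    # the number of the 4096 levels you actually use (no post amp).
    ADC_LEVELS = 4096

    for stops, iso in enumerate([100, 200, 400, 800, 1600]):
        volts = 1.0 / 2 ** stops          # each stop halves the voltage
        levels = int(ADC_LEVELS * volts)  # fraction of the A/D range used
        print(f"ISO {iso:4d}: {volts:6.4f} V -> {levels} levels")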
At first glance this may look OK. Monitors have 256 levels, so each display level gets its own A/D level--a good fit.
Doesn't work out that way. Everything, starting with the demosaicing firmware in the camera that calculates the red, green and blue channels, on out to the noise reduction routines in the RAW converter, needs far more levels to make its digital calculations accurately. Remember, beyond the A/D your signal is only bits and bytes, and everything now depends on accurate calculation.
How many more levels? When I want to end up with a truly polished image, I work in 16 bit arithmetic when I do the RAW conversions.
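Here's a quick, hedged demonstration of why the extra levels matter: run a smooth tonal ramp through one edit-and-undo round trip at 8 bits and at 16 bits and count how many distinct tones survive. (The exact edit is arbitrary; the rounding loss is the point.)

    # Quantize a smooth ramp, apply an edit and its undo, count survivors.
    import numpy as np

    ramp = np.linspace(0.0, 1.0, 1000)   # a smooth tonal gradient

    def surviving_tones(levels):
        q = np.round(ramp * (levels - 1))    # quantize to the bit depth
        q = np.round(q * 0.2) / 0.2          # a darkening edit, then undone
        return len(np.unique(q))             # distinct tones left

    print(surviving_tones(256))      # 8 bit: most tones collapse -> banding
    print(surviving_tones(65536))    # 16 bit: the ramp comes through intact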
So that is what ISO does. It fills more levels in the A/D so the rest of the system, in camera and out of camera, has the data needed to do its calculations. No less. And no more.
Wednesday, November 18, 2009
I often do a Google search when I'm writing blog posts. Usually it's to check a fact, formula or site HTML. But sometimes I hit on something new that causes me to revise what I plan to post.
Since decision time is galloping closer--the Young Shakespeare Players' dress rehearsals start on Friday--I must decide how best to photograph them. It's the last good time to show up with a camera. Then it's time off for the Thanksgiving weekend, followed by two weekends of performances before the Julius Caesar cast disbands.
But instead of posting test shots of real people shot at a show opening as I promised, I'll be hitting you with more posts on theory. This time I discovered a new site: http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/index.html
If the sight of a mathematical formula immediately sends you off to find a celebrity website, you may want to skip this one. But if you are mildly mathematically inclined like me, the site has the best explanation of the intricacies of camera noise I've found so far. It confirmed some of my suspicions, explained some of the mysteries I've worked on, and set me straight on some matters I've gotten flat out wrong.
Like the number of photoelectrons a sensor can hold. My rule of thumb of 1,200 photoelectrons per square micron of pixel is too small. That number still fits the small sensors I've tested before. But with larger and better-made sensors, such as the one in my D60, there is room for far more photoelectrons and far more S/N.
Not that I won't be blogging about the show. It was put on by a group of collectors of found photographs--antique or just plain old photos you find in flea markets or garage sales.
At the opening the speaker was a well-known collector of folk art from St. Louis. His talk was on the cream of his photo collection--the part that has been on display in a number of art museums. Afterwards he asked me to send him some of the photos I took during his talk. Another reason to work out how best to clean up low light images.
So far I've been concentrating on how good a S/N I can obtain from the D60. I've been ignoring the other half of that question: how much S/N do I need?

The image of the Declaration of Independence provides some insight. (Click on it for a larger image.)
It was manufactured by taking a well exposed image and superimposing a gradient of Gaussian noise on top. The S/N varies from less than one on the left to eight on the right. From it you can see you don't need as much S/N to bring out the fine detail as you might have thought.
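For anyone who wants to build a test image like this themselves, here's a hedged numpy sketch of the basic recipe--a fixed image with a left-to-right noise ramp. It's my reconstruction of the idea, not the exact script behind the figure.

    # Superimpose a gradient of Gaussian noise so S/N runs from
    # about 0.5 on the left to 8 on the right.
    import numpy as np

    h, w = 400, 600
    image = np.full((h, w), 0.5)    # stand-in for a well exposed image, 0..1
    snr = np.linspace(0.5, 8.0, w)  # target S/N for each column
    sigma = image.mean() / snr      # noise sigma that hits that S/N
    noisy = np.clip(image + np.random.normal(size=(h, w)) * sigma, 0.0, 1.0)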
Friday, November 13, 2009
Six years ago I discovered both the challenges of reverse engineering a digital camera to find out how it was made, and the Internet photography forums where you could enlighten the world about what you discovered. Or thought you discovered. The Internet was just taking off. The few photo forums around then were full of discussions, spirited discussions and outright flame wars. A wild and sometimes informative time.
I fell into a polite disagreement with someone about dynamic range or noise or Ansel Adams' zone system or all three--I don't remember the details. To prove my point I decided I needed to experiment. With a series of photographs of an accurately printed zone system chart and some Photoshop magic, I would win the next round of discussions and establish myself as a photography guru to reckon with. (Naivete, thy name is Internet Newbie.)
To accomplish this impossible dream I called around to the local camera stores. Only the Camera Company had anything close to what I wanted: for a mere $160-plus I could buy a calibrated 21 zone Kodak photographic step tablet No. 2.
My reply was, "You gotta be kidding. There must be something cheaper. I need this to settle an argument in an Internet forum."
Turns out they had the step tablet in stock because a grad student had special-ordered it and then never came back to buy it. Since some money was better than no money for something that had been sitting around for years, the owner decided that if I came up with $25 the tablet was mine.
$25 was more than I wanted to spend, but...hey, who else but a true Internet guru would own a calibrated Kodak 21 zone step tablet No. 2? If I could slip that fact into my postings it would add a touch of cachet. Didn't work out that way, but over the years I've wasted many hours playing with the step tablet, so I must have gotten my money's worth.
This is my latest setup:
The step tablet consists of 21 neutral density filters printed on a transparent strip. Their optical density ranges from 0.05, almost transparent, to 3.0, transmitting 1/1000 of the light. To use it, I tape it to the black cardboard holder, which slips into the box in the lower picture. For a source, the white foamboard is lit from outside to make a diffuse and evenly illuminated background.
With the camera on the tripod, I drape the black T-shirt over it as a drop cloth. Any stray light overwhelms the transmitted light of the more optically dense strips. This shows up as an offset in the ImageJ graph, where the low-transmitting strips aren't close to zero.
Then I set the camera in manual mode and adjust the exposure so the first few zones are overexposed. From there it's a simple matter to increase both the ISO and the shutter speed to take a series of noise profiles at a constant exposure.
For the record, you don't need this or any other tablet or chart to do the experiment. You can take photos of a white card or wall at various exposures to make them as dark or light as you want. The tablet is just convenient. And it, along with ImageJ, makes neat charts for the blog.
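If you'd rather see the densities as light, here's the conversion. Transmission is T = 10^-D, so assuming the usual 0.15 density increments for a 21 step tablet, each step is about half a stop and the whole strip spans about ten stops:

    # Optical density to transmitted light: T = 10**(-D).
    # Densities assume the standard 0.15 increments of a 21 step tablet.
    import math

    for i in range(21):
        d = 0.05 + 0.15 * i           # 0.05 up to 3.05
        t = 10 ** -d                  # fraction of light transmitted
        stops = math.log2(1 / t)      # stops down from fully clear
        print(f"D={d:4.2f}  T={t:7.5f}  {stops:5.2f} stops down")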
If you want to do the experiment you will need one more free program, ufraw. It's the RAW converter that comes with GIMP, the free version of Photoshop from the Linux people. Or you can download a standalone version from here: http://ufraw.sourceforge.net/Install.html
It supports far more versions of RAW than the commercial RAW converters, including the CHDK hacked versions. With its latest reincarnation, its graphic interface is easier to use than it used to be. It still doesn't do batch conversions, but I'm not complaining. It's free, and it's also the only RAW converter I've found that does linear RAW conversions.
What's so important about that? In the last post I mentioned that once a sensor's data is turned into bits and bytes, there are many software tricks camera folks can do to hide and mask the true noise. The most common is gamma conversion. It's important and usually necessary, but it completely changes how the image and its noise look.
At a glance, you can see the difference between the two noise profiles. The image in the center is lighter, with a greater dynamic range--a clear advantage over the darker image on the far right.
The advantage shifts when you compare the two graphs. The noise is lower in the top graph, the noise profile of the darker image, and it decreases as the steps become darker. In the lower graph, from the middle image, the noise becomes greater as the steps darken.
So which is better? Less noise with less dynamic range, or the other way around?
Neither. Both graphs are from the same RAW file, taken at ISO 800 with my friend's Canon 5D--one of the lowest noise cameras around. The only difference was how they were processed by the ufraw converter. The darker image is a linear image with no gamma correction. The lighter one has a gamma correction of 2.2.
The linear noise profile is how the sensor sees the world. Close down the lens a stop and you have half the light and half the number of photoelectrons. This creates half the voltage for the A/D (Analog to Digital converter, the hunk of electronics in the camera that turns the sensor signal into bits and bytes). That's the definition of linear: double or halve what you put in; double or halve what you get out.
Gamma correction is non-linear. Why is that important? Your eye-brain system is non-linear too. Your night vision and response to low light are much better than your daylight vision. Microsoft thinks a gamma of 2.2 is the correct correction. Apple says 1.8. Your real gamma as you read this depends on your eyesight, lighting conditions and what you had for breakfast this morning.
Since photon shot noise is in the light itself, the less the light, the less the noise. That's what you see in the linear graph. With a gamma correction you are brightening the darker steps. Another way to look at it: you are amplifying your sensor signal with software, just as you do with hardware when you set the camera to a higher ISO setting.
This amplifies the noise. It also amplifies the signal an equal amount. So the S/N ratio is the same.
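You can convince yourself of this with a few lines of numpy. A patch with a signal of 50 and a noise sigma of 5 has an S/N of 10, and no amount of software gain changes that:

    # Software gain scales signal and noise together; S/N stays put.
    import numpy as np

    rng = np.random.default_rng(0)
    patch = 50 + rng.normal(0, 5, 100_000)   # signal 50, noise 5: S/N = 10
    for gain in (1, 4, 16):
        boosted = patch * gain
        print(gain, round(boosted.mean() / boosted.std(), 1))   # ~10.0 each time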
It's the S/N ratio that has meaning in an image, not the noise alone. The distinction is important. While it may sound like a quibble, if you don't distinguish between the two, the noise alone can lead you astray.
How far astray? As an example, this is what happened when we compared the 7D, the 5D, and my D60 on Friday.
With photon shot noise, the measurement followed theory closely.
At ISO 800 the full frame 5D had an S/N of 100 when its sensor was just about to saturate. It had collected 10,000 photoelectrons in its 72-square-micron pixel. My D60 had an S/N of 66 with its smaller 1.5-crop sensor. And the 7D, with its 18,000,000 pixels jammed into a slightly smaller 1.6-crop sensor, had an S/N of 57.
No surprises here. With photon shot noise the cameras behaved just as theory predicted.
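The theory in question is Poisson statistics: the shot noise in N collected photoelectrons is the square root of N, so S/N = N/sqrt(N) = sqrt(N). Working backwards from the measured ratios (the D60 and 7D electron counts below are my back-of-the-envelope figures, not direct measurements):

    # Photon shot noise is Poisson: noise = sqrt(N), so S/N = sqrt(N).
    import math

    for camera, electrons in [("5D", 10_000), ("D60", 4_400), ("7D", 3_250)]:
        print(f"{camera}: S/N = {math.sqrt(electrons):.0f}")   # 100, 66, 57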
When it came to true camera noise--the noise at the bottom of the graph where there is almost no light--the results were different. My D60's noise was identical to his 5D's noise, which delighted and surprised me. My friend's brand new 7D looked to be twice as noisy as the other two cameras, something that didn't make him grin wildly.
After a closer look at the data on Saturday morning, I called my friend with better news. For reasons I haven't worked out yet, the data from the two Canon cameras wasn't completely linear. This amplified their noise enough to skew their numbers.
With the corrections, the 5D is the quietest of the three cameras, the 7D is a close second and my D60 is about twice as noisy as the other two.
A mild disappointment, but not a surprising one. The Canon CMOS sensors have electronics built into each pixel to control and reduce the noise. That explains their factor-of-two noise advantage.
And that doesn't mean my D60 is a bad camera. According to the astrophotography websites, where they really worry and know about noise, the 5D's real camera noise is equivalent to 3-5 photoelectrons. So with the high estimate of 10 photoelectrons for my D60, I need to collect only 100 photoelectrons in an exposure for the photon shot noise to equal the camera noise.
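The two noise sources add in quadrature, which is where that 100 photoelectron figure comes from: shot noise sqrt(N) matches a 10 electron camera noise exactly when N = 10 squared = 100. A sketch:

    # Shot noise and camera (read) noise add in quadrature.
    import math

    READ_NOISE = 10   # electrons, the high estimate for the D60 above

    for n in (25, 100, 400, 1600):
        shot = math.sqrt(n)
        total = math.hypot(shot, READ_NOISE)
        print(f"N={n:4d}  shot={shot:4.1f}  total={total:5.1f}  S/N={n / total:5.1f}")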
It'd be nice to own a full frame camera, but then we are talking big bucks for both the camera body and lenses big enough to cover a full frame sensor. I can live with what I have.
So my next post will feature real pictures, where I push my camera, lenses and noise reduction programs as far as they can conveniently go. Those are the questions that prompted these posts on the theory and practice of camera noise.