Thursday, November 19, 2009

Let's talk about A/Ds and ISO

An A/D is, if you don't remember, an Analog to Digital converter. The part of your camera that turns your pixel signal into bits and bytes.

In a typical camera you have:

1--a sensor--for this post, one with 10,000,000 pixels (10MP) that collect the photoelectrons light knocks out of the silicon that the sensor is made from.

2--the sensor readout electronics--depending on the number of photoelectrons held in the pixel, this produces a voltage that ranges from a few hundred microvolts (no light, just noise, black-black) to 1 volt (full to the top with photoelectrons, about to saturate and bloom, white-white)

3--the sensor post amplifier--this turns on when you change your ISO gain. At the lowest ISO (50, 100, or 200, depending on what ISO number the marketing folks decide will sell the most cameras this year) there is 1:1 amplification. This is the true and only ISO value. With film, ISO differences are real. With digital it's just jacking up the volume--in steps: 1:1 at ISO 100 up to 16:1 at ISO 1600.

4--the A/D--without getting into the fine points of digital arithmetic, a typical 12 bit A/D takes your post amplifier voltage and gives it a number that ranges from 0 to 4095. This number takes up (surprise, surprise) 12 bits on your memory card. Which isn't much until you remember you have to store 10,000,000 of these byte-and-a-half numbers for each uncompressed RAW on your memory card. (There's a sketch of the whole chain after this list.)

Over the relatively few years (7) since I bought my first digital camera, I've handled or owned cameras where the A/Ds have gone from 8 to 14 bits (x64), the megapixels from about 250,000 (cheap web cams) to 18,000,000 (my friend's 7D) (x72), and memory cards from 16,000,000 bytes (came with the first camera) to 8,000,000,000 (x500).

These improvements are not random--without all of them taken together we would be looking at serious problems in digiphoto land.

5--And finally, all the digital stuff--hardware, firmware and software--ultimately turns those A/D numbers into bright or dark pixels on your computer monitor or wide screen TV.
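If you'd rather see the chain as code, here's a toy Python sketch of steps 1 through 4. All the numbers--the 64,000 electron full well, the 1 volt swing, the 12 bit A/D--belong to this post's imaginary camera, not any real sensor:

```python
# Toy model of steps 1-4 for this post's imaginary camera.
# Assumed numbers: 64,000 e- full well, 1 V full-scale readout, 12-bit A/D.
FULL_WELL = 64_000          # photoelectrons at saturation (step 1)
FULL_SCALE_VOLTS = 1.0      # readout voltage at full well (step 2)
ADC_MAX = 2**12 - 1         # 12-bit A/D tops out at 4095 (step 4)

def pixel_to_counts(photoelectrons, iso_gain=1):
    """Photoelectrons -> readout volts -> post amp -> A/D count."""
    volts = photoelectrons / FULL_WELL * FULL_SCALE_VOLTS   # readout
    volts = min(volts * iso_gain, FULL_SCALE_VOLTS)         # post amp, clipped
    return round(volts / FULL_SCALE_VOLTS * ADC_MAX)        # A/D

for pe in (0, 16, 1_000, 32_000, 64_000):
    print(f"{pe:>6} e-  ->  {pixel_to_counts(pe):>4} counts")
```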

First little secret you won't find in your camera manual. Every digital sensor ever made has a RAW mode. If they didn't, camera engineers wouldn't be able to even start designing a camera. Whether or not you find RAW mode in the camera menu is another matter.

My first camera, an Olympus 3020 ($600), didn't have it in the menu. At first I didn't care since I didn't know RAW modes existed. Then I began reverse engineering my 3020, got weird blue sky noise numbers that were too low and went on to the forums to ask the experts what I was doing wrong. The experts had a stack of reasons why my noise readings could be too high, but no one had a convincing argument why they were so low. Except that I wasn't using my nonexistent RAW mode, something they claimed gave accurate noise numbers.

Buying a new and expensive camera with RAW mode to settle an Internet argument wasn't in the budget. So I worked out a method of correcting my jpeg numbers. That procedure brought my measured dark shadow noise numbers much closer to theory but didn't do anything to explain my too-good-to-be-true blue sky noise numbers.

My reverse engineering project would have ended on that mystery if, six months later, I hadn't run across a posting about a Russian hacker who'd worked out the procedure and written the DOS program needed to unlock RAW on the 3020. With bated breath and some expectation, I redid my test images only to get numbers that closely matched my corrected jpeg numbers.

Using RAW was not the magic solution, although it was satisfying to see the jpeg fudge factors were correct. It would still be a mystery if I hadn't discovered the 3020 sensor spec sheet, which explained all. Be worth another blog posting once I find the hard disk from the computer I was using then and rig it up so I can pull my copy of the data sheet off it. A Japanese version might still be around, but the English version disappeared from the sensor manufacturer's (Sony) website years ago.

Back to this posting. My 12 bit A/D has 4000 levels (4096 to be exact, but let's keep the math easy). If I take a perfectly exposed shot where the pixels of the brightest highlights have 64,000 photoelectrons in them--the capacity of our imaginary camera--my voltage at the A/D is 1 volt and my output spans all 4000 levels. To fit everything in I must assign 16 photoelectrons to each level.

If I didn't have a sensor post amp, and if I upped the shutter speed to underexpose a stop (how the sensor sees ISO 200), I would end up with 1/2 volt and be using only 2000 of the 4000 A/D levels. And so on, until at ISO 1600 I have 1/16 of a volt and 256 levels.
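Here's the same arithmetic as a few lines of Python, so you can watch the levels disappear a stop at a time (using the exact 4096 count, numbers still from our imaginary camera):

```python
# With no post amp, each stop of underexposure halves the voltage
# and the number of A/D levels you actually use.
ADC_LEVELS = 4096   # 12-bit A/D, exact count this time

for stops, iso in enumerate((100, 200, 400, 800, 1600)):
    volts = 1.0 / 2**stops                  # 1 V at base ISO, halved per stop
    used = ADC_LEVELS // 2**stops           # levels actually filled
    print(f"ISO {iso:>4}: {volts:6.4f} V -> {used:>4} of {ADC_LEVELS} levels")
```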

At first glance this may look OK. Monitors have 256 levels, so each display level gets its own A/D level--a good fit.

Doesn't work out that way. Everything, starting with the demosaicing firmware in the camera that calculates the red, green and blue channels for the colors, on out to the noise reduction routines in the RAW converter, needs far more levels to do its digital calculations accurately. Remember: beyond the A/D your signal is only bits and bytes, and everything now is accurate calculation.

How much more accurate? When I want to end up with a truly polished image I work with 16 bit arithmetic when I do the RAW conversions.

So that is what ISO does. It fills more levels in the A/D so the rest of the system, in camera and out of camera, has the data needed to do its calculations. No less. And no more.

Wednesday, November 18, 2009

I often do a Google search when I'm writing blog posts. Usually it's to check a fact, a formula or a site's HTML. But sometimes I hit on something new that causes me to revise what I planned to post.

Since decision time is galloping closer--the Young Shakespeare Players' dress rehearsals start on Friday--I must decide how best to photograph them. It's the last good time to show up with a camera. Then it's time off for the Thanksgiving weekend, followed by two weekends of performances before the Julius Caesar cast disbands.

But instead of posting test shots of real people shot at a show opening, as I promised, I'll be hitting you with more posts on theory. This time I discovered a new site--http://theory.uchicago.edu/~ejm/pix/20d/tests/noise/index.html

If the sight of a mathematical formula immediately sends you off to find a celebrity website you may want to skip this one. But if you are mildly mathematically inclined like me the site has the best explanation of the intricacies of camera noise I've found so far. It confirmed some of my suspicions, explained some of the mysteries I've worked on and set me straight on some matters I've gotten flat out wrong.

Like the number of photoelectrons a sensor can hold. My rule of thumb of 1200 photoelectrons per square micron of pixel is too small. That number still fits the small sensors I've tested before. But with larger and better-made sensors, such as the one in my D60, there is room for far more photoelectrons and far more S/N.

Not that I won't be blogging about the show. It was put on by a group of collectors of found photographs-- antique or just plain old photos you find in flea markets or garage sales.

At the opening the speaker was a well known collector of folk art from St Louis. His talk was on the cream of his photo collection--the part that has been on display in a number of art museums. Afterwards he asked me to send him some of the photos I took during his talk. Another reason to work out how to best clean up low light images.

So far I've been concentrating on how good a S/N I can obtain from the D60. I've been ignoring the other half of that question: how much S/N do I need?


The image of the Declaration of Independence provides some insight. (Click on it for a larger image.)

It was manufactured by taking a well-exposed image and superimposing a gradient of Gaussian noise on top. The S/N varies from less than one on the left to eight on the right. From it you can see you don't need as much S/N to bring out fine detail as you might have thought.
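You can manufacture a test image like that yourself. Here's a rough numpy sketch of the recipe--I'm guessing at the details and standing in a plain mid-gray target for the Declaration, but the idea is the same: a left-to-right ramp of Gaussian noise that takes the S/N from about 1 to 8:

```python
import numpy as np

h, w = 400, 600
signal = np.full((h, w), 128.0)        # a uniform, well-exposed gray target

snr = np.linspace(1, 8, w)             # desired S/N, column by column
sigma = signal / snr                   # noise std dev that produces that S/N
rng = np.random.default_rng(0)
noisy = signal + rng.normal(size=(h, w)) * sigma

noisy = np.clip(noisy, 0, 255).astype(np.uint8)
# Save it with Pillow if you have it:
# from PIL import Image; Image.fromarray(noisy).save("snr_ramp.png")
```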

Friday, November 13, 2009

Six years ago I discovered both the challenges of reverse engineering a digital camera to discover how it was made and the Internet photography forums where you could enlighten the world about what you discovered. Or thought you discovered. The Internet was just taking off. The few photo forums that were around then were full of discussions, spirited discussions and outright flame wars. A wild and sometimes informative time.

I fell into a polite disagreement with someone about dynamic range or noise or Ansel Adams' zone system or all three--I don't remember the details. To prove my point I decided I needed to experiment. With a series of photographs of an accurately printed zone system chart and some Photoshop magic, I would win the next round of discussions and establish myself as a photography guru to reckon with. (Naivete, thy name is Internet Newbie.)

To accomplish this impossible dream I called around to the local camera stores. Only the Camera Company had anything close to what I wanted. For a mere $160-plus I could buy a calibrated 21 zone Kodak photographic step tablet no. 2.

My reply was, "You gotta be kidding. There must be something cheaper. I need this to settle an argument in an Internet forum."

Turns out they had the step tablet in stock because a grad student had special ordered it and then never came back to buy it. Since some money was better than no money for something that had been sitting around for years, the owner decided that if I came up with $25 the tablet was mine.

$25 was more than I wanted to spend, but...hey, who else but a true Internet guru would own a calibrated Kodak 21 zone step tablet no. 2? If I could slip that fact into my postings it would add a touch of cachet. Didn't work out that way, but over the years I've wasted many hours playing with the step tablet, so I must have gotten my money's worth.

This is my latest setup:


The step tablet consists of 21 neutral density filters printed on a transparent strip. Their optical density ranges from 0.05, almost transparent, to 3.0, which transmits 1/1000 of the light. To use it, I tape it to the black cardboard holder. That slips into the box in the lower picture. For a light source, the white foamboard is lit from outside to make a diffuse and evenly illuminated background.
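Optical density is a log scale--the fraction of light transmitted is 10 to the minus OD--which is how 21 steps can span a 1000:1 range. Two lines of Python if you want to check:

```python
# Optical density to transmission: T = 10 ** (-OD)
for od in (0.05, 1.0, 2.0, 3.0):
    print(f"OD {od}: transmits {10**-od:.4f} of the light")
```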


With the camera on the tripod, I drape the black T-shirt over it as a drop cloth. Any stray light overwhelms the transmitted light of the more optically dense strips. This shows up as an offset in the ImageJ graph, where the low-transmitting strips aren't close to zero.

Then I set the camera to manual mode and adjust the exposure so the first few zones are overexposed. Then it's a simple matter to increase both the ISO and the shutter speed to take a series of noise profiles at a constant exposure.

For the record, you don't need this or any other tablet or chart to do the experiment. You can take photos of a white card or wall at various exposures to make them as dark or light as you want. The tablet is convenient. And along with ImageJ, it makes neat charts for the blog.

If you want to do the experiment you will need one more free program, ufraw. It's the RAW converter that comes with GIMP, the free version of Photoshop from the Linux people. Or you can download a standalone version from here: http://ufraw.sourceforge.net/Install.html

It supports far more versions of RAW than the commercial RAW converters do, including the CHDK hacked versions. With its latest reincarnation, its graphic interface is easier to use than it used to be. Still doesn't do batch conversions yet, but I'm not complaining. It's free and also the only RAW converter I've found that does linear RAW conversions.

What's so important about that? In the last post I mentioned that once a sensor's data is turned into bits and bytes, there are many software tricks the camera folks can do to hide and mask the true noise. The most common is gamma conversion. It's important and usually necessary, but it completely changes how the image and its noise look.

At a glance, you can see the difference between the two noise profiles. The image in the center is lighter, with a greater dynamic range--a clear advantage over the darker image on the far right.

The advantage shifts when you compare the two graphs. The noise is lower in the top graph, the noise profile of the darker image. The noise also decreases as the steps become darker. With the lower graph, from the middle image, the noise becomes greater as the steps darken.

So which is better? Less noise with less dynamic range? Or the other way around?

Neither. Both graphs are from the same RAW file, one taken at ISO 800 with my friend's Canon 5D--one of the lowest-noise cameras around. The only difference is how they were processed by the ufraw converter. The darker image is a linear image with no gamma correction. The lighter one has a gamma correction of 2.2.

The linear noise profile is how the sensor sees the world. Close down the lens a stop and you have half the light and half the number of photoelectrons. This creates half the voltage for the A/D. (Analog-to-Digital converter, the hunk of electronics in the camera that turns the sensor signal into bits and bytes.) That's the definition of linear: double or halve what you put in; double or halve what you get out.

Gamma correction is non-linear. Why is that important? Your eye-brain system is non-linear too. Your night vision and response to low light are much better than your daylight vision. Microsoft thinks a gamma of 2.2 is the correct correction. Apple says 1.8. Your real gamma as you read this depends on your eyesight, lighting conditions and what you had for breakfast this morning.

Since photon shot noise is in the light, the less the light the less the noise. That's what you see in the linear graph. With a gamma correction you are brightening the darker steps. Another way to look at it is you are amplifying your sensor signal with software just as you do with hardware when you set the camera to a higher ISO setting.

This amplifies the noise. It also amplifies the signal an equal amount. So the S/N ratio is the same.
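To put numbers on that software amplification, here's a quick sketch of how much a 2.2 gamma encode turns up each step of a linear ramp (values normalized so 1.0 is full scale):

```python
# How much a gamma of 2.2 boosts each step of a linear ramp.
GAMMA = 2.2
for linear in (0.001, 0.01, 0.1, 0.5, 1.0):
    encoded = linear ** (1 / GAMMA)         # the gamma "correction"
    print(f"linear {linear:5}: encoded {encoded:.3f} (a x{encoded/linear:.1f} boost)")
```

The deepest steps get pulled up by a factor of forty while the highlights barely move, and the noise riding on those steps gets stretched up with them--which is exactly why the gamma-corrected noise profile grows as the steps darken.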

It's the S/N ratio that has meaning in an image. Not the noise alone. The distinction is important. While this may sound like a quibble, if you don't distinguish between the two, the noise alone can lead you astray.

How far astray? As an example, this is what happened when we compared the 7D, 5D, and my D60 on Friday.

With photon shot noise, the measurement followed theory closely.

At ISO 800 the full frame 5D had a S/N of 100 when its sensor was just about to saturate. It had collected 10,000 photoelectrons in its 72-square-micron pixels. My D60 had a S/N of 66 with its smaller 1.5-crop sensor. And the 7D, with its 18,000,000 pixels jammed into a slightly smaller 1.6-crop sensor, had a S/N of 57.
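Backing those photoelectron counts out of the measured S/N is one line of arithmetic: with pure shot noise, S/N is the square root of N, so N is the S/N squared.

```python
# Shot noise only: S/N = sqrt(N), so N = (S/N) ** 2 photoelectrons.
for camera, snr in (("5D", 100), ("D60", 66), ("7D", 57)):
    print(f"{camera:>3}: S/N {snr} -> about {snr**2:,} photoelectrons collected")
```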

No surprises here. With photon shot noise the cameras behaved just as theory predicted.

When it came to true camera noise--the noise at the bottom of the graph where there is almost no light--the results were different. My D60's noise was identical to his 5D's noise, which delighted and surprised me. My friend's brand new 7D looked to be twice as noisy as the other two cameras, something that didn't make him grin wildly.

After a closer look at the data on Saturday morning, I called my friend with better news. For reasons I haven't worked out yet, the data from the two Canon cameras wasn't completely linear. This amplified their noise enough to skew their numbers.

With the corrections, the 5D is the quietest of the three cameras, the 7D is a close second and my D60 is about twice as noisy as the other two.

A mild disappointment, but not a surprising one. The Canon CMOS sensors have electronics built into each pixel to control and reduce the noise. That explains their factor-of-two noise advantage.

And that doesn't mean my D60 is a bad camera. According to the astrophotography web sites, where they really worry and know about noise, the 5D's real camera noise is equivalent to 3-5 photoelectrons. So with the high estimate of 10 photoelectrons for my D60, I need to collect only 100 photoelectrons in an exposure for the photon shot noise to equal the camera noise.

Be nice to own a full frame camera, but then we are talking big bucks for both the camera body and the lenses big enough to cover a full frame sensor. I can live with what I have.

So my next post will feature real pictures where I push my camera, lenses and noise reduction programs as far as they can conveniently go. They're the questions that prompted these posts on the theory and practice of camera noise.

Tuesday, November 10, 2009

And More Noise

Where were we? Ah, yes. I was making a big deal about the noise being in the light not the camera. Like why should anyone care?

Because .... it's fun to know things like...

Since noise is caused by the random arrival of photons that slam into a pixel and knock out photoelectrons, it's statistical. Just like polling voters to see if Mike or Marsha is going to be our next dog-catcher. If Ms. Politico Pollster says that Marsha is up by 2%, but her poll has a margin of error of 3.3%, Mike is still in the running. The pollster called only 1000 voters. To get to a margin of error of 1%, she would need to call 10,000 voters. A bit many for a dog-catcher election.

Your accuracy (or signal to noise) equals N, the number of samples (voters called), divided by the square root of N (the noise)--in other words, the square root of N. Elementary statistics. If statistics is ever elementary.

So if you collect a signal of 1000 photoelectrons, your noise is about 33 photoelectrons (31.6 if you want to be exact). That gives you a signal to noise (S/N) of about 30--a margin of error of 3.3%, just like the poll.
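You don't have to take the formula on faith. A few lines of numpy--a sketch, with a made-up trial count--will simulate the photon counting and show the noise coming out as the square root of the signal:

```python
# Simulate photon counting: the spread over repeated "exposures" of one
# pixel should come out as sqrt(signal), which makes S/N = sqrt(signal).
import numpy as np

rng = np.random.default_rng(1)
for signal in (100, 1_000, 10_000):
    counts = rng.poisson(signal, size=100_000)   # 100,000 trial exposures
    noise = counts.std()
    print(f"signal {signal:>6}: noise {noise:6.1f} "
          f"(sqrt = {signal**0.5:6.1f}), S/N = {signal/noise:5.1f}")
```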

With ImageJ you can measure S/N accurately. Which brings up the too-good-to-be-true problem.

Since photon shot noise--the biggest source of noise in most images--is from the light, not the camera, there is nothing a camera manufacturer can do to reduce it in the sensor. But once the signal has been digitized and turned into bits and bytes, there are a multitude of software tricks they can use to hide the noise. Some tricks are useful and make for better pictures. As for others--let's say some tricks can be overdone.

Big pixels have less noise because they can hold more photoelectrons. They also cost more to make--one of the reasons a new big-sensor DSLR body costs from $500 up while a decent small-sensor P&S complete with lens starts around $200. So why don't the camera manufacturers make better small sensors to get around the noise problem?

Like everything else, sensors have limitations. The photoelectrons are nothing more than a pile of static electricity. Same as the static electricity you collect in your finger if you shuffle your feet on the carpet and get zapped when you touch a doorknob.

To keep the camera's static electricity inside a pixel there are walls of negative electricity created by the circuitry that defines the pixel. If sensor designers tried to use more voltage to hold in more photoelectrons they would create holes. I won't go into the solid state physics of holes except to say they are atom-sized PacMen that wander around a pixel and gobble up photoelectrons as soon as they are created. Not exactly what anyone would want in their camera.

By my calculations--I've yet to find the value on the Internet--a well designed pixel can hold up to 1200 photoelectrons for every square micron of silicon real estate. If you overexpose and create more photoelectrons than the camera can handle, it blooms. Blooming, if you don't know the term, is the cause of the big blob of white covering a street light in a night shot. Instead of creating an image with any detail, the photoelectrons have overflowed into nearby pixels.

My D60's pixels are 38 square microns. Knock off 10% for the circuitry that forms the pixel wall and they have an active area of 34 square microns. So they can hold about 40,000 photoelectrons. This gives a maximum S/N of 200.
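The same arithmetic, unrounded, in Python--remember the 1200 figure is my own rule of thumb and the 10% overhead is a guess:

```python
# Full well and best-case S/N from pixel area, using my rule of thumb.
PE_PER_SQ_MICRON = 1200        # my estimate; I've yet to find a published value
pixel_area = 38                # D60 pixel, square microns
active_area = pixel_area * 0.9           # knock off ~10% for the pixel wall
full_well = active_area * PE_PER_SQ_MICRON
print(f"full well ~{full_well:,.0f} e-, max S/N ~{full_well**0.5:.0f}")
```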

When I did the measurements on my D60 yesterday, I'd hoped to confirm that number. A S/N of 150 wouldn't have surprised me. If it had been lower than 100, right now I would be emailing about warranty repair.

Instead I measured a S/N of 500. That requires 250,000 photoelectrons and over 6 times more silicon than there is in the camera.

A no questions asked much too good to be true moment.

Next blog post: how I did the measurements. And why I suspect this excessive S/N is caused by a bug in Nikon's RAW compression routine.

Finally if you are a glutton for statistics, I recommend my favorite textbook.


to be continued:

Sunday, November 8, 2009

ImageJ--the step by step

1-Download ImageJ from http://rsbweb.nih.gov/ij/. There are Mac, Linux, and Windows 32 and 64 bit versions.

2-Install it. With Vista, install it in a directory like your documents or downloads. Then it will have full write permissions. It's a Java program and works a little differently than most Windows programs.

3-Launch it. You will see the small ImageJ box. (For don't-want-to-rewrite-the-software reasons you can't resize it. On an old VGA screen it looked big.)


4-Click on Edit > Options > Profile Plot Options. Out of the box, ImageJ autoscales. Since there are 256 gray levels (0 black to 255 white), set the minimum Y to 0 and the maximum Y to at least 255. I use a fixed scale if I'm making comparisons since it makes the differences more obvious.

If you are doing something else, set it any way you want. This box only affects how the plot will look. Click OK to set the scaling.

5-Click on file and open an image.


6-Right-click on the line icon box (the fifth over). With version 1.42 you can now draw straight, freehand and segmented lines. Pick a type and drag a line in the image you opened.

7-Hit Ctrl+K or the Mac equivalent. Or go to Analyze > Plot Profile in the menu bar. ImageJ will calculate and plot the gray scale values under the line. If you drew it in an area where there is little detail except for noise, you will have a noise profile.

8-Enjoy
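If you'd rather script the measurement than drive the GUI, here's a minimal Python stand-in for steps 5 through 7--numpy and Pillow instead of ImageJ, a grayscale image, a horizontal line, and a made-up filename you'd replace with your own:

```python
# A scripted stand-in for ImageJ's Plot Profile: sample a line of pixels
# and report the statistics under it.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("test_shot.png").convert("L"), dtype=float)

row, x0, x1 = 200, 50, 650     # a line through a featureless (noise-only) area
profile = img[row, x0:x1]

print(f"mean {profile.mean():.1f}, std dev (the noise) {profile.std():.1f}")
# ImageJ's plot is just these values graphed; matplotlib can do the same:
# import matplotlib.pyplot as plt; plt.plot(profile); plt.ylim(0, 255); plt.show()
```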

Saturday, November 7, 2009

Fun with imageJ

This blog is a detour or more accurately a jump-ahead from what I'd planned. When I finished the last post I'd planned to talk more about noise theory, have my buddy bring over his cameras for the measurements, post what we discovered and finally talk about imageJ and lay out how anybody, including you, my dear readers, could do the same measurements with your camera(s).

But once my last post was out in the world and I couldn't find the Badger game on TV, I downloaded the latest and greatest version of ImageJ. Once I started to play around with it I discovered some neat things to share.

ImageJ is a scientific image analyzer. Scientists around the world use it to pull out the data buried in their images. Then they write their research papers, give them at conferences and publish them in journals that only scientists can understand.

Buried in my images is data on how good the Lightroom noise reduction routine is. From it I can settle which lenses to use for the YSP dress rehearsal photos. Then I'll pass the pictures around in person or by email before I publish a few in this blog or on flickr. Same workflow as a research project--just more informal.

Everybody has a research project. ImageJ can be your friend. It comes free from the NIH, the National Institutes of Health. So Google it and download it. It's fun to play with.

In the 'What is Noise' post I talked about pixels being like penny jars. And how you filled them with photoelectrons (the pennies). And how you could measure your camera noise once you took an image of a uniformly illuminated white card.

All true. But I didn't have to do all that.

Using ImageJ, I loaded the blowup of Laura's head with no noise reduction. Then I dragged the yellow line down a black area. (Click the image to see it large.) With a Ctrl+K, ImageJ created a noise profile from the 600-plus pixels under the line and plotted the gray scale values for me.

Gray scale images have 256 tones--255 is pure white and 0 is jet black. From these values you can calculate back to learn the number of photoelectrons in a pixel. But for this experiment I didn't need to do the calculations.

Instead I did the identical thing with the blowup of Laura after noise reduction. One glance and you can see that the noise reduction routine works. Fairly well too.

The next step is to do more comparisons to see which noise reduction programs--I have several--work the best.

Once again ImageJ has added new features since the last time I downloaded it. I find the freehand line profile neat and useful. In the third image, I've measured the dark background, Laura's hair and her cheek. In the hair, the noise and the hair texture are mixed together. Her cheek, however, is smooth but not evenly illuminated.

The noise from the cheek is riding on the downward slope of the graph and is circled in blue. In this jpeg image the cheek noise is a good deal less than the black background noise. Why is a subject for another post. It's more complicated than you might think.

What is noise?

What is noise? Thought you would never ask.

Here are a few facts about noise you won't find in a camera's hype sheet. Or on the review sites either. Things have gotten better--well-regarded review sites like dpreview aren't pontificating absurdities about camera noise like they were a few years ago--but there is still much confusion.

All the facts I'm listing are out on the web somewhere--either in plain language or, more often, hiding in mathematical formulas. But I think it would be interesting to pull them together in one place. In more or less plain language.

Fact one.

Unlike film, which works in a totally different way, digital cameras count photons. Photons are little hunks of light that work like bullets and knock photoelectrons out of the silicon that camera sensors are made of and into the pixels that sit on top of the silicon.

(Einstein won his Nobel Prize for working out how this works. He received the honor. His first wife collected the money. It was written in their divorce settlement.)

Fact two

Pixels work like penny banks. If you take a picture of a uniformly illuminated source--a white box by the window lit by a clear sky, for instance--all the pixels will collect their photoelectrons, the pennies, during the exposure. Then if you add up all the photoelectrons and divide by the number of pixels in the camera, you have your signal, which tells you how bright it is outside. To make the math easy, today the signal is 1000 photoelectrons. ("pe-" in engineering talk.)

If you empty a penny bank and count the number of pennies it is short or over 1000, that is the noise. For this example let's say you have 33 extra pe- in a pixel--a magic number I will explain in the next post.

Of all the ways to explain and quantify noise, the number of noise photoelectrons is the easiest to work with. So we won't get into decibels, the noise numbers loved by electrical engineers. Today your noise is 33 pe- and your signal to noise (S/N) is about 30. That's nothing to brag about, but it's still a usable S/N.
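(For the record, the decibel conversion we're skipping is only one line. In engineering talk today's S/N would be about 30 dB:)

```python
# The decibel conversion we're skipping: S/N in dB = 20 * log10(S/N)
from math import log10
print(f"{20 * log10(30):.0f} dB")   # today's S/N of about 30
```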

Fact three

The vast majority of the noise you counted is not from your camera. It's from the light. I repeat: THE NOISE IS FROM THE LIGHT!

The noise is caused by the random emission of photons from anything that is hot enough to give off light--and that means everything in our universe. Sun, flash lamp, candle, your big toe, a puddle of liquid air, everything. The amount of light and the spectrum of the light will vary, of course. With a medical tomographic camera your big toe becomes a bright source, but taking off your shoes won't help any if you are shooting a wedding. Still, regardless of where the light comes from, it carries its noise along with it.

What does this mean? Nothing a camera maker has done or ever will do can get rid of this noise--photon shot noise in engineering talk. Short of a trip to a sf alternate universe, the noise is not going to go away.

And if you noticed that I said "vast majority", what are the real numbers? By my calculations, if you own a Canon 5D, up to 98% of the noise comes from the light at a low ISO and a bright exposure. And if you don't, with my carry-it-everywhere Oly SP350 up to 95% of the noise comes from the light. Something I measured.

If you now think Old Scrib is spouting total nonsense--his cheapo old tiny-sensor SP350 doesn't take clean pictures like a Canon 5D--we'll explore the differences between noise and signal to noise in more detail in the next blog. If you want to understand what's going on inside your camera, confusing the two terms leads to much confusion.

We will also get into how you can, with free software and not that much effort, measure how noisy or clean your camera is. A buddy of mine just bought a Canon 7D--their latest, on the market for only a few weeks. Next week he's bringing over his older 5D and the 7D and we will measure and compare them with my D60.

Be new info. Before the review sites post their noise figures. So keep watching the blog.

*edit* Not new info now--but first-time bloggers have high hopes of scooping the big sites.

If you see this edit you are the fifth visitor ever, all today, to read this post. Put a comment in the comment box--so I know there is one-- and I'll think up a suitable prize.


To be continued:

Thursday, November 5, 2009

Hype and Noise

First the hype. Adobe has a new Lightroom 3 beta up on its web site. In the hype pdf they talk about their great new noise reduction algorithm. So I downloaded the beta to see if the new program would be useful during the YSP project--with over 800 images on the disk it's moved from a shoot into a project.

The first thing I noticed when I tried to use noise reduction was that half of it wasn't there. The slider for color noise reduction--the ugly blotches of color caused by demosaicing--worked. But the slider for luminance noise--removing the actual noise--was grayed out. A closer reading of the hype on noise reduction--a marketing song and dance to hide the obvious--made it clear: Adobe might have a gold-medal algorithm somewhere, but it ain't implemented yet.

So beta 3 isn't going to help the YSP project. Except that it has me looking into how hard I can push the camera using the Lightroom 2 noise reduction. Which may be a bit farther than I first thought.

During the first shoot, I took the first image at ISO 400, 1/125 sec and f5.6. It is, as you can see, a bit on the dark side--3 to 4 stops underexposed. You can just barely see Sasha, the girl on the left, since she's in the spotlight. Whether you see Laura, the girl on the right, depends on how your monitor is adjusted.

In Lightroom I upped the 'exposure' 3 stops for the second image. (What's actually going on when you do this is worth a post sometime.)
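My guess at the gist of it, as a rough sketch and definitely not Adobe's actual code: in linear space an n-stop push is just a multiply by 2 to the n--noise and all--before the gamma curve goes on for display.

```python
# A guess at what a +3 stop "exposure" push does: scale the linear data
# by 2**stops, clip, then gamma-encode for the screen. Not Adobe's code.
import numpy as np

def push_exposure(linear, stops, gamma=2.2):
    pushed = np.clip(linear * 2**stops, 0.0, 1.0)   # boosts signal AND noise
    return pushed ** (1 / gamma)                    # encode for display

deep_shadows = np.array([0.002, 0.01, 0.05])        # made-up linear values
print(push_exposure(deep_shadows, 3))
```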

As long as you don't look close, image two isn't a complete disaster. But if you do look close--image three, a closeup of Laura's head--you see the noise in all its glory.

But noise can be reduced. In image four I applied Lightroom 2's noise reduction routine.


It has softened and lost detail, an inevitable consequence of noise reduction. Click on the images for a larger view. Compare the hair for the softening, and the sleeve and background for the noise reduction. It's not perfect but I've done worse.

So where are we now? If I use my 105mm manual lens wide open at f2.5, I've picked up 2.3 of the 3 stops of underexposure. Since ISO 800 won't add that much more noise, something I decided in the last post, this lens will give a decent exposure.

But now I have another lens I could use. My 55-200mm kit lens is an f4 at 55mm, f4.2 at 70mm and f4.5 at 85mm. So I would lose a stop or more by switching over to that lens.

But it is image stabilized. I could drop the shutter speed to 1/60 or 1/40 sec since I don't have to worry about camera shake. That would pick up the lost stop of exposure.
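The stop arithmetic behind all this horse trading, if you want to check my numbers--stops between two f-numbers go as twice the log2 of their ratio, and between two shutter speeds as the log2 of theirs:

```python
# Stop arithmetic for the lens and shutter trade-offs above.
from math import log2

def aperture_stops(f_from, f_to):
    return 2 * log2(f_from / f_to)    # positive = more light

def shutter_stops(t_from, t_to):
    return log2(t_to / t_from)        # slower shutter = more light

print(f"f/5.6 -> f/2.5: {aperture_stops(5.6, 2.5):+.1f} stops")
print(f"f/5.6 -> f/4.0: {aperture_stops(5.6, 4.0):+.1f} stops")
print(f"1/125 -> 1/60s: {shutter_stops(1/125, 1/60):+.1f} stops")
print(f"1/125 -> 1/40s: {shutter_stops(1/125, 1/40):+.1f} stops")
```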

Time for more experiments. I will duplicate the theater's lighting at home with my variable power flash and start shooting away. With luck, ISO 1600, and noise reduction I might even be able to take onstage headshots at 200mm and f5.6.