
Too Bright, Too Dark

The visible world around us is constantly changing. One moment the sun is in our eyes; the next, we're in a dark closet groping for the light switch. Such swings in brightness would be a serious problem for our vision if it weren't for a handy built-in feature: our eyes automatically adapt to the lighting conditions of the surrounding environment.

Most people are well aware that their pupils change size in response to lighting conditions. A larger pupil admits more light and makes the scene appear brighter; a smaller pupil admits less light and makes the scene appear dimmer. This mechanism is fast and effective, and it's controlled by the brainstem, which acts autonomously - you don't even have to think about it.

As useful as this type of adaptation may be, it has limits. Typical indoor lighting is hundreds of times dimmer than direct sunlight, while a fully dilated pupil admits only about 16 times as much light as a fully constricted one. And then there are all those backlit situations - like when an unknown person is standing in front of a bright window, and you need to see whether it's a home intruder or a visiting friend. In short, pupil adjustment alone is insufficient. So what else do we have?
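To make the mismatch concrete, here's a quick back-of-the-envelope check in Python. The pupil diameters (roughly 2 mm constricted, 8 mm dilated) and the illuminance figures are rough textbook values, not measurements:

```python
import math

# Light admitted scales with pupil area.
# Assumed diameter range: ~2 mm (constricted) to ~8 mm (dilated).
area = lambda d: math.pi * (d / 2) ** 2

gain = area(8.0) / area(2.0)
print(f"Dilated vs. constricted pupil: ~{gain:.0f}x more light")  # ~16x

# Assumed illuminance: ~500 lux for bright indoor lighting,
# ~100,000 lux for direct sunlight.
print(f"Sunlight vs. indoor lighting: ~{100_000 / 500:.0f}x brighter")  # ~200x
```

A 16x gain against a ~200x brightness gap: the pupil can't close the difference on its own.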

The answer: sensory adaptation. The term refers to the gradual loss of sensitivity that follows prolonged exposure to a stimulus, an ability shared by all of our senses. If you've ever looked for your glasses only to find them on your face, you've experienced sensory adaptation. It's useful because it discards old, unchanging data so it doesn't obscure the new (kind of like how Facebook buries old posts beneath new ones).

Our vision is subject to sensory adaptation too: stare at exactly the same point for a couple of minutes, and the scene will begin to fade toward a uniform grey. This happens because the retina becomes desensitized to an unchanging image. Bright light makes the retina less sensitive, so the light appears dimmer; dim light lets the retina regain its lost sensitivity, so the light appears brighter. Crucially, different parts of the retina adapt independently: while one region is being darkened, a region right next to it can be brightened. The bright and dark areas of a scene are thus both made less extreme, and the scene as a whole becomes easier to see.
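This isn't how the retina literally computes, but a toy model captures the idea: scale each pixel of an image by the inverse of its neighborhood's average brightness. A minimal numpy/scipy sketch (the function name and the Gaussian-blurred neighborhood are my own choices, not a model from the vision literature):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def locally_adapted(image, sigma=25):
    """Toy model of region-by-region adaptation: judge each pixel
    against the average brightness of its own neighborhood, so
    bright regions are dimmed and dark regions are boosted.
    `image` is a 2-D grayscale float array; the output is relative
    brightness, hovering around 1.0."""
    image = image.astype(np.float64)
    neighborhood = gaussian_filter(image, sigma)  # local average brightness
    return image / (neighborhood + 1e-6)          # sensitivity ~ 1 / local average
```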

In vision, then, there are two types of adaptation to light. The first type affects the whole image at once, and is accomplished by pupil dilation and constriction. The second type can adjust different parts of the image independently, and is accomplished by sensory adaptation.

***

What remains is to find a practical application, and photography is the obvious place to look. A digital camera is very similar to an eye: both detect light, and both relay information to a central processor (whether a computer or a brain). One of the biggest differences between the two is the way they adapt to light.

Most cameras have an adjustable aperture - the hole through which light enters. It usually contains a ring of tiny blades that move to make the opening larger or smaller, and they are adjusted (often automatically) based on the brightness of the scene. This corresponds directly to the pupil in the human eye.
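Aperture sizes are usually quoted as f-numbers, where a larger number means a smaller opening; since the light gathered scales with the opening's area, it falls off as one over the f-number squared. A small illustrative loop:

```python
# Illustrative only: light gathered is proportional to aperture area,
# which for an f-number N goes as 1/N^2.
for n in (1.4, 2.0, 2.8, 4.0, 5.6, 8.0):
    print(f"f/{n}: relative light = {1 / n**2:.3f}")
# Each full stop (multiplying N by sqrt(2)) halves the light admitted.
```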

A camera can also adjust the exposure time - the length of time the shutter stays open and light is allowed to reach the sensor. A digital camera can additionally adjust the sensitivity of its sensor (the ISO setting), which amounts to multiplying the sensor's data by some numerical value. These mechanisms correspond only partially to the second type of eye adaptation: they control the brightness (or, more properly, the "exposure") of an image, but they cannot control different parts separately. In other words, unlike the eye, a camera always adjusts the brightness of the whole photo equally.
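To see why that's limiting, here's a minimal sketch of the digital-gain step, assuming 8-bit pixel data (the function name is hypothetical; real cameras also apply analog gain before digitizing, but either way it's one uniform multiply):

```python
import numpy as np

def apply_gain(raw, gain):
    """Global exposure adjustment: every pixel is multiplied by the
    same factor, so brightening a dark subject also brightens (and
    eventually clips to white) an already-bright window behind it."""
    return np.clip(raw.astype(np.float64) * gain, 0.0, 255.0).astype(np.uint8)
```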

This is actually a pretty big problem, because many photos end up looking completely different from reality. Next time you get the chance, try taking a photo of a person standing in front of a window. Your eyes will see both the person and the background, but the camera will either reduce the person to a silhouette or wash out the background.

The solution is a technique called high-dynamic-range (HDR) imaging. In HDR imaging, multiple photos of the same scene are taken at different exposures, and the best-exposed parts of each are combined into a single, more evenly exposed image. Some people criticize HDR images for looking artificial, but when done well, they mimic human vision far better than ordinary photographs do.
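The combining step can be done several ways; one popular family of methods is exposure fusion, which simply blends the well-exposed pixels from each shot. Below is a bare-bones, single-scale sketch (real implementations, like the Mertens method, blend across image pyramids to avoid visible seams; the names and the mid-grey weighting here are illustrative):

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Naive single-scale exposure fusion: weight each pixel by how
    well-exposed it is (close to mid-grey), normalize the weights
    across the exposure stack, and blend. `images` is a list of
    grayscale float arrays scaled to [0, 1]."""
    stack = np.stack([img.astype(np.float64) for img in images])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))  # well-exposedness
    weights /= weights.sum(axis=0) + 1e-12                      # normalize per pixel
    return (weights * stack).sum(axis=0)
```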

As an illustration of this technique, I'll describe my process for making an HDR image of a chapel at night. I started by taking these four images:

[Image: four photos of the chapel, taken at different exposures]
As you can see, the chapel is mostly washed out in the first image, while in the later images the sky and the surrounding buildings look too dark. Using GIMP, I stacked the images as layers and used layer masks to make each layer transparent based on its brightness. The result was an image that showed detail on the chapel without losing the surrounding buildings or the light on the clouds.
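For the curious, each of those layer masks is doing roughly the following, expressed here in numpy for a pair of exposures (the function and variable names are mine, and GIMP's actual blending pipeline has more steps):

```python
import numpy as np

def luminosity_blend(brighter, darker):
    """Blend two exposures using the brighter shot's own luminosity
    as a layer mask: where `brighter` approaches white (washed out),
    the pixel is taken from `darker` instead. Both inputs are
    grayscale arrays in [0, 1]."""
    mask = np.clip(brighter, 0.0, 1.0)            # ~1.0 where washed out
    return mask * darker + (1.0 - mask) * brighter
```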
[Image: the final HDR composite of the chapel]
And that's it - a great image out of a number of okay photos. Kind of cool what you can do when you apply biology to technology, isn't it?

