For half a century, we’ve known that the human visual system has an exquisite ability to lock focus on an object instantaneously, a feat that no digital camera or camcorder has yet matched. Now, finally, scientists have developed a good guess about how we might do that, and have translated that method into software that could make its way into cameras that never lose focus.
Johannes Burge, a postdoctoral fellow at the University of Texas, and his advisor Wilson Geisler wondered how the eyes of humans and many other animals are able to focus so much more efficiently than most digital cameras. A traditional autofocus system uses only one piece of information about a scene to determine whether an object is in focus: its level of contrast. Contrast, says Burge, isn't always a perfect proxy for focus. But it's worse than that: to determine which direction to re-focus in, a camera must first change its point of focus and compare the new image it captures with the old one, to see whether the object in question now has higher or lower contrast. Often, the camera isn't even re-focusing in the correct direction when it captures this second image. This method of "guessing and checking" is "slow and not particularly accurate," says Burge.
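The guess-and-check loop described above can be sketched with a toy model (this is an illustration, not code from any real camera): the "scene" is a 1-D step edge, the lens blurs it in proportion to focus error, and the camera hill-climbs on contrast, reversing and shrinking its step whenever a move makes things blurrier. `TRUE_FOCUS`, `capture`, and `contrast` are all invented for this sketch.

```python
# Toy model: the "scene" is a sharp step edge; the simulated camera blurs it
# by an amount proportional to how far the lens is from the true focus.
TRUE_FOCUS = 7.0  # hypothetical lens position of perfect focus

def capture(focus_position, n=64):
    """Return a 1-D 'image' of a step edge, blurred by the focus error."""
    blur = abs(focus_position - TRUE_FOCUS) + 0.1  # width of the edge ramp
    edge = n // 2
    image = []
    for x in range(n):
        # Soft step: ramps from 0 to 1 over `blur` pixels around the edge.
        t = (x - edge) / blur
        image.append(max(0.0, min(1.0, 0.5 + 0.5 * t)))
    return image

def contrast(image):
    """Sharpness metric: sum of squared neighbour differences."""
    return sum((b - a) ** 2 for a, b in zip(image, image[1:]))

def contrast_autofocus(start=0.0, step=1.0, min_step=0.05):
    """Guess-and-check hill climb: move, re-measure, reverse if worse."""
    pos = start
    best = contrast(capture(pos))
    direction = 1.0
    while step > min_step:
        trial = pos + direction * step
        c = contrast(capture(trial))
        if c > best:              # contrast improved: keep going this way
            pos, best = trial, c
        else:                     # got blurrier: reverse and refine the step
            direction = -direction
            step /= 2.0
    return pos

print(contrast_autofocus())
```

Note how much work the loop does: every reversal is a wasted exposure in the wrong direction, which is exactly the slowness Burge is pointing at.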
Burge and Geisler's approach is different. As they outline in a recent paper in the Proceedings of the National Academy of Sciences, their algorithm can analyze a single still image of a scene and immediately determine how to re-focus the lens to bring the scene into focus. It requires no before-and-after comparison. It works by taking an inventory of the features in a scene.
Even though any two scenes a camera might capture look very different from one another, they tend to share a number of fundamental statistical similarities. It's this built-in knowledge of what to expect, in essence a statistical distillation of what the physical world looks like, that allows their algorithm to instantly determine the degree and direction in which a scene is out of focus, yielding information about how much to change the focal length of a lens (biological or mechanical) in order to bring it into focus.
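A crude way to see why a statistical prior makes single-shot focusing possible: if you know in advance how a typical scene's contrast falls off with defocus, then one contrast measurement can be inverted into a focus error directly, with no second exposure. The sketch below is emphatically not the authors' method (their paper uses defocus-sensitive filters trained on natural-image statistics); it stands in for the idea with a pre-computed lookup table over the same invented step-edge model, and it recovers only the magnitude of the error (in the real system, direction comes from additional optical cues such as chromatic aberration).

```python
# Toy sketch: invert a pre-computed contrast-vs-defocus curve (the "prior")
# to estimate focus error from a single capture. All names are invented.
TRUE_FOCUS = 7.0  # hypothetical lens position of perfect focus

def capture(focus_position, n=64):
    """1-D step-edge 'scene', blurred in proportion to the focus error."""
    blur = abs(focus_position - TRUE_FOCUS) + 0.1
    image = []
    for x in range(n):
        t = (x - n // 2) / blur
        image.append(max(0.0, min(1.0, 0.5 + 0.5 * t)))
    return image

def contrast(image):
    """Sharpness metric: sum of squared neighbour differences."""
    return sum((b - a) ** 2 for a, b in zip(image, image[1:]))

# "Prior", built offline: expected contrast at each defocus magnitude.
calibration = [(d / 10.0, contrast(capture(TRUE_FOCUS + d / 10.0)))
               for d in range(100)]

def estimate_defocus(image):
    """Single-shot estimate: look up the measured contrast in the table."""
    c = contrast(image)
    return min(calibration, key=lambda pair: abs(pair[1] - c))[0]

# One capture at an unknown lens position gives the focus error in one step.
lens = 3.0
print(estimate_defocus(capture(lens)))  # magnitude of the 4-unit error
```

The point of the toy is the shape of the computation, not its realism: all the comparing happens against stored knowledge of the world rather than against a second photograph.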
There are already cameras on the market that are capable of "instant autofocus," but even these are merely optimizations of the existing approach and aren't as fast or as accurate as the new algorithm. Canon's system, for example, relies on an external sensor to determine the distance to an object, which narrows the range over which the traditional contrast-based autofocus has to search for the true focus.
This is a less-than-optimal solution and precisely the inverse of how things are done in nature. "In some animals [an accurate estimate of blur in an image] is the primary way they sense distance," Geisler told ScienceNow. If Canon were to adopt the new algorithm, it would eliminate the need for an external sensor. It would also add an intriguing piece of information to every image: the precise distance to any object in the scene.
The team has already applied for a patent on the technology, which could apply to digital cameras, digital video cameras, visual robotics, digital microscopes, and image-driven microfabrication devices. If it were implemented in a point-and-shoot camera, the system could focus in as little as 10 milliseconds, "[which] should decrease the lag time between when the shutter is pressed and when the photo is snapped, and should improve focusing accuracy," says Burge.
It should be noted that this approach to autofocus differs from that of the recently announced Lytro camera, which does no focusing at all, except in software. But who knows: perhaps there is an animal out there that uses an algorithm like Lytro's to accomplish the same feat. If so, engineers who ignore the solutions inherent in nature will have re-invented the wheel yet again.
[Image: Flickr user 12 x 12]