Lytro: The $50M Tech That May Change Photography Forever

That click you just heard? That was the sound of photography as we know it changing.

Lytro is a Silicon Valley startup built on research CEO Ren Ng carried out at Stanford, and its promise is simple: With its light field camera hardware and software, it could change photography in a remarkable number of ways–starting with the thing most news sites have picked up on this morning: there's no need to focus a photo.

Meanwhile, Lytro’s $50 million in startup capital has come from big names like Andreessen Horowitz and Greylock, and its technological team includes a cofounder of Silicon Graphics and the man who was the chief architect for Palm’s revolutionary webOS software. So what’s the fuss all about?

It’s called light field, or plenoptic, photography, and the core thinking behind Lytro is contained neatly in one paper from the original Stanford research–though the basic principle is simple. A normal camera works in roughly the same way your eye does: a lens at the front gathers rays of light from the view in front of it and focuses them through an aperture onto a sensor (the silicon in your DSLR or the retina in your eye). To focus your eye or a traditional camera, you adjust the lens to capture light rays from different parts of the scene and throw them onto the sensor. Easy. This does have a side effect, though: you have to choose one thing to focus on. That adds complexity–and, if used well, beauty–to a photo.

But Lytro’s technology puts a large array of microlenses in front of the camera sensor. Think of them as a synthetic equivalent of the thousands of tiny lenses in a fly’s eye. The physics and math get a bit tricky here, but the overall result is this: Instead of recording a single image shaped by the settings of your camera’s lens, aperture, and so on, the sensor records a complex pattern that represents light arriving from every part of the scene in front of it, not just the bits you would’ve focused on with a normal camera. The image is then passed to software that decodes it.


And this is where things get freaky. Because the system captures data about the direction of light rays from the scene, it can be programmed to “focus” on any depth in the photo–even years after you took the original image. Images can be refocused in an instant, cameras could do away with bulky, power-hungry, and expensive autofocus systems, photos can be snapped much more quickly, and the average Joe Public never has to worry about focusing a shot. The lens array also gathers enough light to shoot in conditions that would previously have needed a flash. That alone is a big impact, and it could have enormous repercussions for the whole camera industry.
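The after-the-fact refocusing described above can be sketched with the standard “shift-and-add” technique from the plenoptic-imaging literature–an illustrative approximation, not Lytro's actual software. The idea: treat the recorded light field as a grid of sub-aperture views, shift each view in proportion to its angular offset, and average; varying the shift slope moves the virtual focal plane. The `refocus` function and the toy light field below are hypothetical examples.

```python
import numpy as np

def refocus(light_field, slope):
    """Shift-and-add synthetic refocusing.

    light_field: 4-D array indexed [u, v, s, t] -- (u, v) are the
    angular samples recorded behind the microlenses, (s, t) are
    pixel coordinates within each sub-aperture view.
    slope: pixels of shift per angular step; it selects which depth
    ends up in focus (0 keeps the original focal plane).
    """
    U, V, S, T = light_field.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Shift each view according to its offset from the
            # aperture centre, then accumulate.
            du = int(round((u - U // 2) * slope))
            dv = int(round((v - V // 2) * slope))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Toy light field: a single point source whose position shifts by
# one pixel per angular step, i.e. it sits off the focal plane.
lf = np.zeros((3, 3, 9, 9))
for u in range(3):
    for v in range(3):
        lf[u, v, 4 + (u - 1), 4 + (v - 1)] = 1.0

sharp = refocus(lf, -1.0)   # all nine views align on the point
blurry = refocus(lf, 0.0)   # original focal plane: point is smeared
```

At slope -1 all nine views line up, so the point returns at full intensity; at slope 0 only one view lands on the centre pixel, so it appears at a ninth of the brightness–the same focus-later choice the article describes.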

But that’s not all. Because the image is finalized in a computer, you can process it in a number of ways–including stacking together images focused on different things into an animation that could reveal much more detail of the original scene. Imagine a war photo where you can focus all the way from the nearby friendly fighters, down the barrels of their guns, over the barricade and into the eyes of the enemies down the street. Imagine Internet adverts that dynamically move through the detail of a dress in a fashion shoot. Imagine Harry Potter-esque front page images on your tablet PC-edition newspaper.

And there are other implications: There’s no reason you couldn’t stack two plenoptic cameras side by side and generate some truly brain-tricking variations on 3-D imaging. Theoretically the lenses could capture video too (although the computing burden might be very significant), which could open the door to movie special effects that would make The Matrix look like a Victorian magic lantern show.

That’s why there’s all this excitement about Lytro. And the excitement persists even though the company is taking the unusual step of launching its own cameras later in the year, rather than licensing the technology to more established, billion-dollar photography names like Canon and Nikon.

But plenoptic imaging isn’t something Lytro can necessarily lock up, and it’s likely that–in the same way James Dyson’s revolutionary vacuum cleaners forced changes on long-established designs across that market–Lytro’s system will push other makers to develop similar tech of their own. Indeed it’s pretty likely, given that Ng’s 2005 paper describes “a simple optical modification to existing digital cameras that causes the photosensor to sample the in-camera light field. This modification is achieved without altering the external operation of the camera.”


And yet: Lytro may still have changed photos forever.

[Image: Research paper, Stanford 2005]

Chat about this news with Kit Eaton on Twitter and Fast Company too.
