
Engineers at Stanford University believe they have the blueprint for the camera of the future

A prototype single-lens, wide-field-of-view light field camera could help robots and virtual reality in the future.


Researchers from Stanford University have built a prototype camera they believe can improve robotic vision and virtual reality by offering high-detail 4D images.

According to engineers at the California university, the camera captures depth, object transparency and other details in a single image, along with a wide field of view of nearly 140 degrees.

The information it can gather could help improve vision for robots in the future.

Assistant Professor Gordon Wetzstein, left, and postdoctoral research fellow Donald Dansereau with a prototype of the monocentric camera
(L.A. Cicero/Stanford University)

Donald Dansereau, a postdoctoral fellow in electrical engineering and a key figure on the project, said: “We want to consider what would be the right camera for a robot that drives or delivers packages by air.

“We’re great at making cameras for humans but do robots need to see the way humans do? Probably not.”

The engineers said that currently, robots have to move around and analyse scenes from different perspectives in order to understand certain aspects of their environment. They claim, however, that their camera could gather most of this information in one image.

Donald Dansereau holds a spherical lens like the one at the heart of the panoramic light field camera
(L.A. Cicero/Stanford University)

“A 2D photo is like a peephole because you can’t move your head around to gain more information about depth, translucency or light scattering,” Dansereau said.

“Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess.”


“It could enable various types of artificially intelligent technology to understand how far away objects are, whether they’re moving and what they’re made of,” he said.

“This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it.”
