
This Robot 3D-Scans Any Object By Dunking It In Water

Scientists in Israel have turned normal, everyday water into a 3D-scanning “sensor.”

You know that feeling when your optical-tomography rig just can’t seem to handle 3D-scanning an object with occluded regions? So annoying! Ah, first-world problems. Well, now there’s a solution: a robot arm that can accurately capture the 3D structure of complex objects—even their interiors—by repeatedly lowering them into a vat of water.

This kind of 3D-scanning is often essential to industrial design. If you can “import” a high-fidelity virtual model of an object into a computer, you can reverse engineer it, inspect it, or non-destructively test it. The business-as-usual way of obtaining these scans—known as computed tomography—relies on expensive, bulky equipment that uses lasers or X-rays to get the job done. Researchers at Ben-Gurion University of the Negev in Israel wondered if there might be a simpler way.

Their proof-of-concept, called “dip transform,” relies on an ultra-low-tech insight first grasped by Archimedes more than two millennia ago. His principle of fluid displacement—better known as Archimedes’ principle—holds that the volume of water displaced by a submerged object equals the volume of the object itself. By dipping an object in water and measuring its fluid displacement, the researchers can mathematically obtain a series of “slices” of the object’s 3D form, then aggregate them into a virtual model.
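To make that concrete, here’s a rough Python sketch of how displacement readings become slices. The function names, the single dip axis, and the simulated water-level reading (a sphere, used purely for illustration) are our own assumptions; the actual dip transform dips the object along many different orientations and solves a tomography-style reconstruction problem to recover the full shape.

```python
# A toy sketch of the dip-transform idea, under the assumptions above: one dip
# axis, and a simulated sensor standing in for the robot's water-level reading.
import math

def measure_displacement(depth_cm: float, radius_cm: float = 3.0) -> float:
    """Simulated sensor reading: cumulative volume (cm^3) displaced by a sphere
    of radius `radius_cm` lowered `depth_cm` into the water (spherical-cap formula)."""
    h = min(max(depth_cm, 0.0), 2 * radius_cm)
    return math.pi * h * h * (3 * radius_cm - h) / 3.0

def slice_volumes(depths_cm: list[float]) -> list[float]:
    """Archimedes: displaced volume equals submerged volume, so the difference
    between two consecutive dips is the volume of the slab between those depths."""
    readings = [measure_displacement(d) for d in depths_cm]
    return [b - a for a, b in zip(readings, readings[1:])]

depths = [i * 0.5 for i in range(13)]   # dip in 0.5 cm steps, down to 6 cm
slabs = slice_volumes(depths)           # one volume "slice" per dip step
print(f"volume recovered from slices: {sum(slabs):.1f} cm^3")
print(f"exact volume of the sphere:   {4 / 3 * math.pi * 3.0 ** 3:.1f} cm^3")
```

A dip along one axis only constrains the volume of each slab, not its shape, which is why the real system needs many dips from many directions to pin down a full 3D model.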

[Photo: courtesy Ben-Gurion University]
This has practical advantages, according to Andrei Sharf (one of the authors of the paper, which was presented at SIGGRAPH last week): “Unlike optical sensors, the liquid has no line-of-sight requirements. It penetrates cavities and hidden parts of the object, thus bypassing all visibility and optical limitations of conventional scanning devices.”

Turning normal, everyday water into a 3D-scanning “sensor” is some feat. But as with any proof-of-concept, there are still some bugs to work out before the technology has any chance of competing with standard computed-tomography technology. For one, the robot has to dip an object 500 to 1,000 times in order to create a high-fidelity scan, “which introduces a temporal bottleneck,” says Sharf. In other words, it takes for-fricking-ever. But, like we said above: first-world problems.


About the author

John Pavlus is a writer and filmmaker focusing on science, tech, and design topics. His writing has appeared in Wired, New York, Scientific American, Technology Review, BBC Future, and other outlets.
