The Automaton blog reports that AeroVironment has been developing a solar-powered swimming robot called Mola, named for the fish that inspired its design: the giant sunfish, Mola mola. Even though it’s a sub-surface swimmer, the solar panels on its flat top can still gather enough energy for it to swim at about 4 kilometers an hour, collecting sensor data on its surroundings as it goes. It’s another attempt at creating autonomous marine environmental monitoring systems. But beyond that, it’s another beautiful example of biomimicry.
Mars rover Curiosity is setting off on its first long-distance drive, having made successful test maneuvers. But this video is of a different Curiosity event, and you should watch it for its beauty. Created by Dominic Miller, the clip starts from NASA’s four-frames-per-second descent-camera footage. That jerky original is amazing enough for what it represents, but it isn’t easy on the eye. So Miller used image-processing techniques to interpolate the intervening frames and produced a movie-grade 25-frames-per-second clip. It’s like watching a science fiction film, but it’s actually a robot we humans made, really landing on Mars.
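To give a sense of what frame interpolation involves: the simplest approach is to cross-fade between each pair of captured frames, synthesizing the missing in-between images. (Miller’s actual technique isn’t specified; production tools typically use optical flow for smoother motion, so the linear blend below is only a toy sketch of the idea.)

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, num_between):
    """Synthesize intermediate frames by linearly blending two frames.

    This is a toy cross-fade, not the optical-flow-based interpolation
    a real tool would likely use; it only illustrates the concept.
    """
    frames = []
    for i in range(1, num_between + 1):
        t = i / (num_between + 1)  # blend weight from frame_a toward frame_b
        blended = (1.0 - t) * frame_a + t * frame_b
        frames.append(blended.astype(frame_a.dtype))
    return frames

# Going from 4 fps to 25 fps means synthesizing roughly five new frames
# between each captured pair (about six output frames per source frame).
frame_a = np.zeros((2, 2), dtype=np.float64)       # dark frame
frame_b = np.full((2, 2), 100.0, dtype=np.float64)  # bright frame
in_between = interpolate_frames(frame_a, frame_b, 5)
```

The middle synthesized frame ends up halfway between the two originals, which is exactly the smooth transition the eye expects.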
Curiosity Rolls. NASA’s one-ton rover is setting off on its first long drive away from its landing spot at Bradbury Landing toward a region scientists have dubbed Glenelg. The region is nearby, just 400 meters away, and it holds promise because three different types of terrain meet there, making it a great target for investigation.
Berobot. Kickstarter is the home for all sorts of amazing projects, and there’s a neat new one right now that’s all about robots. Berobot, a small smartphone-controlled desktop robot, can dance and avoid obstacles using infrared sensors. Cutest of all, it’s shaped like the famous logo of Google’s smartphone OS, Android.
Quan’s Robot House Builder Plans. Quantum International Corp. has just revealed its big plans for a robot that can build a home in just one day with almost no input from skilled human builders. Like a giant, smart 3-D printer, the tech will be able to read CAD drawings from an architect and print out layers of concrete to make vertical walls and even dome-shaped roofs. The goal is to build structures faster, perhaps more safely, and with a smaller carbon footprint, but the company’s CEO has stressed that he sees the tech as augmenting construction jobs rather than stealing them.
Think about that thing you do when babies start to crawl, then toddle around your home: the careful, sensible reorganization of precious or dangerous things to shelves above the danger zone. There’s also the removal of obstacles, the placement of special barriers like child gates on the stairway, and moving anything clumsy baby fingers could grab out of reach. But have you ever thought about the similar sort of work you’d have to do if a robot was roaming your rooms?
Chances are you haven’t, unless you’re a Roomba owner like me, tired of seeing the thing get stuck under the low edges of a bed. But the current state of robot tech, particularly the limited capabilities of machine vision and autonomous navigation, means that before a butlerbot (a simple telepresence machine, or even something as complex as an Asimo) arrived in your house, you’d have to arrange things very carefully. And if you wanted robots to interact with your stuff, you’d likely have to put objects in specific spots, arranged so that clumsy robot fingers could grab things like cup handles.
Swedish researchers have a clever idea about helping with this sort of problem, and it’s delightfully simple. Kinect@home is a massive crowd-sourced push to scan 3-D images of everyday objects in participants’ homes using the accurate infrared and optical imaging systems in a Microsoft Kinect sensor. The team has made it as frictionless as possible: it basically involves using a plugin via the project webpage and moving the objects in question in front of the Kinect.
The idea is to build up a massive object library, so that when robots are blundering about our houses they can look items up in the database to work out what they are and how to deal with them. For the database to be successful, it would have to contain almost everything you can imagine, from your couch to your TV to slippers, guitars, mugs, and toys. Take a look around your room right now, and think how daunting it would be to design a robot that could recognize everything you see. Crowd-sourcing is an excellent solution to this.
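The lookup idea can be sketched in a few lines. Everything here is invented for illustration (Kinect@home’s actual data is raw 3-D scans, and the labels and handling hints below are hypothetical), but it shows how a recognized object label could map to a simple handling plan:

```python
# Toy object library: each entry pairs a recognized label with
# hypothetical handling hints a household robot might consult.
OBJECT_LIBRARY = {
    "mug": {"graspable": True, "grab_point": "handle"},
    "couch": {"graspable": False, "grab_point": None},
    "slipper": {"graspable": True, "grab_point": "heel"},
}

def plan_interaction(label):
    """Return a simple handling plan for a recognized object label."""
    entry = OBJECT_LIBRARY.get(label)
    if entry is None:
        # Unrecognized objects are exactly what crowd-sourced
        # scanning aims to make rare.
        return "unknown object: proceed cautiously"
    if not entry["graspable"]:
        return "navigate around it"
    return f"grip by the {entry['grab_point']}"
```

A robot that recognizes a mug would be told to grip it by the handle, while an unrecognized lamp would trigger cautious behavior, which is why coverage of the library matters so much.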
Google is working on a related idea, incidentally. It wants to create a database for, well…everything. Using its Knowledge Graph system, Google wants to work out how objects, people, places, events, and other intangibles are connected. Google is already busy scanning our buildings from the street (and from the inside, too), and it’s not a stretch to guess Google will love the idea of Kinect@home. Its Google Goggles image-recognition system already uses sophisticated object recognition to solve a similar problem to the one Kinect@home is trying to address; tied to data from the Knowledge Graph, it could make a powerful tool to aid robot navigation. Other people in the industry are also calling for an attempt to 3-D scan the entire world.
Now, thanks to a neat bit of lateral thinking and a gaming peripheral, you can help with the plan, and it could bring sophisticated robots into your home sooner than might otherwise be feasible.
[Image: Flickr user Shaundon]
Chat about this news with Kit Eaton on Twitter and Fast Company too.