Nao, from young French firm Aldebaran Robotics, is one of the better-known small humanoid education and research robots–but he’s about to be replaced: by Nao Next Gen. The new robot is an evolution of the existing design, but it may also be a revolution, because the number of tweaks is significant. As well as boosting the in-robot cameras to a twin HD-video setup, the team has given the bot an upgraded 1.6GHz Atom CPU, a better walking algorithm, finer control of its servos’ torque so the robot’s movements are more fluid and powerful, and a few other tweaks.
But most significantly, they’ve added voice recognition from Nuance–the same innovative firm whose technology is behind the amazing powers of Apple’s Siri digital personal assistant in the iPhone 4S. Of course Nao will react differently to voice commands than Siri does, and it’s unlikely to call your wife or set up a calendar entry on voice command. But the move is significant because it introduces a much more natural control interface for the robot, thanks to Nuance’s impressively accurate voice recognition. And that hints at a near future of household robots you can talk to–butler-bots with powers that actually are like Siri’s, including the ability to act on a request like “Robot, answer the doorbell, please.” All it’ll take is enough research and development, perhaps aided by the new Nao itself.
This week saw what is billed as the first airshow of its kind–an aerial ballet of UAVs in American airspace, specifically over Albuquerque, as part of a convention organized by the Technical Analysis and Applications Center at New Mexico State University. Four models of electric-motor-powered UAVs were on display, beaming back live video to show off their surveillance powers and their remote-controlled agility.
The demonstration is timely, as a debate is opening up about the future of UAVs in U.S. airspace. “Drone Journalism? The Idea Could Fly In The U.S.” was a story in the Washington Post this week, discussing a possible future for privately funded aerial surveillance after the FAA proposes new rules in January that could allow UAVs to fly in ways currently not permitted. Considering the way ground-level journalists have been blocked from accessing Occupy sites across the U.S., the implications of a change like this are huge.
Iran’s news agency is alleging that an American drone invaded its airspace on December 4th, and that Iranian forces somehow managed to bring it to the ground and capture it. Now the Iranians have shown off their purported prize–an RQ-170 Sentinel, an air vehicle still classified in the U.S.
Some U.S. officials, while staying within the limits of secrecy, have suggested that the drone does indeed appear to be real. The RQ-170 is a highly stealthy, autonomously flying craft, designed to slip unnoticed into otherwise interdicted airspace and return detailed data on the battlefield situation. In form it’s similar to the famous B-2 stealth bomber, and the chief worry is that the Iranian authorities will share the secrets found in the RQ-170 with Russian and Chinese allies (assuming the internal code hasn’t been automatically erased by security features). Luckily, many of the robot’s stealth features are up to 35 years old.
iRobot Hits iPhone (But Not As You Think)
The Roomba may be one of the few robots you’ve met in real life–it’s an iRobot innovation designed to save you the bother of vacuuming your home. And now its popularity has risen to a new level: It’s the star of an iPhone game, this time imbued with a personality–and teeth, noses, and eyes. The more dust you vacuum up in the game, the more strength you build up.
Isaac Asimov’s three laws of robotics (such as “a robot may never injure a human being, or, through inaction, allow a human being to come to harm”) are well known among sci-fi fans, but they are pure fantasy–albeit highly logical. As robots come ever more into our lives, though, and eventually gain a degree of autonomous behavior (even if the artificial intelligence of sci-fi fame is still a way off), we’ll have to start shaping our laws around them. Which is why the first-ever We Robot convention has now been scheduled for April 2012, with the goal of discussing the legal and policy matters that will come into play when robots intersect with human lives. If you’re dubious that this needs to happen, ponder this question: Who’s to blame if a Google self-driving robot car accidentally runs over a pedestrian on a crosswalk?
Telepresence may become an important trick for remote workers: Thanks to the way our minds work, we relate slightly differently to someone who is physically present versus present by phone, and moving, animated, video feed-sporting telepresence bots are about as close as we can currently get to projecting ourselves into a remote location. The trouble is that these interactions tend to be pretty emotionless, because all that’s conveyed through the bot–typically controlled via computer, like a game–is voice and image. That’s something research at Japan’s Toyohashi University of Technology may change, thanks to a telepresence innovation that lets the user “feel” what the robot’s up to.
The feedback comes via a haptic belt, which buzzes directionally to let you know when the robot is approaching an object, and stereoscopic goggles, which give the wearer a true 3-D view of the robot’s field of vision. The belt also has motion sensors that let you control the robot’s movements a little more naturally–by leaning, as if you were riding a Segway. Clever, and no doubt a much more immersive experience. The invention makes us wonder if soon you’ll be able to shake a telepresence robot’s hand in a business situation, and thus virtually shake the hand of its operator.