The end goal of the Internet of Things is to make every object in your life programmable. But our smart objects are still pretty dumb. They don't talk to each other, and most can only do one thing: a smart lightbulb, for instance, can dim and brighten, but it can't tell your TV to change the channel for you. The result of three years of research at MIT's Fluid Interfaces Lab, Valentin Heun's Reality Editor aims to address these problems. It's an augmented reality app that lets you link together the smart objects around you just by drawing connections between them with your finger.
An example will help here. Let's imagine you have a smart thermostat, and you want it to raise the heat in your house when you get out of bed in the morning. Provided you also had a smart bed (more on that in a minute), you could just look at your thermostat through the Reality Editor smartphone app. A Minority Report-style overlay would then pop up, giving you options. You'd trace your finger from a virtual circuit on your bed that detects when you climb in or out to a circuit that raises the temperature of your thermostat.
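Under the hood, a link like this amounts to a simple event-driven rule: an output circuit on one object feeds an input circuit on another. Here's a minimal sketch in Python of that idea — every name in it is hypothetical, invented for illustration, not the Reality Editor's or Open Hybrid's actual API:

```python
# Hypothetical sketch of a Reality Editor-style link between two
# smart objects: a bed's occupancy sensor drives a thermostat.
# All class and method names here are invented for illustration.

class SmartObject:
    """A smart object exposes named 'circuits' (I/O points) that
    other objects can be linked to."""
    def __init__(self, name):
        self.name = name
        self._subscribers = {}  # circuit name -> list of callbacks

    def link(self, circuit, callback):
        """Drawing a connection in the editor amounts to subscribing
        a callback to one of this object's output circuits."""
        self._subscribers.setdefault(circuit, []).append(callback)

    def emit(self, circuit, value):
        """Fire an output circuit, notifying every linked object."""
        for callback in self._subscribers.get(circuit, []):
            callback(value)


class Thermostat(SmartObject):
    def __init__(self):
        super().__init__("thermostat")
        self.target_temp = 18

    def on_bed_occupied(self, occupied):
        # Input circuit: raise the heat when the bed reports "out".
        if not occupied:
            self.target_temp = 21


bed = SmartObject("bed")
thermostat = Thermostat()

# The finger-drawn link: bed's occupancy output -> thermostat's input.
bed.link("occupied", thermostat.on_bed_occupied)

bed.emit("occupied", False)  # you climb out of bed in the morning
print(thermostat.target_temp)
```

The point of the sketch is that no object needs to know about any other in advance; the user supplies the wiring after the fact, which is exactly what drawing the connection on screen does.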
Of course, there are no smart beds. Yet. But there’s no reason there couldn’t be, and the Reality Editor is trying to make that, and all sorts of other smart objects, possible.
“Imagine a future where everything around you can be controlled,” Heun says. Right now, companies like Amazon and Google, which are designing objects for the smart home, use artificial intelligence and big data to try to anticipate users' needs: the Nest smart thermostat, for example, learns what temperature someone likes. But Heun says this is reductive, because the Internet of Things should empower users to have more control over the world around them, not take it away.
The Reality Editor (and its free associated developer platform, Open Hybrid) aims to give users this power to fully control the smart objects in their lives. And if one object doesn’t have the functionality they want? They can just link it to another one that does: a sort of meatspace IFTTT. Imagine having a lamp that turns down the volume on your TV when you dim the lights, a light switch that also turns off your television, or a car that switches on your home A/C in the summer when you leave the office. The Reality Editor makes all of this possible.
Although it looks futuristic, the Reality Editor isn't one of MIT's usual highfalutin proofs-of-concept. It really works, and you can download it now. It uses fingerprint-like codes that look a bit like Pentagram's new identity for the MIT Media Lab to identify smart objects when viewed within the app. (Soon, Heun tells me, even these codes won't be necessary: the app will be able to identify an object based solely on its color and shape.) It then calls up a literal HTML webpage representing that object's functionality and overlays it on the gadget so you can program it.
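Because each object's interface is just a webpage, a developer can describe its controls in ordinary HTML that the app then anchors over the physical object. A hypothetical sketch of generating such an overlay page — this is illustrative markup, not Open Hybrid's actual format:

```python
# Hypothetical sketch: a smart object's overlay interface rendered
# as a plain HTML page. The Reality Editor fetches a page like this
# and floats it over the physical object in the camera view; the
# structure below is invented for illustration.

def overlay_page(object_name, circuits):
    """Render an object's circuits as a simple HTML control panel."""
    buttons = "\n".join(
        f'  <button data-circuit="{c}">{c}</button>' for c in circuits
    )
    return (
        "<!DOCTYPE html>\n"
        f"<html><body>\n<h1>{object_name}</h1>\n{buttons}\n"
        "</body></html>"
    )

page = overlay_page("thermostat", ["raise temp", "lower temp"])
print(page)
```

Serving the interface as web content is what makes the system open-ended: anything you can express in a webpage can become an object's control surface.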
Right now, the problem facing the Reality Editor is support. If you have the know-how, you can build adapters for all the major smart objects, like the Philips Hue smart lightbulb and Nest, but no consumer products yet support the Open Hybrid platform out of the box. That’s something Heun hopes will change soon, because he thinks people have an almost primal need to tinker with the objects around them: something that, without the Reality Editor, could become increasingly difficult as our homes get more high-tech.
“It goes to the deep origin of humanity: we're tool makers,” says Heun. “We build empowering tools to manipulate the world around us.” Where others see an IoT future in which humans put their smart homes on autopilot, the Reality Editor offers an alternative: taking control.