The most unbelievable part of Microsoft’s eye-catching Productivity Future Vision video, released earlier this week, isn’t the see-through refrigerator, the software app that discovers a product design breakthrough on its own, or the plants growing on the wall of the ethereally white office. (That last one, actually, is a real office on the Microsoft campus.)
No, the most unbelievable part is how clean every surface is–in the car, in the office, and even in a kitchen where a bake sale project is underway. David Jones, acting director of the video’s creator, Microsoft’s Envisioning Lab, says it’s a common criticism, but then again, nearly every surface can act as a crisp, contextually savvy display. Wouldn’t you keep your counter cleaner if you could watch Hulu on it?
We interviewed Jones to glean some insights from the video, and from other work being done at the Envisioning Lab, where the job is figuring out what the future of work and computer interaction might look like, inviting customers in to test it out, and working with other divisions to gather ideas and implement them in Office and other projects–projects like HoloDesk.
Watch the video, then read on to get the details on how that future works.
To take a stab at showing what we’ll be doing for work in the future, you have to come up with some conception of what we’ll actually be doing then. The work in the video seems pretty high-level, knowledge-based work. Is that something you anticipate a further shift toward?
Yes, we do, but that’s not everybody’s work, necessarily. One of the challenges with the video is asking ourselves, “Who are the characters? Do they need to go to an office to work? What hardware do they use, what software do they use, what does their office look like?”
We went with a green theme, work that was good for the environment, and we ended up choosing a “Greenwall,” a living plant wall that changes the atmosphere of the room. The problem with picking any particular industry–law firm, dental office, manufacturing–is that we want it to be general enough that everyone can relate to it. We have a mom who’s traveling to Johannesburg, a concierge at a hotel, an engineer providing research for a model, workers in that office … we develop the story in quite a bit of detail, actually. But what we tried to do is show people collaborating with one another, analyzing data, and making decisions.
Speech input has a really big role in the video. It’s something Microsoft sees as really important in the future, I’d take it?
The general theme is really more natural user interfaces: speech, gestures, touch.
It’s very noticeable–and the camera angle is distinctly overhead–when the video shows one person actually typing on a keyboard.
Typing is one of the fastest ways to input data–test after test shows it. We’ll definitely be typing in the future, but we believe speech recognition is something we’ll see much more of. You also see a little girl hand-write a note to her mom, and the mom draws a heart shape in the air to deliver to her daughter. The best thing we could do in the future is make it easy to use whichever method of input is most appropriate.
That was a theme that came up with the coverage of (iPhone 4S voice assistant) Siri–that voice input seemed so much better, but people weren’t necessarily comfortable walking down the street, saying out loud, “Remind me to buy expensive chocolate for my wife …”
You’ll notice that when (video character) Qin looked at a voicemail in the subway station, it provided him with a transcription of the message and gave him a text interface to answer, rather than playing the audio or asking him to speak.
Because it knew he was in that subway station.
It knew exactly where he was. Contextual awareness, so that the information is personalized and contextually relevant, and brought right to you. When you see the guy typing in his office, too, you notice that information was brought up to support his point while he was typing.
The second-coolest thing in the video, as I see it, is the bit where Qin, while waiting for the train, gets a list of tasks he could accomplish before his train arrives. That’s pretty nifty.
He enters what we call “Five-Minute Mode” … His device looks through his email for simple questions he could answer. It looks for a quick voicemail he could listen to and respond to. It knows he buys coffee all the time, and it knows there’s a cart right up the stairs. It suggests he call his friend for her birthday.
… People are overwhelmed by the amount of information in their lives. And computers are not quite saying yet, “Here’s what you should be doing with all that information.” We have access to all that data, all that expertise (of people). Computers should be making people more productive, based on what they know can be done.
The coolest, of course, is the fridge you can see through, that labels everything inside it. How close are we to that?
Actually, fairly close. There’s a transparent OLED display, products with embedded information, linked to a database of recipes …
So, possible, but not exactly affordable.
Not at all affordable, but we wanted to put something fun in there. We wanted to show how you could use contextually relevant information in the most surprising of places.
Privacy isn’t brought up explicitly in the video. Is it something that isn’t a main focus of the lab?
It’s a very important issue, in fact, and we talk about it with customers in the lab constantly. You’ll notice that when Iyaz is traveling in the shuttle, she chooses to share her information with the hotel. (That) you have absolute control over how your information is used is extremely important to us … With that information, the concierge knows how many bags she’ll be bringing, how long she’s been traveling for, and when she last ate, so he can provide the best possible service. That’s what’s possible, but privacy is a big part of that.
Note: Q&A has been condensed and edited.