Here at Google’s I/O developer conference in Mountain View, California, the company is announcing new features at a furious rate. Many involve augmented reality and voice commands, creating an experience that feels less like the purely gesture-based interfaces we’ve used on iPhones and Android phones for the past dozen years:
- New search features embed AR content in results, so you can see what New Balance shoes or a shark look like in your real-world surroundings.
- In a restaurant, you can aim your phone’s camera at the menu to see photos, popular dishes, and reviews.
- You can also point the camera at the bill to calculate a tip or split the check.
- Google Go, which targets users in developing economies, will read signs out loud, translating them on the fly if you wish. Google says the feature takes only 100KB of memory and can run on a $35 phone.
- A new Google Assistant feature called “Duplex for the Web” can do things like use a car-rental site to book a reservation for you (unlike last year’s Duplex, it doesn’t call a human on the phone).
- Google is compressing 100GB of machine-learning models down to half a gigabyte, letting the Assistant handle more requests on-device without reaching out to the internet, which the company says makes using your voice faster than tapping.
- The Assistant is also getting a Drive mode aimed at minimal-distraction use in the car.
- A new Assistant feature called “Personal References” lets it use facts from your life, enabling voice commands such as “Remind me to order flowers a week before Mom’s birthday.”
These features aren’t arriving as one big bang: Some are previews and some will launch on Google’s own Pixel phones. But they add up to something that feels distinctly Google-y, leveraging its AI chops in ways that Apple and others may have trouble matching.