Google says its virtual assistant is opening its eyes to the visual world. Assistant can look at images shot with a phone and analyzed with Google’s new “Lens” computer vision AI, then provide additional information. It can talk about what it sees through Lens.
You can take a picture of a concert billboard, for example, and Assistant can read the text, identify the band, and even provide song samples. If you’re in a foreign country, Assistant can look at a Lens image of a menu, translate it, and then show images of what the dishes look like. It can identify a painting in a museum and provide more information about it.
Google says you can now type messages to Assistant as well as speak them. The company said it’s working hard on the difficult problem of “conversationality,” and is making progress. Assistant is looking a lot like a bot that is quickly expanding its view and understanding of the world, the user, and the context in which it’s being used.