Today Bing is announcing a revamp of its front end, to make its search results more useful for users. But what’s much more interesting is what’s happening under the hood on the back end, as Microsoft re-architects how the data used for search results is collected, stored, and repurposed.
“We decided we needed to reinvent search,” Bing director Stefan Weitz tells Fast Company.
Is that all?
When Google was created over a decade ago, the Internet was basically a set of pages. Google’s innovation was to develop a system that would locate what pages existed on the Internet and then determine how relevant each was to specific keywords. For that, it created an architecture of “indexing” and “ranking” pages. Thus was Google’s famous search algorithm born.
Bing, however, believes the approach of indexing and ranking pages is no longer sufficient. Helping people find answers shouldn’t just be about pointing you to documents elsewhere and making you do the work of clicking over and figuring out if they have the information you need.
Instead, Bing thinks search should be more about helping you get things done right in the search results themselves. In cases where it can reasonably predict what you’re trying to accomplish, search should provide widgets in the results that let you get the job done. For example, if you enter the keywords “The Avengers Chicago,” it’s reasonable to assume you’re looking for movie show times. Why not post a list of show times right in the results, instead of making you click over to a page of movie listings? (See: Bing to Lap Google in Making Search an App?)
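To make the idea concrete, here is a minimal sketch of how a query might be routed to a task widget instead of a page list. This is purely illustrative, not Bing's actual system: the rule, the function names, and the tiny hard-coded vocabularies are all hypothetical.

```python
# Hypothetical intent rule: a known movie title plus a city name suggests
# the user wants showtimes, so the engine can render a widget inline
# instead of (or alongside) a list of page links.
KNOWN_MOVIES = {"the avengers"}
KNOWN_CITIES = {"chicago", "new york"}

def detect_intent(query: str) -> str:
    """Classify a query as a task ("showtimes_widget") or a generic page search."""
    text = query.lower()
    has_movie = any(movie in text for movie in KNOWN_MOVIES)
    has_city = any(city in text for city in KNOWN_CITIES)
    if has_movie and has_city:
        return "showtimes_widget"
    return "page_results"

print(detect_intent("The Avengers Chicago"))  # showtimes_widget
print(detect_intent("The Avengers plot"))     # page_results
```

In a real engine the lookup tables would be replaced by the kind of massive object database the rest of this article describes; the point is only that recognizing *what* the query is about lets the engine choose an action, not just a ranking.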
But to provide that level of service, a simple list of pages on the Internet isn’t enough. You need a whole different kind of database on the back end. And that’s what Bing is working on creating.
“Our goal is to model every object on the planet,” Weitz says. So far the company has compiled a database of 300 million objects, from computer mice to buildings. As Bing’s bots crawl the web, they identify pages that have information about those objects. And then they use that information to develop an understanding of what kinds of things people might want to do with that object.
“We’re literally no longer indexing text,” Weitz says. “We’re trying to associate data that exists on the web in all forms with the physical object that spawned it in the first place.”
The goal is to use the understanding Bing is gathering, and the data it’s collecting, to identify and build the kinds of apps for search results that will help you accomplish tasks without making you click over to a second page.
“It’s so important for us to understand the object layer of the web because that tells us what actions people can perform on it,” Weitz says. “So suddenly, you go from asking a question like, ‘Where do I buy a mouse?’… to knowing that there are different applications that can fulfill that request.”
Google, of course, is also working on this problem. Back in March, a Google executive told the Wall Street Journal the company had compiled a similar, massive encyclopedia of “entities,” or people, places, and things.
And they’re not the only ones. Facebook is reportedly compiling a database of the things people talk about and share on the social network. And Apple is working on the problem too, so that Siri can understand the questions users ask it.
All of which is to say that a massive game change is waiting in the wings in the search business. And it won’t just affect the search competitors as they race to develop the most powerful models. It will also affect the SEO industry, which has spent the past decade trying to master PageRank. The more the search engines create apps to help people accomplish tasks right in the results, the fewer referrals they’ll be making to outside pages.