
I’ve worked in AI for 2 decades. This flaw in Google’s system shocked me

The Aible founder’s unsettling experience with Google’s advertising AI shows why automated systems need human oversight.


Algorithms that are trained to find patterns in data—commonly known as AI—have already become a transformative force in business. But a recent experience reminded me of some of the fundamental problems with AI algorithms that are trained to optimize for the wrong things—an issue that can create unforeseen consequences, especially when there is no human oversight.


Earlier this year, I purchased a number of Google ads to attract customers to a product being offered by the AI company I founded. Google has a feature where its AI figures out the best search keywords to associate with your ads, and we decided to go with this simple option.

We got lots of clicks on the ads, but to our disappointment, many of those users barely looked at the product and left our website after only a few seconds. On closer inspection, it turned out that Google’s AI had picked up the word “models” from our ad—we offer AI predictive models—and decided that the best way to optimize clicks was to place our ad in front of people searching for the word “models.” The trouble was, a lot of those people were doing adult-themed searches seeking models of a very different kind. So, when they clicked through to our AI-focused and decidedly G-rated website, they left immediately.

The Google Ad server saw this poor engagement and correctly assumed our content wasn’t what the audience was looking for—not even close. Google then lowered our ad optimization scores, served up our ads less frequently, and charged us more money to get the same results.

All of that really surprised me, and I’ve been working in AI for two decades. Google’s AI wasn’t wrong per se; it was just trained to optimize the wrong thing: the volume of clicks rather than the relevance of those clicks. And I had no way of knowing things were going very wrong until it was too late. We only figured out the problem by manually reviewing the search terms that the AI had placed our ads next to. By the time the pattern became evident, we had already wasted thousands of dollars. There was no transparency and no human oversight. I had essentially paid Google to lower my company’s search ranking.

As I learned the hard way, the problems AI creates usually aren’t inherent in the technology itself, but in how the AI is trained and what it is told to optimize. In this case, the AI was almost certainly told to maximize clicks, because that is what generates revenue for Google. Instead, it should have been told to optimize for the business benefit to the customer, especially given that Google was asking us to trust its “smart” system to help us. The AI could easily have been told to optimize for “time on page” or even “conversion activity completed” instead of raw click-through. Of course, that would have made Google less money, at least in the short term.
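To make that distinction concrete, here is a minimal Python sketch, with invented keywords and numbers, of how the same data leads a click-maximizing objective and an outcome-maximizing objective to prefer very different keywords. This illustrates the idea only; it is not Google’s actual system:

```python
# A minimal, hypothetical sketch (not Google's actual system) of how the
# choice of objective changes which keyword an optimizer prefers.
# All numbers and keyword names are invented for illustration.
from dataclasses import dataclass


@dataclass
class KeywordStats:
    keyword: str
    impressions: int
    clicks: int
    avg_seconds_on_page: float
    conversions: int


def click_objective(k: KeywordStats) -> float:
    """Maximize click-through rate alone: the failure mode described above."""
    return k.clicks / max(k.impressions, 1)


def outcome_objective(k: KeywordStats) -> float:
    """Maximize conversions per impression: the advertiser's actual goal."""
    return k.conversions / max(k.impressions, 1)


keywords = [
    KeywordStats("models", 10_000, 900, 3.2, 1),              # many clicks, no intent
    KeywordStats("ai predictive models", 800, 60, 95.0, 12),  # fewer, better clicks
]

for objective in (click_objective, outcome_objective):
    best = max(keywords, key=objective)
    print(f"{objective.__name__} prefers: {best.keyword}")
# click_objective prefers "models"; outcome_objective prefers "ai predictive models".
```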

You may have run across this dumb AI phenomenon when Amazon or another online retailer serves you an ad for a product that the recommendation AI thinks will interest you. Often, the AI ends up making mistakes that no human ever would. Most of us have had the experience of purchasing an item—such as a replacement toilet seat cover—only to be followed around the internet by ads for the same item. Unless you’re a toilet seat collector, it’s highly unlikely you’re in the market for multiple toilet seat covers. There are plenty of other similar examples of AI getting it very wrong. Visitors to out-of-town hotels get bombarded with ads for the very same hotel a few days after they check out. Facebook sometimes shows people ads for items very similar to the ones they just finished selling on eBay.


This problem shows up on social media as well. Because AI learns only from what it has already seen, every action you take, every tweet you make, and every data point of feedback you give it further refines what the AI will present to you. You see that now on your Facebook news feed. The reason you find yourself agreeing with most of the opinions you read is that an algorithm has decided to highlight the stories it believes are relevant to you. This has made for an effective advertising business for Facebook, but it has also created filter bubbles where people aren’t exposed to opinions different from their own.

Often, avoidable blunders are a sign that an AI’s programmers haven’t optimized their algorithm for the right thing and haven’t enabled the users of the model to give it direction. In my case, Google could have asked me to choose from a few different business outcomes I wanted to optimize for. Companies focused on building brand awareness might optimize for users who spend time on their landing page, while companies focused on selling a product might want users who buy or sign up. Moreover, when the AI notices a pattern, such as visitors from adult-themed “models” searches clicking on our pages and leaving immediately, it could proactively flag it to us and ask whether something was going wrong.
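As a sketch of what such a flag might look like, here is a simple, hypothetical rule, again with invented data and thresholds, that surfaces keywords drawing lots of clicks but almost no engagement, exactly the pattern we had to find by hand:

```python
# A hypothetical guardrail, not a real Google feature: surface keywords with
# strong click-through but near-instant exits so a human can review them
# before more budget is spent. Thresholds and data are invented.

def flag_for_review(stats, min_ctr=0.05, max_seconds=5.0):
    """Return keywords whose clicks look strong but whose engagement is poor."""
    flagged = []
    for keyword, impressions, clicks, avg_seconds_on_page in stats:
        ctr = clicks / max(impressions, 1)
        if ctr >= min_ctr and avg_seconds_on_page <= max_seconds:
            flagged.append(keyword)
    return flagged


stats = [
    # (keyword, impressions, clicks, average seconds on page)
    ("models", 10_000, 900, 3.2),
    ("ai predictive models", 800, 60, 95.0),
]
print(flag_for_review(stats))  # ['models'] -- the pattern a human should see early
```

Even a rule this crude would have flagged our problem within days rather than after thousands of dollars had been spent; the point is not the specific thresholds but that the system pauses and asks a human.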

The lack of human consideration in particular has become an increasingly big problem with AI. For instance, several years ago, Coca-Cola, Walmart, Starbucks, and several other major brands pulled their ads from YouTube after they learned that the site had been running their ads next to deeply offensive video content. Google (which owns YouTube) had trained the AI to be smart enough to pair ads with videos that get lots of fresh clicks, but too dumb to avoid content that was racist, violent, homophobic, and anti-Semitic. Google’s AI was designed to increase sales, but it wound up damaging brands.

Despite Google’s promises to improve its AI, major problems persist. In June, two independent research groups found that Google ads for organizations such as UNICEF were appearing on sites that publish fake information about COVID-19 and vaccines, helping those sites monetize their content. Last week, Google announced it will ban ads promoting coronavirus conspiracy theories, remove ads from pages that promote these theories, and demonetize sites that violate the policy, starting on August 18.

Google’s AI problems go even deeper. Google’s Keywords Planner, which helps advertisers select search terms, offers hundreds of keyword suggestions related to “Black girls,” “Latina girls,” and “Asian girls,” the majority of which are pornographic, according to research by The Markup. However, searches for “White girls” and “White boys” return no suggested terms at all.

Now that we’re in the midst of a dangerous and fast-moving pandemic, AI needs human oversight more than ever. A new reality has negated the power of AI trained on old data. We have to expand the notion of AI to find better ways to overlay human intuition and domain knowledge on top of algorithmic approaches. And we must realize that human knowledge and AI working together are far superior to either one acting alone. This is the moment for practitioners of AI to bring in the humans.


Arijit Sengupta is the founder and CEO of Aible.
