Gary Smith is the Fletcher Jones Professor of Economics at Pomona College.
We are told AI is on an inevitable rise and humans simply can’t measure up. In no time, the headlines say, artificial intelligence will take our jobs, fight our wars, manage our health, and, perhaps eventually, call the shots for the flesh-and-blood masses. Big data, it seems, knows best.
Don’t buy it.
The reality is, computers still can’t think like us, though they do seem to have gotten into our heads. Intimidated by the algorithms, humanity could use a little pep talk.
It is true that computers know more facts than we do. They have better memories, make calculations faster, and do not get tired like we do.
Robots far surpass humans at repetitive, monotonous tasks like tightening bolts, planting seeds, searching legal documents, and accepting bank deposits and dispensing cash. Computers can recognize objects, draw pictures, and drive cars. You can surely think of a dozen other impressive, even superhuman, computer feats.
It is tempting to think that because computers can do some things extremely well, they must be highly intelligent. In a Harvard Business School study published in April, researchers compared how much people’s opinions about things like the popularity of a song were influenced by “advice” attributed either to a human or to a computer. While a subset of expert forecasters found the human more persuasive, most people in the experiment found the advice more persuasive when it came from the algorithm.
Computers are great and getting better, but computer algorithms are still designed to have the very narrow capabilities needed to perform well-defined chores, like spell checking and searching the internet. This is a far cry from the general intelligence needed to deal with unfamiliar situations by assessing what is happening, why it is happening, and what the consequences are of taking action.
Computers cannot formulate persuasive theories. Computers cannot do inductive reasoning or make long-run plans. Computers do not have the emotions, feelings, and inspiration that are needed to write a compelling poem, novel, or movie script. Computers do not know, in any meaningful sense, what words mean. Computers do not have the wisdom humans accumulate by living life. Computers do not know the answers to simple questions like these:
If I were to mix orange juice with milk, would it taste good if I added salt?
Is it safe to walk downstairs backwards if I close my eyes?
I don’t know how long it will take to develop computers that have a general intelligence that rivals humans. I suspect that it will take decades. I am certain that people who claim that it has already happened are wrong, and I don’t trust people who give specific dates. In the meantime, please be skeptical of far-fetched science fiction scenarios and please be wary of businesses hyping AI products.
Forget emotions and poems: Take today’s growing fixation with using high-powered computers to mine big data for patterns to help make big decisions. When statistical models analyze a large number of potential explanatory variables, the number of possible relationships becomes astonishingly large: we are talking in the trillions.
If many potential variables are considered, even if all of them are just random noise, some combinations are bound to be highly correlated with whatever it is we are trying to predict through AI: cancer, credit risk, job suitability, potential for criminality. There will occasionally be a true knowledge discovery, but the larger the number of explanatory variables considered, the more likely it is that a discovered relationship will be coincidental, transitory, and useless, or worse.
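This is easy to demonstrate. The short simulation below (a minimal sketch; the variable counts are illustrative, not drawn from any study cited here) generates thousands of purely random "explanatory" variables and a purely random outcome, then finds the strongest correlation among them. With enough candidate variables, an impressive-looking correlation virtually always turns up, even though every variable is noise:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs = 50          # observations, e.g. patients or loan applicants
n_vars = 10_000     # candidate explanatory variables, all pure noise

# The outcome we are "predicting" is itself pure noise
outcome = rng.standard_normal(n_obs)
noise_vars = rng.standard_normal((n_vars, n_obs))

# Pearson correlation of each noise variable with the outcome
outcome_c = outcome - outcome.mean()
vars_c = noise_vars - noise_vars.mean(axis=1, keepdims=True)
corrs = (vars_c @ outcome_c) / (
    np.linalg.norm(vars_c, axis=1) * np.linalg.norm(outcome_c)
)

# The best of 10,000 coincidences looks like a real discovery
print(f"strongest |correlation| among noise variables: {np.abs(corrs).max():.2f}")
```

The "discovered" relationship is, by construction, coincidental: rerun the simulation with a fresh sample and the winning variable almost certainly fails to repeat, which is exactly the transitory, useless pattern described above.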
Algorithms that monitor word usage on Facebook or Twitter to evaluate job applicants might find spurious correlations that are poor predictors of job performance but have disparate impacts across genders, races, sexual orientations, and ages.
In 2016, Admiral Insurance developed a car-insurance algorithm that considered whether people liked Michael Jordan or Leonard Cohen on Facebook. A few hours before the scheduled launch, Facebook said that it would not allow Admiral to access Facebook data; Facebook may have been less concerned about discrimination or privacy than the fact that it has its own patented algorithm for evaluating loan applications based on the characteristics of you and your Facebook friends.
More recently, an Amazon job-application algorithm that was trained primarily on the resumes of male engineers reportedly “penalized” resumes containing the word “women.” Amazon eventually killed the software once it became evident that, despite its engineers’ best efforts, it could not be certain the algorithm was no longer discriminating against women.
In 2017, the founder and CEO of a Chinese company behind an AI lending app argued that, “While banks only focus on the tip of the iceberg above the sea, we build algorithms to make sense of the vast amount of data under the sea.” What useful data are under the sea? You might be surprised to learn that it is all about smartphones.
The CEO bragged that, “We don’t hire any risk-control people from traditional financial institutions . . . We don’t need human beings to tell us who’s a good customer and who’s bad. Technology is our risk control.” Among the data treated as evidence that a person is a good credit risk is how often incoming calls are answered. Not only is a propensity for answering phone calls a nonsensical credit signal, it surely discriminates against people whose religious beliefs forbid answering the phone on certain days or at certain times of day.
Computers cannot assess whether the patterns they find are meaningful or meaningless. Only logic, wisdom, and common sense can. Just ask the veterans of the 2016 Hillary Clinton campaign, which relied heavily on a software program and ignored, until it was too late, human pleas to pay attention to Michigan and Wisconsin. Seasoned campaign workers knew there was an enthusiasm deficit, but enthusiasm is difficult to measure, so the computer ignored the experts.
The situation is exacerbated when the discovered patterns are concealed inside black boxes, so that even the researchers and engineers who design the algorithms do not understand how they reach their conclusions. Often, no one fully knows why a computer concluded that this stock should be purchased, this job applicant should be rejected, this patient should be given this medication, this prisoner should be denied parole, or this building should be bombed.
The combination of AI with digital advertising and highly personal data broadens the scope of the potential damage, as in the case of Trump campaign contractor Cambridge Analytica. The avalanche of personal data collected by business and government is being used to push and prod us to buy things we don’t need, visit places we don’t enjoy, and vote for candidates we shouldn’t trust.
In the age of AI and big data, the real danger is not that computers are smarter than us, but that we think computers are smarter than us and therefore trust computers to make important decisions for us. We should not be intimidated into thinking that computers are infallible. Let’s trust ourselves to judge whether statistical patterns make sense and are therefore potentially useful, or are merely coincidental and therefore fleeting and useless.
Human reasoning is fundamentally different from artificial intelligence, which is why it is needed more than ever.
Smith is the author of The AI Delusion, published this month by Oxford University Press.