You might think of it as the Mark Zuckerberg Full Employment Act.
When the Facebook founder and CEO let it be known recently that the social network was doubling the number of people working on digital security and content review to 20,000, it was easy to interpret the move as a singular act of damage control by a company in deep trouble.
But the way Accenture’s Paul Daugherty and Jim Wilson look at it, Facebook’s hiring spree is reflective of a much bigger trend unfolding across the economy: a boom in jobs meant to ensure that systems of artificial intelligence are in step with legal and regulatory obligations, ethical responsibilities, and community standards.
“Facebook is a good example of a company where its technical capabilities got out ahead of its ethical capabilities,” says Wilson, Accenture’s managing director of information technology and business research and co-author with Daugherty of the new book Human + Machine: Reimagining Work in the Age of AI. “Now they’re playing catch-up.”
Many other companies, Wilson and Daugherty believe, are in the same spot—or will be soon. “We’re at a tipping point,” says Daugherty, Accenture’s chief technology and innovation officer. “There’s a new premium on trust that hasn’t been there before.”
All told, they expect U.S. businesses to hire several million people into these “responsible AI” positions over the next several years. In fact, they say, for every technical job that involves developing AI, corporations will require at least two ancillary jobs to flag errors and bad decisions made by the machines, manage AI performance in light of societal impacts, and so forth.
If they’re right, that’s a huge deal. Two to three million jobs would be on par with some of the largest occupations in the country today: personal care aides, registered nurses, janitors, stock clerks, and waiters and waitresses.
Not everybody buys Daugherty and Wilson’s thesis. “These are just corporate guys with rose-colored glasses,” says entrepreneur Andrew Yang, who has launched a long-shot bid to become president of the United States in 2020 by highlighting the threat of automation and AI to wipe out jobs (and who has proposed that every American receive a $1,000-per-month basic income to help survive the disruption).
Although Yang says that “it’s quite possible some new roles will be created,” along the lines of what Daugherty and Wilson are talking about, he is more than a little skeptical that the total will be substantial. “The numbers just don’t add up,” he asserts.
A Changing World
For their part, Daugherty and Wilson address the prospect of job loss in their book, and maintain that the challenge ahead is largely a skills issue. (To that end, they are donating the proceeds from Human + Machine to nonprofits that are dedicated to reskilling those displaced by technology.)
Daugherty and Wilson fall into a camp of experts who foresee robots not so much eating all of our jobs but, rather, chewing many of them up and spitting them back out. The result: Factories and offices are beginning to function very differently than they have in the past, with people and ever-smarter machines increasingly working side by side.
“The world has changed,” says Joshua Cohen, a philosopher on the faculty of Apple University, the company’s in-house training program. So far, he adds, “there’s not a set of norms” for handling matters of fairness, privacy, transparency, and other areas of concern that AI platforms invariably engender.
But these conventions are starting to emerge—and, as they do, workers are being called upon to tackle assignments that they never had to consider before.
“It’s testing, measuring, and redesigning for when we see . . . unintended consequences” from AI, says Cathy Bessant, chief operations and technology officer at Bank of America, which this month announced that it would be the founding donor of the Council on the Responsible Use of Artificial Intelligence run by Harvard’s Belfer Center for Science and International Affairs.
Hundreds of employees presently engaged in this sort of work at BofA are listed as software developers. “But they aren’t all developers,” Bessant explains. “We don’t know what to call them. We don’t have the right taxonomy yet.”
Nomenclature aside, there’s no doubt that as the deployment of AI expands, so will the need for people to assess and adjust what the machines are doing. “Equations,” says Bessant, “aren’t the same as judgment.”
Trainers, Explainers, and Sustainers
In Human + Machine, Daugherty and Wilson lay out three broad job categories for this new era.
The first, dubbed “trainers,” will instruct AI systems on “how they should perform certain tasks or how they should, well, act a little more human.” For example, an “empathy trainer” will direct AI systems to display more compassion. “Data hygienists” will try to root out any prejudice being programmed in at the front end.
After all, it isn’t just Starbucks employees who could benefit from being made more conscious of their inherent biases. Studies have concluded that, sometimes, “the downsides of AI systems disproportionately affect groups that are already disadvantaged by factors such as race, gender, and socio-economic background,” Kate Crawford, co-founder of New York University’s AI Now Institute, and the University of Washington’s Ryan Calo have noted.
For instance, it has been shown that an algorithm used to help judges determine proper sentencing wrongly pegged black defendants as future reoffenders at almost twice the rate of white defendants. White defendants, meanwhile, were mislabeled as low risk more often than black defendants.
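The kind of disparity that audit uncovered can be made concrete with a small sketch of the check a “data hygienist” might run: comparing false-positive rates between groups. Everything below is invented for illustration — the group labels, the records, and the `false_positive_rate` helper are assumptions, not the study’s actual data or method.

```python
# Hypothetical records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", False, True),
]

def false_positive_rate(rows):
    # Among people who did NOT reoffend, what share were flagged high risk?
    non_reoffenders = [r for r in rows if not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
```

With this toy data, group A’s false-positive rate is several times group B’s — the same shape of imbalance the sentencing study reported, and the sort of gap these new roles would exist to catch and correct.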
The second job category defined by Daugherty and Wilson is “explainers”—those who will make sense of what’s happening inside the black box of AI for colleagues and customers alike.
“We are focused on the translator roles,” says Deanna Mulligan, the CEO of Guardian Life Insurance, a 158-year-old company where actuaries are being trained by General Assembly to be data scientists, and client-service representatives have been taught to program machines via “robot play dates.”
“Our goal,” says Mulligan, “is to help employees understand the power of technology in serving clients, as well as understand the gaps in technology and the role of human coaches.”
The third job category is “sustainers.” As Daugherty and Wilson describe it, these “individuals will act as watchdogs and ombudsmen” so that AI systems comply with government rules (such as Europe’s General Data Protection Regulation, which kicks in next month), as well as adhere to moral values.
Daugherty and Wilson anticipate that some of those who take these trainer, explainer, and sustainer jobs will be technologists. Many will be highly educated. Not everyone, though. “In a lot of cases, we’ll be able to teach people with different backgrounds to do these things,” Daugherty says.
Bessant and Mulligan agree, and both BofA and Guardian Life say they’re committed to making sure that folks at all levels of the company gain new AI-related skills.
Unfortunately, however, they seem to be more the exception than the rule, at least for now. An Accenture survey of 1,200 CEOs and top executives issued in February found that 74% plan to use AI to automate tasks to a large or a very large extent in the next three years. But only 3% of them intend to increase investment in worker training and re-skilling programs over the same span.
Whether demand for “responsible AI” turns into a significant job engine, as Daugherty and Wilson predict, remains to be seen. But this much is beyond question: If corporate America fails to adequately train its workers for these new roles, continuing a long abdication, it will short-circuit the entire scenario.