Work today isn’t working for lots of people. Wages for most Americans haven’t increased above inflation in 40 years. Real unemployment–which includes people no longer looking for work–is above 10% (at least double the headline rate). Full-time jobs with benefits and protections are growing rarer. More than 15% of workers are now employed on-contract or temporarily, one recent study showed. And every indication, from holographic secretaries to Amazon drones, suggests that the workplace will continue to splinter. Robin Chase, cofounder of Zipcar, put it succinctly: “My father had one job in his lifetime. I will have six jobs in my lifetime, and my children will have six jobs at the same time.”
We don’t know what the future of work looks like–whether it will be a wonderful upgrade on today’s conditions, or some kind of dystopia where wages are meager, robots are everywhere, and inequality is rampant. But we can be fairly sure the policies we have today aren’t the ones we’re going to need. If we’re going to cope with the age of advanced automation, and manage the general shift to “flexibility” and worker independence, we need to rethink how we support work going forward.
On the campaign trail, Donald Trump offered simple explanations for what ails American workers. Immigrants are taking your jobs, he said. Trade policies cooked up by distant elites are closing U.S. factories. Overregulation is stopping companies from hiring more people. And, since coming to office, Trump has acted in these areas, killing the Trans-Pacific Partnership (TPP) and ending countless regulations, including ones focused on financial services and oil companies.
It’s questionable, however, whether Trump’s solutions will deliver a significant uptick in employment. Compliance is certainly a burden for companies. But ending, say, a rule that requires financial managers to act in their clients’ best interests seems unlikely to deliver jobs in the American heartland. And, according to economists, automation is a much more important reason for job losses than globalization. Expanded trade with China killed up to 2.4 million U.S. jobs between 1999 and 2011, one recent paper found. Yet despite those losses, American manufacturing productivity is at an all-time high. Robots, it seems, allow manufacturers to produce more output with fewer people.
More to the point, Trump’s rhetoric doesn’t deal with the changing nature of work–the way more of us are working independently, for instance. It’s about bringing back jobs that were lost–nostalgia–not about looking for new forms of sustainable employment. McKinsey says about half of all paid activities in the global economy “have the potential to be automated by adapting currently demonstrated technology,” suggesting enormous churn in the years ahead, even in economic areas that have been relatively unaffected up to now. That includes lab technicians, web developers, lawyers, and even managers.
It’s possible that work as we know it is ending. In the future, we’ll need fewer people to provide the stuff we need, and work will no longer be the universal provider it has been. These days, a lot of work doesn’t pay a living wage (particularly in retail and hospitality), and the relationship between work and reward (the meritocratic dream of America) is breaking down. Many people work hard and yet aren’t paid in accordance with effort, annulling one of the principles that built this country. At the same time, we’ve yet to build a new policy infrastructure to fill the gap between full-time and more contingent work.
Without policy changes, some are already dismissing work as yesterday’s answer and arguing that calls for full employment (where everyone who is willing to work is able to work) are fanciful. “Everybody has doubled down on the benefits of work just as it reaches a vanishing point,” writes the Rutgers University historian James Livingston in his book No More Work: Why Full Employment Is a Bad Idea. “Securing ‘full employment’ has become a bipartisan goal at the very moment it has become both impossible and unnecessary. Sort of like securing slavery in the 1850s or segregation in the 1950s.” Livingston argues that casting even crappy work as a universal solution ends up putting people in a psychological bind. If work is the only way to gain self-respect, and there’s no work to go around, full employment is actually a cruel idea, not an emancipatory one. “The work ethic is a death sentence because [workers] can’t live by it,” he writes.
The trouble with the “fuck work” argument (that’s Livingston’s phrase) is that, for many, work is about more than money. It also provides purpose, meaning, and structure in our lives, getting us out of bed in the morning and stopping us from drinking in the afternoon. It makes us feel part of the collective experience, makes us social, and it gives us respect in our families and among our communities. It’s not easily replaced, which explains why more than half of Americans say they would continue working even after winning the lottery.
“Work is important, not for stern finger-wagging reasons, but because when you see communities where work goes away, you get more negative things than positive things,” says Andrew McAfee, an MIT economist and coauthor of books such as Race Against the Machine and The Second Machine Age. “If we keep encouraging the kind of work we had two generations ago, that would be a mistake. But we can encourage the type of work that’s increasingly out there.”
One way to fix work for the future is to renegotiate the so-called social contract between employers and workers.
In the mid-20th century, corporations and unions reached historic deals to effectively share in economic growth. Workers agreed to be loyal and not strike, and companies in turn agreed to pay generous benefits and guaranteed wage increases. But, starting in the 1980s, companies started worrying about productivity and profits, and they began outsourcing “non-core” workers, everyone from janitors to customer support staff. As they replaced in-house workers with contractors, franchises, and on-demand workers, they tended to pay lower wages and offer fewer benefits.
Meanwhile, the short-termism of Wall Street pressured companies to reduce investment in training and workforce development, which tends to disadvantage workers with fewer skills, who might once have risen up the corporate ladder. “While shareholders and management reap their rewards, workers are experiencing less wage growth, less security, and less upward mobility,” is how a recent bipartisan report from the Aspen Institute puts it.
This isn’t necessarily an anti-corporate message. U.S. companies have faced higher worker liabilities than counterparts in other countries, because benefits here–like health insurance–have traditionally been paid through the employer-employee relationship. In Europe, for example, governments pick up more of the tab. But the shifts in employment have left an uneven labor economy. Today, we have a lot of fully employed people who are well compensated, but also lots of less-than-fully employed people who aren’t.
Meanwhile, the growth of the gig economy has seen a whole new generation of non-payroll staff emerge. Companies like Uber steadfastly treat their workers as subcontractors, thus absolving themselves from having to pay benefits. Paying people under the 1099 part of the tax code, instead of as W-2 workers who receive health benefits, Social Security, Medicare, paid sick days, and vacation leave, saves employers about 30% on each worker, estimates show.
One way to stop the abuse of this binary system would be to set up portable benefit schemes. These would prorate benefits based on hours worked and allow workers to move between gigs and projects more easily. So, for example, a driver who works for both Uber and Lyft could pick up fractionalized benefits from both and accrue money in a universal account. Several regional construction companies already pay into “multi-employer” plans, and unionists and gig companies have advocated for expanding the model more widely.
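The proration mechanics are simple enough to sketch in a few lines of code. Here is a minimal illustration, assuming a hypothetical 5% contribution rate and made-up hours and wages (no real plan's parameters are implied):

```python
# Sketch of a portable benefits account: each employer contributes to a
# single worker-owned account in proportion to hours actually worked.
# The 5% contribution rate is a hypothetical illustration, not a proposal.

CONTRIBUTION_RATE = 0.05  # fraction of gross pay set aside as benefits


def accrue_benefits(work_log):
    """work_log: list of (employer, hours, hourly_wage) tuples.

    Returns the total benefit contribution and a per-employer breakdown.
    """
    breakdown = {}
    for employer, hours, wage in work_log:
        contribution = hours * wage * CONTRIBUTION_RATE
        breakdown[employer] = breakdown.get(employer, 0.0) + contribution
    return sum(breakdown.values()), breakdown


# A driver splitting one week between two platforms accrues from both,
# into the same account.
total, by_employer = accrue_benefits([
    ("Uber", 25, 18.00),  # 25 hours at $18/hour
    ("Lyft", 15, 17.00),  # 15 hours at $17/hour
])
```

The point of the design is that the account, not the job, is the unit that persists: switching platforms mid-week changes only which employer pays in, not where the money lands.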
Another idea is to reclassify workers, so there’s less of a gap between W-2 and 1099 categories. For example, Princeton economist Alan Krueger and Cornell economist Seth Harris have proposed a new compromise category of “independent worker.” This would see employers pay some Social Security and Medicare payroll taxes and allow workers some collective bargaining rights, but wouldn’t give workers benefits like overtime pay and workers’ compensation insurance. That in turn might incentivize employers not to push workers off their payrolls to cut costs.
Though the shift to independent working is often portrayed as a bad thing for workers, many people would welcome the change if it paid as well as traditional employment. A big gig economy survey by McKinsey late last year showed that between 40% and 50% of the workforce in the U.S. and Europe would choose to work on their own (20% to 30% already do, in some form).
Other ways to support that shift include setting up community coworking spaces. These could make up for the social interaction lost when people work independently and perhaps provide workers with access to collectively owned high-end equipment. For example, the hubs could offer maker labs with 3D-printing machines or professional kitchens.
At the same time, we could make it easier for entrepreneurs to start new businesses. With the growth of Silicon Valley and other startup hotbeds, we tend to think of the 21st century as a bountiful time for entrepreneurs. But, in fact, the rate of new business formation has been falling since the 1970s. Millennials are less likely to start companies than baby boomers, something that could be explained by higher student debt (which encourages young people to take safer jobs). Economists also point to the increasing economic power of big companies relative to smaller ones (due to less aggressive antitrust enforcement), and onerous occupational licensing regulations. For example, if you want to work as a florist in Louisiana, you need a license from the state before you can do so.
Over the longer term, as IT plays a larger part in the economy, we may need more radical solutions.
In his book The Rise and Fall of American Growth, the historian Robert Gordon argues that today’s inventions aren’t as consequential as those of the past. Between 1870 and 1970, the American economy was propelled forward by electricity, urban sanitation, chemicals and pharmaceuticals, the internal combustion engine, and modern communications. After 1970, growth has been both “dazzling and disappointing,” he says. The major innovations have been in entertainment, communications, and the processing of information, none of which produces seismic numbers of jobs (in fact, many of these technologies are designed to be “labor-saving”). Facebook employs about 17,000, a far cry from the hundreds of thousands of people GE, IBM, or Ford employed in the past.
Of course, it’s possible that someone will come up with a wonderful new innovation that delivers, say, clean energy for all and that also produces millions of jobs. (The solar industry isn’t bad: In the U.S., it now employs more people than the coal industry). But, in mature economies, at least, that combination seems unlikely. Most of our basic needs are met, and technology has a tendency to make goods cheaper, or even free, over time. That’s bad for creating jobs and bad for capitalism in general. If, for example, you can produce energy from your rooftop solar panel, you may no longer need to buy energy from a centralized utility. Once you’ve installed the equipment, energy becomes essentially “zero marginal cost,” which means there’s less revenue for the power company to use to pay its workers. Futurists such as Peter Diamandis imagine a succession of household goods, from food to transportation, undergoing a process of “demonetization.”
Along with growing work automation, the wider progress of technology may increase calls for a universal basic income (UBI), where the state gives everyone enough money to meet everyday needs. The idea already has plenty of fans on the left and right, and particularly in Silicon Valley, which loves big solutions and can surely see into technology’s future better than most of us.
UBI is often criticized for being anti-work, because it’s assumed that if you pay people money all they’ll do is sit around and do nothing. While that’s possible, trials of the policy have shown it not to be the case. Last year, economists studied seven cash transfer programs in the developing world and found “no systematic evidence” that they discourage work. In fact, UBI advocates argue that putting a floor under the neediest will encourage creativity, and shrink the divide between money-work and socially useful work, like looking after children or grandparents. “A basic income says, in effect, there are too few work hours to go around, so we need to inject liquidity into the mechanism that allocates them,” writes the British journalist Paul Mason in his book Postcapitalism. “The lawyer and the daycare worker would both need to be able to exchange hours of work at full pay, for hours of free time paid for by the state.”
Although we’re still a long way from such a post-capitalist future, such ideas are worth considering now and, indeed, several big basic income trials now underway could help us to decide the way forward. Ideally, we’d keep the best parts of work–including the sense of purpose and structure that it can bring–but lose some of the negative side effects, including the increasingly unequal distribution of income and the fact that a lot of work remains dangerous and demoralizing. If we reform work now, the American Dream can be that everyone gets to have the type of work that they want.