
Harvard researchers say these 9 steps can protect workers in the age of AI

A new report from Harvard Law School’s Center for Labor and a Just Economy tracks how companies are using AI—and what steps should be taken to protect workers.

[Photo: cottonbro studio/Pexels]

By AJ Hess | 3 minute read

Many workers are afraid of artificial intelligence—and for good reason. Some experts say the technology has the power to impact, if not eliminate, workers’ daily tasks, jobs, and privacy. When Fast Company polled LinkedIn users about what they think is the most important workplace issue of 2024, 36% said that the ethical integration of AI was their top concern. 

Harvard Law School’s Center for Labor and a Just Economy, or CLJE, recently released a report on the challenges facing workers in the age of AI, prefacing it by stating that its findings do not “come from a fear of technology or its myriad uses.”

Instead, the report details the rise of “increasingly sophisticated tech-enabled management and production tools to track human activity” and provides actionable steps leaders can take to build a “proactive model of worker participation that future-proofs labor law, tech policy, and democracy.” 

Here are the report’s recommendations for how to protect workers in the age of AI: 

  1. Mandate an AI Impact Monitor elected in every workplace where AI is being used to monitor, track, surveil, or assess workers. The goal of this would be to ensure that workers in every workplace have access to a knowledgeable person who can give accurate information about substantive AI safety issues and legal rights; help with worker reporting/whistleblowing on these issues; and become an information hub for workers, regulators, and the public.
  2. Create sectoral commissions, consisting of representatives of labor and management, that would negotiate baseline AI safety standards for all firms in the sector. These baseline standards would be minimum standards across the sector and would be enforced through the operation of the impact monitors.
  3. Mandate access to a human being when an algorithm makes a status-altering decision such as firing.
  4. Ban employers from using AI to advocate against collective bargaining rights, including a ban on employers embedding messages about workers’ exercise of their collective bargaining or concerted activity rights in any AI-driven interface that workers are required to use to accomplish work tasks.
  5. Require meaningful transparency and access to information about the technologies being used to monitor, manage, and surveil workers.
  6. Require the [National Labor Relations Board] to develop meaningful penalties that will deter employers from abusing surveillance technologies.
  7. Require companies to provide a safe, digital communications channel. Specifically, we recommend ensuring that whatever technology the employer uses to communicate with workers be made available to workers for their own organizing activities. In addition, we recommend that the law require employers to establish digital meeting spaces (i.e., private forums for online communications).
  8. Update [Occupational Safety and Health Administration’s] definition of “safe and healthy workplace” to include the right to be free from the harm caused by AI. 
  9. Appropriately classify workers so that gig workers can access their rights to redress grievances, organize as needed, and seek protection under current labor law and any AI-specific protections. 

According to the report, the most common ways companies currently use AI include task monitoring, in which AI tracks employee activities and provides real-time updates; time tracking, in which AI monitors how much time workers spend on tasks and ensures they adhere to deadlines; and employee surveillance, in which AI-powered tools monitor how employees communicate with one another.

As an example of these common applications of AI, the CLJE team points to Humanyze, a tech company that has equipped employees with microphones (which listen to workers’ conversations), Bluetooth and infrared sensors (which track where workers are), and accelerometers (which record workers’ movements via vibration).

Researchers also referenced Amazon in the report, mentioning that many of Amazon’s warehouse workers have their movements, task speed, and productivity tracked.

But while Amazon may be investing in and implementing AI across its workforce, the company has previously insisted that its goal is not to reduce head count. “We don’t see automation and robotics as vehicles for eliminating jobs,” Amazon spokesperson Mary Kate Paradis recently told Fast Company’s Pavithra Mohan. “Our workforce has more than doubled in size since the beginning of 2019, growing to over 1.5 million people globally—all while we’ve expanded the use of robotics in our operations facilities.”

Chris Smalls, president of the Amazon Labor Union, told Mohan, “If [Amazon] can create a [system] where the machinery is helping reduce injuries, I support it.” But Smalls said he hopes Amazon workers will get a say in how AI tech is deployed. “Replacing jobs? Not so much. I just hope to see that these jobs are unionized and they have some say in how the AI and technology is being incorporated.”

ABOUT THE AUTHOR

AJ Hess is a staff editor for Fast Company’s Work Life section. AJ previously covered work and education for CNBC.