
New language in California’s privacy law would radically increase consumers’ power over AI

Under proposed regulatory language, California consumers could soon refuse automated decisions in areas like housing, education, and employment.

[Photos: Alex Proimos/Wikimedia Commons; Markus Spiske/Unsplash]

By Wilfred Chan | 4 minute read

Californians could soon have the right to “opt out” of a wide range of encounters with automated systems, based on newly proposed regulatory language that would offer residents some of the nation’s strongest safeguards against the use of artificial intelligence.

Under the proposal, California consumers could decline to be subjected to any business’s automated decision-making system in areas including housing, education, employment, criminal justice, healthcare, insurance, lending, or “access to essential goods, services, or opportunities.”

Consumers would also be able to opt out of any technology that monitors workers, job applicants, or students, tracks consumers in public places, processes the personal information of anyone younger than 16, or uses consumers’ data to train automated software.

The draft language was presented earlier this month by the California Privacy Protection Agency, which was created by a 2020 ballot proposition and given the authority to write regulations for automated decision-making systems, expanding on the landmark California Consumer Privacy Act enacted in 2018. But tech industry lobbyists are protesting the new language, claiming it could end up limiting technology as ordinary as Excel spreadsheets.

The opt-out provision “will essentially become a de facto ban on algorithms in the state of California,” says Carl Szabo, vice president and general counsel at NetChoice, a trade association whose members include Meta, Amazon, Google, Lyft, Airbnb, and TikTok.

Digital privacy advocates say the rules are necessary to protect consumers as AI seeps into more of our daily lives, from software that ranks job applicants to formulas that determine whether a prisoner should be given parole. “Every day these decisions will determine what schools you can go to, what kind of credit you get, what kind of healthcare you can get,” says Vinhcent Le, a tech equity lawyer who sits on the five-member board (one seat is currently vacant). Regulating these automated decisions is “critical, especially as these systems become more and more commonplace,” he says.

But there’s intense disagreement between rights advocates and industry lobbyists about how automated decision tools should be defined.

Le, who favors a broad definition to prevent loopholes, says “there’s no perfect way to define an automated decision system that’s going to make everybody happy.” The board’s draft language casts a wide net, defining automated decision-making technology as “any system, software or process . . . that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decision-making.” Californians would have the right to opt out from these systems, the board said in its presentation.

That kind of language reaches too far, says Szabo, the NetChoice lobbyist. “If you use one of those Coca-Cola machines where you can hit a button saying ‘Surprise me,’ if you’re using an Excel spreadsheet and you ask it to do intelligent summation, if you have services like auto-complete, those are all forms of automated decision-making,” he says.

Tech industry groups say that any process in which a human has the final say shouldn’t be considered an “automated decision tool” that needs regulation. That argument helped weaken high-profile legislation like New York City’s recently enacted Local Law 144, which targets software used by employers in hiring decisions. Unlike California’s draft language, New York City’s law limits the definition of automated systems to those that outweigh or overrule human decision-makers. Le says that fails to account for a phenomenon called automation bias: “Even if there’s a human in the loop, they’re just pressing ‘Okay’ to whatever the tool’s recommendation or output is,” he says.


Just as important as defining automated decision-making tools is deciding how to regulate them.

Privacy advocates have blasted New York City’s law for treating businesses with kid gloves, requiring them only to notify jobseekers and employees that automated decision tools are being used, without giving those people an alternative.

Not so with California’s proposal, which would offer consumers extensive opt-out rights, though the board has yet to say exactly what those would be. Le acknowledges that it would be “impossible” to let consumers refuse every kind of automated technology (fraud protection at a bank, for example). But the board is weighing different forms the opt-out could take, including replacing an automated tool with human review, modeled on a similar provision in Colorado, or carving out exceptions for certain services. “There is a lot of thought being put into how we make the contours of the opt-out practicable,” he says.

So when would the new rules take effect? There’s no set timeline yet, but any final language would have to be approved by the full board, which in addition to Le includes two data privacy lawyers and Alastair Mactaggart, the real estate developer turned privacy advocate who helped author the California Consumer Privacy Act. The public, lobbyists included, will have more opportunities to comment, but the board will have the final say.

Despite tech companies’ protests, Le believes that a broad definition of automated tools and strong opt-out language will make it into the finished rules. “Companies can comply with these regulations. They can figure out ways to do it. They just don’t want to,” he says. “In California, it’s already passed, so the writing’s already on the wall.”


ABOUT THE AUTHOR

Wilfred Chan is a Fast Company contributor whose work also appears in The Guardian, The Nation, and New York.

