
Fighting AI bias needs to be a key part of Biden’s civil rights agenda

Civil rights legislation addressing the harms caused by AI could be on its way.


By Mark Sullivan

When Ron Wyden, Cory Booker, and Yvette Clarke introduced their Algorithmic Accountability Act in 2019, which would have required tech companies to conduct bias audits on their AI, the three sponsors may have been a little early.

Although tech companies were already using artificial intelligence algorithms in a wide range of products and services touching the lives of millions, the issue of AI bias and its real-world effects remained abstract for many Americans, including many in Washington. The Trump administration was oblivious to the issue, and Republicans showed little interest in Wyden and Booker’s bill, which never received a committee hearing, much less a vote on the Senate floor. Clarke’s companion bill in the House also didn’t advance.


This is part of a series on big ideas that Biden can tackle in his first 100 days.


Now, Wyden, Booker, and Clarke plan to reintroduce their bills in the Senate and House this year. A very different political environment, including a Democratic-led Congress and White House, means their ideas may receive a much warmer reception. In addition, the new version of the Algorithmic Accountability Act will arrive after some recent high-profile instances of discriminatory AI and better public understanding of the ubiquity of AI in general—along with growing awareness that tech companies can’t be trusted to self-regulate (especially when they have a habit of trying to silence their critics).

A Protocol report citing unnamed sources suggests the Wyden-Booker Senate bill is seen by the White House as a model for future AI legislation. Whether the Wyden-Booker bill—or some version of it—advances will depend on how high a priority AI regulation is for the Biden administration.

Based on the statements and actions of both President Joe Biden and Vice President Kamala Harris, there may be an appetite for finally enacting guardrails for a technology that is increasingly part of our most important automated systems. But the real work may be passing legislation that both addresses some of the most immediately dangerous AI bias pitfalls and contains the teeth to compel tech companies to avoid them.

Model legislation

The Algorithmic Accountability Act of 2019 proposed that companies with more than $50 million in revenue, or with data on more than 100 million people, would have to conduct algorithmic impact assessments of their technology.

That means companies would be required to evaluate automated systems that make decisions about people by studying the system’s design, development, and training data, in search of “impacts on accuracy, fairness, bias, discrimination, privacy, and security,” according to the language of the bill.
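The bill describes what an impact assessment must cover, not how to run one. To make the idea concrete, here is a minimal Python sketch of one common fairness check an auditor might include: the disparate impact ratio behind the “four-fifths rule” used in employment law. Everything in it, from the loan-approval scenario to the group labels and data, is a hypothetical illustration, not anything drawn from the bill’s text.

# A minimal sketch of one fairness check an algorithmic audit might run.
# The scenario, groups, and data below are hypothetical illustrations.

def disparate_impact_ratio(outcomes_by_group):
    # Favorable-outcome rate per group (1 = favorable, 0 = not).
    rates = {
        group: sum(decisions) / len(decisions)
        for group, decisions in outcomes_by_group.items()
    }
    # Ratio of the lowest group's rate to the highest group's rate.
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical loan-approval decisions, grouped by a protected attribute.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],  # 37.5% approved
}

ratio, rates = disparate_impact_ratio(decisions)
print(f"approval rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the four-fifths rule's conventional threshold
    print("potential adverse impact: flag this system for review")

A real assessment would run many such checks, across the accuracy, privacy, and security dimensions the bill enumerates as well as fairness, but the basic shape, measuring outcomes by group and flagging disparities, is the same.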

“Redlining drawn by a computer does just as much damage as redlining policies drafted by a person.”

Senator Ron Wyden

In the previous Senate bill, Wyden and Booker chose to address “high risk” AI: systems capable of seriously affecting somebody’s life if they err or fail. Facial recognition, which has so far led to the false arrest of at least three Black men, is just one example. Specifically, the bill focuses on algorithms that make determinations from sensitive information (think personal data, financial information, or health data), from large pools of surveillance data, or from data that could affect a person’s employment.

The algorithmic audit represents an approach similar to the framework used in environmental impact assessments, where public or private entities study how a new project or technology might impact nature and people.

“Companies need to be accountable for harmful tech–after all, redlining drawn by a computer does just as much damage as redlining policies drafted by a person,” Wyden says in a statement to Fast Company. “It’s common sense to require companies to audit their systems to ensure that algorithms don’t do harm.”

“That’s why I plan to update the Algorithmic Accountability Act to incorporate feedback from experts and reintroduce it soon,” Wyden says.

One AI developer, Microsoft, supports federal legislation on ethical AI—at least in principle. Microsoft Chief Responsible AI Officer Natasha Crampton tells me that the impact assessment should be the starting point for looking deeply and honestly into AI’s real use cases and stakeholders. “That can really set you on the course to understand how the system might work in the real world, so that you can build in safeguards, preserve the benefits, and mitigate risks,” she tells me. Crampton says that Microsoft has been calling for legislation addressing ethics in AI since 2018.

The Algorithmic Accountability Act is one of the first bills to address the problem of bias from a federal level. It’s likely to be seen as an opening salvo in a long process of developing a regulatory regime for a complex technology that’s used in widely different ways.

Promising signs

President Biden and Vice President Harris said during their campaign that they intended to confront civil rights issues in multiple areas right out of the gate. One of those fronts might be the discriminatory development and use of algorithms in applications such as facial recognition.


Harris has already engaged with the problem of algorithmic bias. In the fall of 2018, Harris and seven other members of Congress sent letters to leaders at several agencies—including the Federal Bureau of Investigation (FBI), the Federal Trade Commission (FTC), and the Equal Employment Opportunity Commission (EEOC)—asking how they were dealing with bias in the use of facial recognition tech. The Government Accountability Office reported in 2019 that the FBI had made some progress on privacy protections and facial recognition accuracy. However, the EEOC’s Best Practices for Private Sector Employees still contains no specific guidelines on avoiding algorithmic bias, as the lawmakers requested. In addition, the FTC still lacks specific and comprehensive authority to address privacy violations related to facial recognition technology, although Section 5 of the FTC Act authorizes the agency to take action against deceptive acts or practices by developers of facial recognition tech.

“We see disparate outcomes coming out of algorithmic decision-making that disproportionately affect and harm Black and brown communities.”

Acting FTC Chairwoman Rebecca Kelly Slaughter

These details are important because the FTC would very likely be the agency most active in enforcing any new law addressing bias in AI. It’s clear the topic is important to Rebecca Kelly Slaughter, whom Biden designated as the agency’s acting chairwoman in January; she has been outspoken on AI justice and transparency issues over the past year.

“For me algorithmic bias is an economic justice issue,” she said during a recent panel discussion. “We see disparate outcomes coming out of algorithmic decision-making that disproportionately affect and harm Black and brown communities and affect their ability to participate equally in society.”

Another promising sign: When Biden named geneticist Eric Lander as director of the Office of Science and Technology Policy (OSTP) in January, he raised that post to a Cabinet-level position for the first time. This suggests that the new administration regards science and tech policy issues such as AI ethics and privacy as no less important than other Cabinet-level concerns such as defense, trade, and energy.

Biden also appointed two civil rights lawyers to top Department of Justice posts, indicating that the agency may take a critical look at the way technologies such as criminal risk assessment algorithms and facial recognition AI are used at all levels of law enforcement. Namely, he appointed Lawyers’ Committee for Civil Rights Under Law president Kristen Clarke as assistant attorney general for civil rights, and Leadership Conference on Civil and Human Rights president Vanita Gupta as associate attorney general. Both women have brought cases against or otherwise pressured large tech platforms including Facebook, Google, and Twitter, and both have led landmark cases alleging algorithmic bias and discrimination.

Another promising sign is the appointment of Dr. Alondra Nelson, who Biden named as the OSTP’s first deputy director for science and society. “When we provide inputs to the algorithm; when we program the device; when we design, test, and research; we are making human choices, choices that bring our social world to bear in a new and powerful way,” she said at a White House ceremony.

“I think the creation of Alondra Nelson’s role–which is deputy director of science and society–is noteworthy,” says Rutgers Law School visiting scholar and AI policy expert Rashida Richardson. “Just that title in itself means that at least [the administration] is signaling that there is some awareness that there is a problem.”

A law with teeth

Lawmakers may be looking to address this issue simply by regulating algorithms that are already in wide use and doing demonstrable harm. But equally important is passing legislation that actually prevents this harm from happening in the first place, perhaps by compelling an algorithm’s developers to actively correct problems before deployment.

Richardson fears that Congress might end up focusing on legislation that is easy to get passed but deals with AI bias in only a superficial way. For instance, she says, the government might create a set of development standards meant to rid AI of bias.

“Those are box-checking exercises, and usually lack any type of enforcement arm, but it gives the appearance that policymakers did something,” she tells me. “We won’t talk about the fact that no one followed them, and no one is monitoring to see if anyone in the industry is following them.”

“I don’t think that those types of arguments are acceptable anymore,” she says.

While the Algorithmic Accountability Act has been mostly praised in the AI ethics community, it too lacked teeth in an important way. The original bill’s language allowed AI developers to keep the results of their bias audits under wraps.

This lack of transparency isn’t ideal, Richardson says, “because then it will not engender change, either by those creating the technologies or those using them. But having some type of fully public or semi-public process, or at least the option of public consultation, can allow for the same thing that happens in the environmental impact assessment framework.”

That would allow the government (or researchers, or watchdogs) to point out use cases that weren’t considered, or certain demographics that were overlooked, she says.

Wyden’s and Booker’s offices are now considering changing the language of the bill to require developers to disclose, in some fashion, the results of their audits.

The biggest headwind against getting a law passed may be some lawmakers’ limited understanding of the technology itself, Richardson says. She recalls a 2019 Senate hearing she participated in that focused on optimization algorithms used by large tech companies: “It was very clear that very few senators understood the subject matter.”

“So I feel you have a problem of a lack of urgency due to the lack of understanding, and also a very myopic understanding of the scope of the issue,” she says, “which makes it hard as an outside observer to even speculate on where one would start and what is feasible.”



ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

