
Chuck Schumer’s effort to ‘get ahead of’ AI may already be falling behind

Sources say Schumer’s AI push is overlooking the immediate risks of AI in favor of a light-touch approach that has industry support.

[Photo: Drew Angerer/Getty Images]

By Issie Lapowsky

In mid-April, Senate Majority Leader Chuck Schumer’s office announced in a press release that, after months of discussing a possible regulatory framework with experts in the field, it was launching a new effort “to get ahead of artificial intelligence” and develop comprehensive legislation to do so.

But nearly a month after that announcement, sources involved in those conversations say Schumer’s office has little to show for the discussions beyond high-level ideas that fall short of what’s needed to address the most immediate risks posed by AI systems.

“They seem to have pretty much nothing,” says one source from a civil-society group involved in the conversations, who was not authorized to speak about the private discussions. “They could not even answer the most basic questions about the scope of the law.”

According to sources at the discussion table, the four priorities Schumer’s office outlined in these talks—protecting American innovation, requiring more disclosure from AI developers, securing data, and aligning AI systems with American values—all line up with what industry leaders, including OpenAI’s Sam Altman, have been calling for. But sources said Schumer’s office has appeared wary of regulations that might be less palatable to the industry—ideas like data minimization requirements, blanket bans on certain applications of AI technologies, and liability for harms caused by AI. It also wasn’t clear, the civil-society source said, whether this legislative proposal would apply to both government and commercial uses of AI. 

Schumer’s office did not respond to Fast Company’s requests for comment.

At the center of the concerns from people involved in these discussions is a fear shared by members of both the public and private sectors that Congress has become so preoccupied with the long-term, existential threats of AI that it’s skipping right over the immediate risks that AI ethicists and civil rights advocates have been warning about for years: things like algorithmic profiling by law enforcement and AI-enabled bias in school admissions, healthcare, insurance, and other sensitive areas.

In March, an organization called the Future of Life Institute published a letter signed by the likes of Elon Musk and Steve Wozniak that called for a “pause” on giant AI experiments and warned that the technology’s unchecked development could lead to the “loss of control of our civilization.” The Future of Life’s missive was immediately followed by a response from AI ethicists at Distributed AI Research Institute (DAIR), who countered that this kind of long-term thinking misses the forest for the trees. “[T]he focus of our concern should not be imaginary ‘powerful digital minds,’” DAIR’s letter reads, but rather the “very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.”

These same tensions are playing out on the Hill. “Coming from a place of fear-mongering about future AI harms can distract from the many things that we can do in the here and now to tackle the ways that AI is affecting the lives of people around us today,” says Sarah Myers West, managing director of the AI Now Institute, which has not been part of conversations with Schumer’s office so far. 

Myers West says she has been disappointed to see how much emphasis Schumer’s effort appears to be placing on interventions like AI audits, which place the onus on what will inevitably be under-resourced third parties to police the technologies of multibillion-dollar tech giants. “That’s a fairly weak foundation from which to build from,” Myers West says.

AI Now recently released its own policy agenda for addressing the risks posed by AI. It makes the case that the best way to slow down the rapid deployment of AI is to focus on the same reforms that Congress has been eyeing to address Big Tech’s power for years. That includes enforcing competition laws and curbing companies’ data-collection capabilities. AI Now also calls on Congress to adopt the kind of bright-line bans on the use of biometric data that have already been passed in local jurisdictions. 

“We have lots of evidence where regulatory friction has been introduced and has been quite effective,” Myers West says, pointing to local bans on facial-recognition technology.

One fear among sources involved in the conversations with Schumer’s office is that, in its rush to respond to the shiny new object that is ChatGPT, Congress is overlooking some of the existing policy ideas for checking AI’s power and distribution, which have already benefited from years of expert input. 

Last year, for example, the White House Office of Science and Technology Policy put out its Blueprint for an AI Bill of Rights, following a year of public discussions. That blueprint included among its principles the rights to data privacy and freedom from algorithmic discrimination, two concepts that sources say are being sidelined in the Schumer initiative.


Congress has also drafted the American Data Privacy and Protection Act, which explicitly forbids the collection, processing, or transferring of data in a manner that could lead to discrimination. That bill would also require companies to conduct algorithmic impact assessments. Civil rights advocates who worked on that legislation hope to see similar provisions in whatever Schumer’s office comes up with. “We would hope that there would be strong civil rights protections, including an anti-discrimination provision,” says David Brody, managing attorney of the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law.

Other groups, including the Future of Life Institute, have more recently proposed licensing and monitoring requirements for any company seeking to accumulate massive amounts of computing power, as well as clear liability for developers when their AI systems cause harm. “This will drive much more responsible behavior by the developers themselves, and it also ensures that when they give explicit or implicit permission for a deployer to use their technology, they are fastidious in their development of contractual obligations to ensure that deployers are using the technology responsibly,” says Landon Klein, director of U.S. policy for the Future of Life Institute. But Klein acknowledges that the liability piece will be “more difficult to implement, in large part because we expect to see a lot more pushback.”

While some have been disappointed with the approach Schumer’s office is taking, there are plenty of others who see value in using areas of consensus as a starting point. As former acting director of the White House Office of Science and Technology Policy, Alondra Nelson spearheaded the creation of the AI Bill of Rights. Nelson says she has not been in touch with Schumer’s office regarding his efforts, but views the focus on areas of mutual agreement as “a really strong start.”

“We know we need more transparency into data. We know we need more accountability from companies with regard to how and when they put products out and whether or not they’ve been tested and red-teamed,” Nelson tells Fast Company. “If I had a magic wand, I might venture some prioritization. But I think the thing that we can get into the end zone on the Hill is the thing that we need to try to do.”

The Majority Leader is also not alone in pursuing some sort of action on AI. In late April, top officials at the Federal Trade Commission, the Department of Justice, the Consumer Financial Protection Bureau, and the Equal Employment Opportunity Commission issued a joint pledge, committing to apply existing laws prohibiting deceptive trade practices and discrimination, among other things, to emerging threats presented by AI. “There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition,” FTC chair Lina Khan wrote in a statement.

The White House, meanwhile, convened a meeting last week with the CEOs of OpenAI, Alphabet, Microsoft, and Anthropic to discuss responsible approaches to AI development with President Joe Biden and Vice President Kamala Harris, among other government officials. As part of that meeting, top AI companies committed to participate in a public evaluation of their systems at the DEF CON hacker conference. The Office of Management and Budget is also working on draft policy guidance that will determine how the U.S. government uses AI systems.

That kind of norm-setting by the federal government could be instrumental, Nelson says. “The federal government can be an exemplar here. It is one of the world’s largest employers. It’s one of the world’s largest consumers. . . . That can have a pretty strong market-shaping impact.”

But ultimately, it’s Congress that gets to make laws in a field where everyone, including the industry itself, is calling for new rules. If those laws set a weak standard, some fear, this congressional effort to “get ahead of artificial intelligence” could wind up setting the country back instead.



ABOUT THE AUTHOR

Issie Lapowsky is a journalist covering the intersection of tech, politics, and national affairs.
