
Everything you need to know about the government’s efforts to regulate AI

A guide to the “dizzying array” of bills, frameworks, proposals, and other efforts to control and capitalize on a new generation of artificial intelligence.


By Issie Lapowsky | 7 minute read

The public launch of ChatGPT nearly a year ago sent political leaders around the world scrambling to spur generative AI development within their own countries—and to create guardrails to ensure the technology doesn’t run amok. 

In the U.S. alone, members of Congress have proposed a whirlwind of bills—some more serious than others—intended to regulate everything from who gets to develop AI to what kind of liability companies face when things go wrong; the Biden administration, meanwhile, has issued executive orders, wrested voluntary commitments from tech firms, and begun the bureaucratic slog that is the federal rulemaking process. Overseas, the rate of progress has been even more frenzied, with the European Union expected to soon pass the world’s first set of comprehensive rules governing AI.

If the swell of proposals seems tough to keep track of, that’s because it is. The Information Technology Industry Council (ITI), a leading tech lobbying group, tells Fast Company it is currently tracking more than 50 federal bills that have either been introduced or proposed in draft form, in addition to more than 50 initiatives—including executive orders, regulatory efforts, and more—begun by the Biden administration. And that’s to say nothing of what’s happening in the states or the endless run of hearings, public pronouncements, closed-door forums, and other interventions in Washington.

“It’s really a dizzying array of things on the Hill and in the administration,” says John Miller, ITI’s chief legal officer, noting that the organization is tracking at least 150 distinct proposals. 

To help make sense of it all, we’ve outlined some of the dominant approaches to AI regulation that federal lawmakers in the U.S. are pursuing and the leading proposals pushing those approaches forward. 

Broad oversight bills and frameworks

There are a number of proposals competing to set the vision for what AI-focused legislation ought to include. These are not so much fully formed bills as they are broad frameworks that congressional leaders hope will guide the legislative process. But they’re worth watching all the same.

The most prominent of these frameworks, Senate Majority Leader Chuck Schumer’s SAFE Innovation Framework, calls for rules that ensure security and accountability, protect innovation, prioritize democratic values, and allow for information sharing between AI developers and the federal government. 

Miller acknowledges there’s “not a ton of meat on the bones at this point” in Schumer’s proposal, which was introduced back in June. But given his position as Majority Leader, Schumer will play a critical role in deciding what legislation ultimately reaches the Senate floor, so any priorities he sets are worth watching. So far, his top priority appears to be encouraging AI innovation in order to stave off competition from China. “Innovation must be our north star,” he said when he introduced the framework. That belief has prompted Schumer to take a more collaborative approach to the legislative process, including hosting a recent closed-door meeting with top tech executives (which drew skepticism from those who believe industry involvement may impede substantive regulation).

Others have taken a more detailed and heavy-handed approach to designing these frameworks. Earlier this month, Sen. Richard Blumenthal, chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, and Sen. Josh Hawley, the subcommittee’s ranking member, proposed their own Bipartisan Framework on Artificial Intelligence Legislation. It lays out a slew of specific recommendations, including a licensing regime for sophisticated models like GPT-4 and for models used for risky purposes like facial recognition. The framework also calls for transparency measures, export controls, and protections for minors.

Several other bills attempting to establish some form of industry oversight have been proposed or floated in draft form. Among them is the National AI Commission Act, proposed by Reps. Ted Lieu, Ken Buck, and Anna Eshoo, along with Sen. Brian Schatz, which would create a new bipartisan commission to study the country’s current approach to AI and issue recommendations.

Skeptics say many of these proposals are merely retreading old turf. After all, the White House has already published its Blueprint for an AI Bill of Rights, which was the product of a year of conversations with government, technology, and civil society leaders. The National Institute of Standards and Technology also has its own AI Risk Management Framework. “We are not in a place to completely start from scratch,” says Ben Winters, senior counsel at the Electronic Privacy Information Center. “We know, as a country, what we need to do.”

Bills protecting global competitiveness and security

One driving force behind the regulatory interest in AI is the fear of this powerful technology being developed or abused by a foreign adversary. To that end, a number of bills have been proposed that are designed to protect U.S. security interests and competitiveness around AI and other breakthrough technology. Earlier this year, Sen. Mark Warner, who chairs the Senate Intelligence Committee, and Sen. John Thune introduced the bipartisan RESTRICT Act, which would empower the Department of Commerce to review and prohibit any technology transactions with foreign powers that might pose an unacceptable national security risk. While the bill’s announcement focused on TikTok, Miller of ITI says it could just as easily apply to AI tools and systems.

Warner also joined with Sens. Michael Bennet and Todd Young on the Global Technology Leadership Act, a bipartisan bill that would create a new Office of Global Competition Analysis that assesses which technologies matter most to national security and how U.S. capacity stacks up. Meanwhile, the bipartisan, bicameral CREATE AI Act would establish shared computational resources for researchers and students who might not have access to the computing power needed to train large AI models. The goal is to democratize access to AI development to give developers in the U.S. an edge.


Bills targeting AI and Section 230

Section 230 created the internet as we know it. It’s the law that both ensures tech platforms won’t be liable for the content that their users post and protects platforms’ ability to moderate content as they see fit. But while Section 230 shields platforms from being held accountable for third-party content, the question of whether Section 230 would also protect a company like OpenAI from being sued for something that ChatGPT writes is a different one altogether. After all, ChatGPT is generating that content all on its own. 

OpenAI’s CEO, Sam Altman, for one, has already said that Section 230 is not “the right framework” for AI, and many experts agree Section 230 would not apply to a tool like ChatGPT. But Sens. Blumenthal and Hawley want to leave nothing to chance. In their No Section 230 Immunity for AI Act, introduced in June, the lawmakers propose amending Section 230 to explicitly strip immunity from AI companies in claims regarding the use of generative AI.

Bills focused on transparency

Several bills have emerged taking aim at the issue of transparency in AI. Though they take different forms, they would all impose some sort of disclosure requirement when AI systems are used in certain ways. For instance, Sen. Gary Peters, chair of the Homeland Security and Governmental Affairs Committee, introduced the TAG Act, which would require more disclosures when people are interacting with automated systems run by government agencies. It would also create a government appeals process for people who end up on the wrong end of a critical decision made by an AI system.

A month earlier, Rep. Yvette Clarke introduced the REAL Political Ads Act, which would expand disclosure requirements for political ads to include information about when ads use AI-generated videos or images. Lately, some lawmakers have gone even further. Earlier this month, a bipartisan group of senators introduced legislation that would prohibit the use of AI to create deceptive audio, images, or video of candidates in election ads altogether.

Regulatory actions focused on consumer protection

Outside of Congress, federal agencies are also hard at work applying existing regulations to new AI technology. The Federal Trade Commission, Consumer Financial Protection Bureau, Department of Justice, and the Equal Employment Opportunity Commission issued a joint statement earlier this year, announcing their intention to use their existing authorities to “protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”  

The FTC has already opened an investigation into OpenAI to find out if it has engaged in “unfair or deceptive” practices that may have caused harm to consumers. The National Telecommunications and Information Administration, meanwhile, asked for comments earlier this year on what AI rules it could implement on its own without new legislation.

Voluntary safety commitments from AI companies

At the same time these proposals are swirling around D.C., the White House has already secured a number of voluntary safety commitments from leading tech companies, including Anthropic, Google, Adobe, IBM, OpenAI, and others. The commitments ranged from agreeing to submit their systems to internal and external security screenings to vowing to develop tools to identify when content is AI-generated.

Such promises are encouraging, but they also rely on companies’ self-regulation. To truly hold these companies to those commitments, it will take the force of law. For now, that’s still a work in progress.


ABOUT THE AUTHOR

Issie Lapowsky is a journalist covering the intersection of tech, politics, and national affairs.
