
Backlash is growing against GPT-4, and one AI ethics group wants the FTC to step in

The Center for AI and Digital Policy has filed a federal complaint, adding to a wave of criticism against the new generative AI.

[Photo: Getty Images]

BY Clint Rainey | 2 minute read

An ethics group devoted to artificial intelligence has declared GPT-4 to be “a risk to public safety,” and is urging the U.S. government to investigate its maker, OpenAI, for endangering consumers.

The Center for AI and Digital Policy (CAIDP) filed its complaint on Thursday with the Federal Trade Commission (FTC), on the heels of an open letter earlier this week calling more generally for a moratorium on all generative AI. Some 1,200 researchers, tech executives, and others in the field signed that letter—including Apple cofounder Steve Wozniak, and (somewhat more head-scratchingly) OpenAI cofounder Elon Musk. It argued for a minimum of a six-month pause on progress to give humans a chance to step back and do a cost-benefit analysis of this technology that’s developing at breakneck pace and enjoying runaway success.

Marc Rotenberg, president of CAIDP, was among the letter’s signers. And now his own group has piled on by making the case that the FTC should take a hard look at OpenAI’s GPT-4—a product that presents a serious enough liability for OpenAI itself to have recognized its potential for abuse in such categories as “disinformation,” “proliferation of conventional and unconventional weapons,” and “cybersecurity.”

“The Federal Trade Commission has declared that the use of AI should be ‘transparent, explainable, fair, and empirically sound while fostering accountability,’” the complaint says. “OpenAI’s product GPT-4 satisfies none of those requirements,” it adds, before essentially calling the government to arms: “It is time for the FTC to act.”

GPT-4’s alleged risks, per CAIDP’s complaint, include the potential to produce malicious code, to reinforce everything from racial stereotypes to gender discrimination, and to expose users’ ChatGPT histories (which has happened once already) and even their payment details. The complaint argues that OpenAI has violated the FTC Act’s rules against unfair and deceptive trade practices, and that the FTC should also look into GPT-4’s so-called hallucinations—when it falsely and often repeatedly insists a made-up fact is real—because they amount to “deceptive commercial statements and advertising.” CAIDP argues OpenAI released GPT-4 for commercial use “with full knowledge of these risks,” which is why a regulatory response is needed.

To resolve these issues, CAIDP asks the FTC to ban additional commercial deployment of the GPT model, and demand an independent assessment. It also wants the government to create a public reporting tool like the one consumers can use to file fraud complaints.

GPT-4 has attracted a near-messianic following in certain tech circles—a fervor that probably amplified the need critics feel to sound the alarm over generative AI’s ubiquity in culture. However, OpenAI’s way of carrying itself has also given critics ammo. OpenAI isn’t open source, so it’s a black box, some complain. Others note that it’s copying tech’s worst impulses in the areas that are visible, like using Kenyan laborers who earn less than $2 per hour to make ChatGPT less toxic, or by seemingly hiding behind a “research lab” halo to ward off calls for greater scrutiny.

OpenAI seems to have understood these stakes, and even predicted this day would come. For a while now, CEO Sam Altman has been addressing broader fears of AI essentially being let off the leash, admitting that “current generation AI tools aren’t very scary,” but that we’re “not that far away from potentially scary ones.” He has acknowledged that “regulation will be critical.”

Meanwhile, Mira Murati, who as CTO leads the strategy behind how to test OpenAI’s tools in public, told Fast Company when asked about GPT-4 right before its launch: “I think less hype would be good.”



ABOUT THE AUTHOR

Clint Rainey is a Fast Company contributor based in New York who reports on business, often food brands. He has covered the anti-ESG movement, rumors of a Big Meat psyop against plant-based proteins, Chick-fil-A's quest to walk the narrow path to growth, as well as Starbucks's pivot from a progressive brand into one that's far more Chinese.
