
Two of the biggest global markets have recognized the risk of waiting for regulation—but will their stopgap measure make a difference?

Will the new EU-U.S. voluntary code of conduct on AI work to rein in the tech?

[Images: NSA Digital Archive/Getty Images; Guillaume Périgois/Unsplash]

By Chris Stokel-Walker | 3 minute read

Big tech moves fast, and with AI supremacy being such a grand prize for businesses, companies are pouring huge resources into developing new tools and models. But big tech also abides by the motto “move fast and break things”—an approach politicians and regulators generally try to avoid.

Which may explain why the EU and U.S. announced this week that they are developing a code of conduct, a draft of which European Commission vice president Margrethe Vestager said would be produced “within weeks.”

Both the EU and the U.S., alongside a slew of other countries, are currently considering fully fledged rules to regulate AI. Earlier this year, the U.S. government put out a consultation on what those rules should look like; the EU has agreed in principle on the language of an AI Act, which will go to a vote of the European Parliament in June. The joint effort will see two major global players, which have not always seen eye to eye on regulating tech, work together to develop rules they believe ought to be followed by governments and AI giants worldwide.

The code of conduct appears to be a stopgap measure designed to draw lines in the sand while waiting for the passage and implementation of Europe’s rules, says Andres Guadamuz, a law and technology academic specializing in AI at the University of Sussex. “It’s an interesting development, particularly given that the EU is in the middle of passing the AI Act, which will be a considerable piece of regulation.”

It is also indicative, Guadamuz thinks, of the EU trying to nudge the U.S. toward intervening more quickly in the world of AI—especially important, given that most of the leading AI companies are based in the U.S. “To me this is recognition from the EU that the U.S. is unlikely to do something similar, so they may as well try to get some form of soft regulation,” he says.

The geopolitical development is one that should be welcomed as a starting point for further work, says Catalina Goanta, associate professor in private law and technology at Utrecht University. “Even if a code of conduct lacks teeth, it is a good international relations exercise for the development of more international standards,” she says. “It’s a reality that technology has brought about new implications and connectivities related to globalization, some of which we are merely beginning to understand.”

However, Goanta worries that the code of conduct will ultimately be less powerful than its creators hope, as she says was the case with the EU’s code of practice on disinformation, which largely works in the platforms’ favor. Absent acts of parliament or government, all such codes are voluntary, and only as good as the behavior of the companies that sign up to them. If a company were to withdraw from the code of conduct on AI, there would at present be no regulatory requirement to keep it honest. “Codes of conduct and self-regulation have been rightly derided by internet regulation experts,” says Guadamuz. “They’re often ignored, or as with Twitter, abandoned if it becomes convenient.” Elon Musk’s social media platform just this week withdrew from the EU’s voluntary agreement on disinformation (though the company is still beholden to the EU’s Digital Services Act, which places statutory requirements on it to tackle fake news).

Something, then, is needed in the absence of real regulatory might. The EU’s AI Act appears to be the most advanced set of rules designed to rein in AI, moving to a vote at the European Parliament this month. However, that parliamentary stage is just the start of the legislative process; even if all goes well, the act would likely not take effect until 2025, based on the timeline of similar legislative initiatives in the past. Goanta is unconvinced that the rules can be drawn up as quickly as the code of conduct’s proponents believe—at least not in the depth required to be meaningful.

Meanwhile, things continue to move quickly in the world of AI. In the last week, some of the field’s biggest names, including OpenAI CEO Sam Altman and Microsoft’s chief scientific and technology officers, have signed a statement warning that the technology could be an existential risk. 

Goanta worries that officials are actually reacting too hastily to these recent warnings, rushing to put forth some kind of agreement before they’ve fully understood the tech itself. “Drafting an AI code of conduct within weeks?” she says. “Forget about it.”



ABOUT THE AUTHOR

Chris Stokel-Walker is a freelance journalist and Fast Company contributor. He is the author of YouTubers: How YouTube Shook Up TV and Created a New Generation of Stars and TikTok Boom: China's Dynamite App and the Superpower Race for Social Media.

