
This week we saw OpenAI call for an international AI oversight body—and got another glimpse of the tech’s dark side when a fake image of an explosion at the Pentagon spread on social media.

Does AI need a watchdog? OpenAI thinks so

[Source images: Rawpixel; Pixabay (circuit, computer chip)]

By Mark Sullivan

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company covering emerging tech, AI, and tech policy.

This week, I’m focusing on the growing push to increase oversight of companies developing generative AI (an idea that’s been endorsed by none other than OpenAI), and the White House’s own push for stronger AI regulations. 

If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com.

OpenAI calls for an international AI oversight body

In a new blog post, three of OpenAI’s top executives—CEO Sam Altman, president Greg Brockman, and chief scientist Ilya Sutskever—call for the establishment of a global body that would oversee and regulate the pace of development of “superintelligent” AI systems, which they say could present an “existential risk” to humankind. The proposed international organization, which might look something like the International Atomic Energy Agency, would also direct AI companies to begin sharing their knowledge and best practices for keeping large AI systems safe and “aligned” with human interests. 

The OpenAI execs suggest some interesting steps by which such a global organization might track the pace of AI development: “Any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.,” they write in the blog post.

The blog post echoes comments that Altman made during his testimony last week before a Senate subcommittee, where he called on the U.S. government to regulate the burgeoning AI industry, including the work of OpenAI. Cynics would argue that it’s easy for Altman and OpenAI to call for such forms of regulation and oversight at a moment when the U.S. and most other countries’ governments are struggling to understand the technology. 

Of course, there’s great disagreement, even in scientific circles, over the long-term “alignment” problem posed by AI systems that are far more intelligent than human beings. Some very smart people, including AI “godfather” Geoffrey Hinton, believe there’s a high risk that superintelligent systems might, in the not-so-distant future, deceive and even destroy human beings. Turing Award winner and Meta chief scientist Yann LeCun, on the other hand, believes that humans will continue to have full control over how AI systems act.

Biden administration keeps pushing toward broad AI policy

The Biden administration released a new set of plans Tuesday designed to study both the positive effects and risks of the new technology, which is maturing and proliferating far faster than most expected. As part of that plan, the White House is preparing a National Artificial Intelligence Strategy, a plan that calls for input from the public and private sectors to help inform the government’s future regulations and investments in generative AI.

The goal, according to the White House, is better understanding of everything from the national security implications of generative AI (misinformation, hacking, etc.) to its potential role in addressing climate change.

The White House Office of Science and Technology Policy (OSTP) will also be releasing an updated version of the National AI R&D Strategic Plan (the last update came in 2019), a road map that outlines the federal government’s priorities and goals for investments in AI R&D. The OSTP is asking the public for comment on the matter.


A fake fire at the Pentagon

An image depicting an explosion at the Pentagon, very likely generated using AI, spread rapidly on social media Monday (thanks in part to a tweet from the Russian state media outlet Russia Today). While police were quick to assure the public that no such explosion had taken place, the online furor was enough to cause a brief drop in the U.S. stock market.

The event was over shortly after it started, but it served as another sobering reminder of the tactics that new generative AI technology might enable in 2024. Next year will see major elections in the U.S., the U.K., India, Indonesia, and Russia; an estimated 1 billion people will cast votes. And there’s a good chance that various political actors will try to influence voters using high-tech disinformation created by AI, spread via social media platforms. 

AI tools greatly reduce the cost of creating new social content, so a reasonably well-funded political actor (such as the Russian state-affiliated agents that ran ads on Facebook in 2016) can try hundreds of versions of the same image and messaging until they find the right mix that moves voters to action at the polls. Neither Congress nor election oversight bodies in the U.S. have put in place new protections against such AI-generated ads, and many tech companies have laid off the staff whose job it would be to police such political disinformation.



ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.
