
Research shows a marked decline in excitement about AI this year, and a rise in concern about its societal impact.

Americans use AI more than ever, and they trust it less than ever

[Images: vackground.com/Unsplash; Shoeib Abolhassani/Unsplash]

By Mark Sullivan | 5 minute read

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

Trust in AI is eroding in the U.S.

A new survey from the Pew Research Center shows that more than half (52%) of Americans are now more “concerned” about the effects of AI than they are “excited” about it. Only 10% say they’re more excited than concerned, while 36% report an equal mix of these emotions. The Pew data shows a rapid shift in public opinion about AI: Last December, only 38% of those surveyed said they were more concerned than excited about the technology. That’s an increase of 14 percentage points in just eight months (Pew conducted the latest survey in early August).

Context could be key here. The first Pew survey was conducted shortly after OpenAI launched its ChatGPT chatbot, and it’s possible that some respondents had caught wind of the bot’s “surprising” conversational skills, or had even used it themselves. In the months since, however, the mainstream media has devoted considerable attention to the debate over AI’s near- and long-term dangers. It’s quite possible that some of the survey respondents were also aware of major AI researchers such as Geoffrey Hinton sounding the alarm about AI’s capabilities, or had read the industry-wide letter calling for a pause in AI development.

In fact, the latest survey results suggest that people become more skeptical of new AI systems the more they know about them. Respondents who had “heard a lot about AI” are 16 percentage points more likely now than they were in December 2022 to express greater concern than excitement about it. Among this AI-aware group, concern now outweighs excitement by 47% to 15%, Pew reports.

All of this points to the need for sensible regulation to make sure we can reap the benefits of AI without suffering its worst outcomes. The government might, for example, require that tech companies apply for a permit before developing AI models above a certain parameter size. It might also require that image generators embed a nonremovable watermark in every image they produce.

In a separate survey, the Pew researchers found that 67% of people who had “heard of ChatGPT” say they’re concerned that the government won’t go far enough in regulating chatbot use. On the other hand, 31% said they feared that the government would go too far, perhaps cooling the engines of innovation in the burgeoning industry.

The AI industry is a funnel to Nvidia’s bank account 

There was a time when Apple made by far the most money from the mobile computing revolution, thanks to its hardware (iPhones). Here, in the AI revolution, something similar is happening: The company providing almost all the hardware, Nvidia, is profiting the most. Nvidia, which makes the $10,000 A100 graphics processing units used to train 95% of the big AI models, saw the AI boom coming years ago—long before ChatGPT—and invested its R&D dollars accordingly. The company also created a software layer that quickly moves data around the GPU chip, and between chips in different servers, so that the chips share compute tasks evenly and work constantly.

AI companies have been scrambling this year to procure enough Nvidia servers to carry out their ambitions for building bigger models and finding ever-more lucrative applications for their AI. Dylan Patel writes in the SemiAnalysis newsletter that the AI industry has become a place of the “GPU-poor” and “GPU-rich,” with only the largest and wealthiest companies able to afford enough Nvidia servers to do worthwhile, envelope-pushing research. There’s truth to this. 

Nvidia’s stock price is now up 234% this year after the company signed a partnership deal with Google earlier this week, making it the S&P 500’s top-performing stock of 2023. But its share price has fluctuated wildly as investors struggle to understand how fast and how far Nvidia’s rocket ride can go. “I am hearing of a lot of new funds starting to explore or get into Nvidia,” Creative Strategies principal analyst Ben Bajarin tells me. “So they [Nvidia] are expanding their investor pool because of how well-positioned they are, but that also increases volatility sometimes.”

Was ChatGPT an accident?

Funny how things work in the tech world. The launch of OpenAI’s ChatGPT in November 2022 almost seems like the “big bang” event that set the tech industry off toward its AI future. And yet, the immediate popularity of the chatbot appears to have been an accident, or at least was unexpected, a notion OpenAI’s CEO Sam Altman and chief scientist Ilya Sutskever have both nodded at publicly. And looking back at the ChatGPT website as it appeared in November 2022, it would seem OpenAI saw the chatbot as a fun little sandbox tool for programmers. “Well, certainly we didn’t expect the success that we had,” OpenAI chief operating officer Brad Lightcap tells me. 

And it’s not like the world had never seen large language model (LLM) technology in action. Google CEO Sundar Pichai demonstrated the conversational and task-doing skills of his company’s LaMDA models at Google I/O in 2021, more than a year and a half before ChatGPT showed up. Why didn’t Google release a LaMDA-powered chatbot to the public? As the narrative goes, Google had serious safety and legal concerns about doing so.

OpenAI may have unlocked the public’s imagination about AI by simply putting a friendly user interface on the front of a large language model. “I think in retrospect it maybe feels a little more obvious that you make these systems more personable,” Lightcap says. “You give them an interface in a format that is more intuitive for people, and people will use them.”


ABOUT THE AUTHOR

Mark Sullivan is a senior writer at Fast Company, covering emerging tech, AI, and tech policy. Before coming to Fast Company in January 2016, Sullivan wrote for VentureBeat, Light Reading, CNET, Wired, and PCWorld.

