
POV

Artificial intelligence has a brewing crisis of trust

If the AI industry doesn’t work to build consumer confidence, it’s never going to change the world.


[Photo: Milad Fakurian/Unsplash]

BY Michael Gao

You could be forgiven for thinking that generative AI came out of nowhere. Sure, the last two years introduced us to the likes of DALL-E and Midjourney, and we were all impressed by their ability to conjure vivid scenes from written prompts. But—without insulting the creators—they were toys, and largely used as such. 

After the advent of ChatGPT, we were suddenly using algorithms to write essays, research complex topics, write code, and even craft poems. Things no longer seemed so trivial. Generative AI was useful, and it was everywhere.

Google quickly released its own generative AI offering. So did Mozilla. And Grammarly. And Discord. And seemingly every other tech company under the sun.

For those not blinkered by the technological marvel before them, this pace of adoption was as rapid as it was unsettling. A technology—unproven and unreliable—was being shoehorned into every corner of the web with little thought to the potential costs and harms. 

The problem of overnight success

In the right hands, generative AI has the potential to deliver meaningful benefits to every corner of society, from businesses and individuals to students and teachers.

But I do think it is important to recognize the potential dangers of generative AI, particularly in its earliest and most unrefined states. With technology companies racing to bring generative AI products to market, not enough attention has been paid to the harms this new technology could cause, or to how to mitigate them.

It’s in the best interests of tech companies—not to mention consumers and society at large—to address these potential issues now. Avoiding harm is a virtue in itself, but it’s also good business sense. A problem deferred is often a problem amplified. Given the bipartisan loathing of Big Tech, one foul-up could see the weight of the federal government (or, equally likely, the European Commission) come crashing down on the nascent generative AI space, with long-term repercussions.

It’s not merely that these products have the potential to fundamentally reshape the economy—and, in particular, the job market—faster than legislators and central banks can react. Admittedly, that’s a huge problem and one that demands immediate scrutiny, followed by decisive action. More pertinent is the effect these generative AI models may have on our society, and especially our democracy. 

The cost and complexity involved in developing a product like ChatGPT or LaMDA means that it’s only really within reach for a handful of technology firms. And so, we’re faced with a best-case scenario where the competitive tech ecosystem is replaced with an oligopoly of Google, Microsoft, and OpenAI. The worst-case scenario would see these companies leverage these products to influence public sentiment, using their trust and market dominance to shape legislation and elections in their favor. 

Generative AI’s growing pains

Given the technological marvel on show, it’s all too easy for consumers to be blinded to the fact that these models aren’t infallible. The launch of Google’s Bard product demonstrated this, with the algorithm providing manifestly wrong answers to simple questions. Bing has faced similar problems, providing incorrect summaries of financial reports and vacuum specifications, and in one case, even getting the date wrong.

These early conversational generative AI models often suffer from the same human flaw endemic among cable news hosts and first-year philosophy undergraduates: they’re loath to admit they don’t know the answer to a question. And so they make things up, providing answers that sound right on a superficial level but, upon closer inspection, are far removed from reality.

In essence, generative AI is the embodiment of Stephen Colbert’s concept of “truthiness,” which is defined as “the quality of seeming or being felt to be true, even if not necessarily true.” While it’s possible to dismiss this as a kind of growing pain that accompanies every early-stage technology, the unfortunate reality is that we’re not treating generative AI as though it’s in alpha.

ChatGPT is being used by businesses, students, and even media organizations with the expectation that it is a polished, complete product that can be relied upon. Technology website CNET even used it to write articles on personal finance, only to have to issue corrections after the algorithm produced multiple factual inaccuracies. CNET’s woes were compounded by its lack of transparency: there was no disclosure that the articles were algorithmically generated. Each piece was bylined “CNET Money Staff,” despite no human having actually written the content.

With generative AI still at the earliest stages of development, the risk of a high-profile mistake—one that provokes action from lawmakers—is high. If the industry is unable to mitigate risk or engage productively with lawmakers, it may find itself facing a crackdown that defines the trajectory of the technology for decades to come.

This isn’t theoretical. What I have described has occurred with other industries, from crypto to vaping.

Learning from history

Let’s take Juul, for example. Founded by two Stanford graduate students, the company aimed to provide a convenient and stylish alternative to combustible tobacco for adult smokers. The company launched its first product in 2015 and quickly became a runaway success, with Altria—the maker of Marlboro cigarettes—acquiring a 35% stake for $12.8 billion. At its peak, Juul was valued at $38 billion.


But it wasn’t to last. Admittedly, the company had problems of its own making, particularly when it came to its marketing materials, which some lawmakers believed were inappropriately appealing to underage consumers. Additionally, Juul had made claims about the safety of its product and its efficacy as a smoking cessation device that violated FDA rules. But these weren’t the straws that broke the camel’s back. 

Around the same time, the wider vaping industry was facing scrutiny over safety concerns and the accessibility of e-cigarettes to children. A legislative crackdown followed, with Juul forced to discontinue certain flavors and nicotine strengths and, in 2022, withdraw from the U.S. market entirely (although this was halted by a court order). The same year, Altria wrote down the value of its stake to just $450 million, a 96.5% decline from the $12.8 billion it paid.

I mentioned Juul for a good reason. It’s an example of a rapidly growing and highly popular product in a booming space that flew too close to the sun. If you recklessly continue to harm people because it makes you a profit, legislators will shut you down, and you won’t even make the money you think you’ll make.

The same process is arguably happening to crypto right now. Massive success preceded catastrophic failure. Like generative AI, crypto lacks a trusted figurehead who can engage with lawmakers and skeptics. The closest it came was Sam Bankman-Fried, who lobbied for moderate crypto legislation and gave generously to both sides of the aisle, only to emerge as a uniquely reviled figure who perpetrated one of the biggest frauds in American corporate history.

Bankman-Fried’s fraud didn’t merely harm consumers. It led to the collapse of Silvergate Bank—a crypto-friendly financial institution with three decades of history. Investors like Sequoia Capital and Galois Capital were forced to write off multimillion-dollar investments, with the former losing an eye-watering $213.5 million. 

It’s impossible to imagine a scenario where lawmakers and regulators don’t respond by imposing punitive regulations on exchanges and other Web3 technology companies. These rules will almost certainly be intended to limit the risk of another FTX-size calamity. But they’ll have the knock-on effect of limiting the types of products available to consumers and businesses, and the pace of innovation in the DeFi and crypto space.

A better path

The key difference between generative AI and the two examples cited above is the public consensus around its benefits—and I say that as someone who passionately believes in the fundamentals of decentralized economics.

While we have yet to fully fathom the impact generative AI will have on our society, it will undoubtedly be significant, and most likely positive. It has the potential to democratize knowledge and learning, allowing anyone to obtain a simple explanation for any question at a moment’s notice. It will help businesses become more productive, and thus more profitable. They’ll iterate faster, respond to market needs with increased agility, and provide high-quality customer service around the clock.

AI could transform the world for the better. But we’ll only get there if we take proactive steps to limit potential harms. Tech has been reaping the consequences of “move fast and break things,” and this time, when so much is on the line for the good of society, it’s more important than ever to remember: With great power comes great responsibility. And with great screwups come greater crackdowns.


Michael Gao is founder and CEO of Fabric Cryptography, a cryptography hardware accelerator company.
