
I tried using ChatGPT to write this article

Here’s what I learned from my failed experiment.


By Rob LoCascio

If there’s one tech trend that’s dominating the conversation in 2023, it’s generative AI. Look no further than your LinkedIn feed; it’s likely swamped with the latest hot takes on how to leverage generative AI to “work smarter, not harder,” or “10X your impact with three simple tricks!” We have the recent launch of ChatGPT to thank for opening the floodgates. As many have already said, its easy-to-use interface and quick answers bring to mind the Arthur C. Clarke quote: “Any sufficiently advanced technology is indistinguishable from magic.”

So when I sat down to write this article, I decided to open up ChatGPT and submit a prompt—just to see what it would come back with. “Write a 750-word article about how AI is becoming the ‘digital front door’ for brands,” I wrote, “and make it seem like a Fast Company article.”  

At first glance, the result was pretty impressive. Had I chosen to submit the article that ChatGPT had written, nobody would have guessed that a computer program had spit it out in 15 seconds. (Luckily, some very smart people are already developing technology to distinguish AI-written pieces from human ones; GPTZero is a great example.) In fact, the article it returned to me hit all of the usual highlights: chatbots, personalized marketing, and using AI-powered analytics to cut costs. It also hit on the usual challenges: privacy, security, and job displacement.

I could have easily cut and pasted the content. So, ethical and journalistic standards aside (and those are very big asides), why not? Why wouldn’t I rely on this magic-feeling technology to produce my desired outcome? Here’s the thing: While ChatGPT got the basic structure of a thought leadership article right, it lacked creativity, a unique point of view, and insight. That is to say, it lacked the things that make writing and communication feel truly human. There was no spark of life in that article that would have gotten you, the reader, to look past the first paragraph. That’s certainly not the outcome I was looking for.

Generative AI should be used for better business outcomes

Of course, my experience here was just a simple experiment. Still, it made me reflect on a few considerations, especially as we head toward a world that’s going to see a skyrocketing number of AI-driven interactions. (Even before ChatGPT, in mid-2022 Gartner predicted chatbots would become the primary customer service channel for a quarter of organizations by 2027.) First of all, writers aren’t going away overnight. But more generally, consumers are going to seek out experiences with businesses that treat them like human beings, even if those businesses use AI to do so. And businesses will want to turn those human-feeling experiences into better business outcomes.

As the guy who invented webchat for brands back in the ’90s, I’ve spent a lot of time talking to brands about how they can create better business outcomes. To be honest, getting them to understand and be comfortable with AI has been a hurdle. The launch of ChatGPT, and the excitement around it, makes that part of my job easier. Now it’s time to shift the conversation: If you’re excited about generative AI, do you know how and why to use it to drive legitimately better business outcomes?

For example, if a customer comes to your digital front door and asks one of the following questions, the AI you’re using to staff that front door should respond in a mutually beneficial way:

I’m the kind of person who buys gifts at the last minute. What can you recommend?

I’m the traveler whose luggage you lost. How are you getting it back to me?

I’m a caregiver helping a loved one with health issues. When can I get this prescription filled?

You can’t just answer these questions with “the right words” crowdsourced from the public internet (as ChatGPT does). Instead, you should be thinking deeply about whether your AI-powered experiences are set up to deliver outcomes:

  • Is the AI just writing nice-sounding language? Not good enough. It also needs to be able to surface business insights that inform your strategy.
  • Is it trained on generally available information? If so, you’re just giving your customers the same experience as any other business. Instead, you need to make sure the unique needs and interests of your enterprise are reflected in the data set that your AI uses to drive the conversation.
  • Do you have humans in the loop to ensure your experiences are both accurate and optimized? (As a TIME investigation reported, OpenAI outsourced this work to Kenya for $2 per hour, which has raised some concerns.)
  • Is the AI producing nonbiased responses? Bias can be dangerous for your customers and lethal to your brand. Consider how you can start working within ethical frameworks, such as those developed by organizations like EqualAI.

With every technological advance, there are wild promises thrown out from all corners that obscure the real work that needs to be done. Today’s “AI influencers” are no different, and it’s time to step back and take stock of how business leaders can cut through their hype and generate actual results. That said, there’s one more lesson from my failed article-writing project, one that has me excited about the future: Every experiment we try gets us one step closer to those better outcomes.

