
Mental-health counseling, text-based AI, and the future of your relationship with your therapist.

People are using AI for therapy, whether the tech is ready for it or not

[Photos: 4×6/Getty Images; Billy Huynh/Unsplash]

By Ryan Broderick | 6 minute read

OpenAI’s artificial intelligence tool ChatGPT launched last November and quickly set off a panic across all sectors of society. Programmers are using it to code whole websites. Professors are worried it can write papers—though it seems they can use it just as easily to grade them. At least one news outlet has even used it covertly to write SEO blog posts.

But ChatGPT is powerful enough that Microsoft, which holds a $1 billion stake in OpenAI, the company that created it, plans to integrate it into its search engine, Bing. That means we’re only at the beginning of understanding what ChatGPT and similar text-based AI technology can do.

For instance, could it ever replace your therapist?

It’s a fairly uncomfortable idea to consider, but users—and companies—are already experimenting with what an AI mental health professional might look like.

Earlier this month, Robert Morris, the cofounder of Koko, an online mental health services nonprofit, caused a huge commotion when he announced on Twitter that Koko had “provided mental health support to about 4,000 people—using GPT-3.”

Koko, at its simplest, is a company that helps online platforms guide users toward mental health resources they might need. For example, if you search “thinspo”—a term associated with content glamorizing eating disorders—on a site like Pinterest, Koko is responsible for the technology that provides you with a link to the National Eating Disorders helpline. Koko, itself, does not use GPT-3 technology.
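To picture what that routing layer does, here is a minimal sketch of keyword-to-resource matching in Python. The term list, placeholder link, and function name are illustrative assumptions, not Koko’s actual implementation.

```python
# Minimal sketch of keyword-to-resource routing in the spirit of what
# Koko provides to platforms. The term list, placeholder link, and
# function name are illustrative assumptions, not Koko's actual code.

FLAGGED_TERMS = {
    # search term that suggests a user may need support -> resource to surface
    "thinspo": "link to the National Eating Disorders helpline (placeholder)",
}

def resource_for_query(query: str) -> str | None:
    """Return a support resource if the search query contains a flagged term."""
    normalized = query.strip().lower()
    for term, resource in FLAGGED_TERMS.items():
        if term in normalized:
            return resource
    return None

print(resource_for_query("thinspo"))         # -> helpline link
print(resource_for_query("dinner recipes"))  # -> None
```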

When I catch up with him, he tells me that he “misspoke” in his initial Twitter thread. But, he says, “It’s kind of my life’s work to figure out, at this moment, how we can bring the best service possible to a person.”

Morris did run an experiment integrating GPT-3 into a Discord bot called Koko bot, which let users anonymously give each other advice, with the option to have the AI help them. You could ask Koko bot for help and tell it about a problem you’ve been facing. Another user could tell Koko bot that they would like to help someone. Koko bot would then send the problem you’d written to that person, who could offer some feedback. Think of it as an anonymous advice-column matching system. Koko bot also gave helpers the option to have “Koko bot” answer: the AI would write up a response, a human would approve it, and it would be sent back to the original poster.
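As a rough sketch of that human-in-the-loop flow, the Python below drafts a reply with a language model and only sends it once a human approves it. The model, prompt, and approval step are assumptions for illustration; this is not Koko’s actual code.

```python
# Hedged sketch of a Koko-bot-style human-in-the-loop reply flow.
# The model, prompt, and approval step are assumptions for illustration;
# this is not Koko's actual code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_reply(problem: str) -> str:
    """Ask the model to draft a supportive reply to an anonymous post."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the GPT-3 model Morris used
        messages=[
            {"role": "system",
             "content": "Draft a brief, supportive reply to this anonymous post."},
            {"role": "user", "content": problem},
        ],
    )
    return response.choices[0].message.content

def human_approves(draft: str) -> bool:
    """A human helper reviews every AI draft before it is sent."""
    print(f"Proposed reply:\n{draft}")
    return input("Send this reply? (y/n) ").strip().lower() == "y"

def handle_post(problem: str) -> str | None:
    """Return the approved reply, or None if the human helper rejects it."""
    draft = draft_reply(problem)
    return draft if human_approves(draft) else None
```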

“What happened was, when we launched this, the people receiving it got a message that said, ‘Someone replied to you, cowritten by Koko bot,’ and they really liked it,” Morris says. “Like, every response you get, you can rate ‘good,’ ‘okay,’ ‘bad.’ And the ratings were much better with GPT-3.”

But, according to Morris, there were problems over time with using an AI this way. Namely, AI-generated text starts to feel, well, like AI-generated text. “For me, and my experience of using this and playing with this for a year and actually trying it, I honestly don’t feel that much,” Morris says. “When I get this stuff, it’s like, ‘Yeah, that feels right.’ But I don’t feel supported.”

Morris’s experiment, at least as it was phrased in his Twitter thread, was not received well. It was lambasted by other users and turned into multiple news stories. Though, for what it’s worth, users of Koko bot knew that the bot was helping them write and receive these messages.

But the Koko story was a great illustration of how much trust is required in any kind of effective mental health treatment, and it’s unclear exactly how much trust we should be putting in the tech companies developing these services. “ChatGPT and its progeny will be used to harness private mental health data of many unsuspecting consumers for more personalized ads and consumer profiling,” tech commentator Michael Kevin Spencer tells Fast Company.

Spencer writes the AI Supremacy newsletter and is worried about Big Tech moving further into healthcare. “The operating system as therapist has already been leaked into the collective psyche from films like Her (2013), and ChatGPT reminds us all of this reality,” he says. “I fear that our mental health and the industry will be exploited by AI and our sensitive data mined for nefarious ends.”


But it’s not just companies that are experimenting with letting their users try out this technology. Users are doing it themselves. Reddit users are sharing tips on how to “jailbreak” ChatGPT so it can give therapeutic answers to queries.

“This conversation is a human intelligence debugging an artificial intelligence to debug human intelligence,” one user wrote. “If a therapist is a mirror, this is a hall of mirrors. Pretty cool.”

Dan Shipper, CEO of the newsletter company Every, uses an AI not as a replacement for therapy, but as a way to gain more insights from his therapy sessions. “I have a therapist,” Shipper says. “He knows about what I do with this, and I do it with him.”

Shipper records his sessions, transcribes them with OpenAI’s transcription software, Whisper, asks an AI to generate summaries, and then he and his therapist go over the summaries together. “I think it makes apparent patterns that come out of the session that maybe you kind of know are there, but that haven’t been fully and explicitly said out loud,” he says. “It’s been very helpful for me. And I think he’s quite curious about it, too.”
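Shipper’s pipeline is simple enough to sketch: transcribe a recording with Whisper, then ask a chat model to summarize the transcript. The Python below is a rough approximation; the file name, model choices, and summary prompt are my assumptions, not Shipper’s actual setup.

```python
# Rough sketch of a record -> transcribe -> summarize pipeline like the
# one Shipper describes. The file name, models, and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def transcribe(path: str) -> str:
    """Transcribe a recorded session with OpenAI's Whisper API."""
    with open(path, "rb") as audio:
        result = client.audio.transcriptions.create(model="whisper-1", file=audio)
    return result.text

def summarize(transcript: str) -> str:
    """Ask a chat model to surface recurring themes in the transcript."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Summarize the main themes and recurring patterns "
                        "in this therapy-session transcript."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

print(summarize(transcribe("session.m4a")))  # hypothetical recording file
```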

And Shipper’s therapist isn’t the only therapist curious about how AI might interact with the profession. Dr. Finian Fallon, a psychotherapist from Ireland, says that the emerging world of AI therapy is as exciting as it is scary. “There are many ethical questions yet to be resolved, but even if the ethics don’t like AI therapy, there are plenty of people who can’t access or can’t afford person-to-person therapy and will seek whatever help they can find,” he says.

Fallon likens therapists’ current resistance to integrating AI to a similar reluctance to conduct therapy online or over video five years ago. “We therapists need to get used to the realities of what the market can and will provide and what people need,” he says. “What I’ve seen so far of AI therapy is that it equates to the skill level of a relatively new therapist, but a more sophisticated, or perhaps a more personally meaningful, interaction might be needed by clients after an initial experience.”

Fallon also shrugs off concerns that AI might start recycling advice or therapeutic techniques, the same way that generative-AI art programs tend to spit out rehashed images. “A focus on AI will let us discover how important the relationship in therapy really is and whether people need other people to heal, or whether being understood and advised by an AI program is just as effective,” he says. “It’s time we asked and answered these questions.”

While Fallon is correct that we don’t have answers to those questions, it’s also not assured that we’ll like them when we find them. But the possibilities are captivating nonetheless.

“AI may expose many more people to the experience of therapy,” Fallon says, “and if it is done safely and ethically, it might help us understand the human condition more clearly.”

Interestingly enough, ChatGPT itself is a bit more skeptical of its ability to act as a human therapist. I asked it what it thought about humans trying to use it for therapy, and it conceded that it lacks a few important things a good therapist needs. Most importantly, it can’t hear or see its patients.

“It is unlikely that AI can fully replace human therapists as the therapeutic relationship is built on trust, empathy, and understanding, which are qualities that are difficult to replicate with current AI technology,” it replied. “Additionally, human therapists are trained to recognize and respond to nonverbal cues, which AI is not yet able to do.”



ABOUT THE AUTHOR

Ryan Broderick is a tech journalist who writes the Garbage Day newsletter and hosts the podcast The Content Mines.