
This chatbot teaches counselors how to talk to LGBTQ kids in crisis

The Trevor Project partnered with Google.org to develop the Crisis Contact Simulator, a winner in Fast Company’s 2021 Innovation by Design Awards. Now it’s being used to train operators on a suicide-prevention hotline.


I’m staring at a chat window like any other, but my stomach is tight as I see the little chat bubbles indicating that someone on the other end is typing. They say their name is Riley. And since this is an LGBTQ help line, I know that Riley is in trouble.


“I’ve been wanting to come out as genderqueer to my parents. I thought that I would first try to come out to my friends. . . . I came out to a few of them last week and it didn’t go so well. . . . It was just awful. . . . They acted like I was such a freak.”

What can I even say to Riley in this moment that doesn’t sound trite or pointless or too familiar? That’s precisely what I’m about to learn. Riley is not a real person, but an AI (artificial intelligence) persona. And this chat? It’s a training tool called the Crisis Contact Simulator, developed by the Trevor Project—the nonprofit focused on suicide prevention for LGBTQ youth, which runs both a 24/7 hotline and a chat service for anyone who needs support.

“Our mentality is that no crisis is too large or too small,” says Daniel Fichter, head of AI and engineering at the Trevor Project. “A lot of important work in suicide prevention happens when people aren’t immediately suicidal, but they find it would be helpful to talk about where they’ve been and might be going emotionally.”


In a world in which most chatbots are used for mercenary reasons like retail cost cutting and phishing, the Crisis Contact Simulator is a landmark project. It won our 2021 Innovation by Design Award for Social Good because it leverages the seamless user experience of automation to help the Trevor Project’s training staff onboard more counselors.

The AI tool is a digital replica of the nonprofit’s existing training regime. Traditionally, a member of the staff role-played a character named Riley, an intentionally gender-nonspecific person, to train new recruits. This dynamic stage play came in several versions—Riley could vary in age, identity, background, and suicide risk—and talking to each different Riley offered a vital way for volunteers to practice and hone their counseling skills before they were tasked with talking through real crises.

[Image: The Trevor Project]
But building Riley’s digital doppelgänger was a challenge. This was a chatbot that needed to be able to simulate rich conversations about deep emotions, and do so for nearly an hour without breaking character.


“It was important for us that Riley could realistically go anywhere the counselor takes the conversation. It’s important for the tone and emotional cadence of the conversation to be true to Riley’s emotional state,” Fichter says. “It’s also important that the counselor can learn to feel comfortable staying with Riley’s hard feelings, asking more about them. And for Riley to be comfortable answering those questions if the trainee asks in a nonjudgmental way.”

This sort of dynamic conversation would not have been possible to build with conventional chatbots, which use pre-scripted, branching conversations triggered by key phrases to simulate discourse—with responses that are often open-ended or intentionally vague to allow fuzzier interpretation. Imagine a teen typing a phrase like “My best friend outed me in front of my class, and now I want to drop out” and a chatbot responding “It’s nice to have friends” or “School is nerve-racking!” or “It’s important to get a degree.” A conversation like that, built the way the chatbots of yore were, might flow logically to some small extent, but it would hardly flow emotionally.
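To make the contrast concrete, here is a minimal sketch of how that older, rule-based style of chatbot works—match a keyword, return a canned reply. This is purely illustrative (it is not the Trevor Project’s code, and the rules and replies are invented for the example), but it shows why such a bot answers on-topic while missing the emotional point.

```python
# Illustrative only: a toy keyword-matching chatbot in the old,
# pre-scripted style. Rules and replies are invented for this example.
RULES = [
    ({"friend", "friends"}, "It's nice to have friends."),
    ({"school", "class"}, "School is nerve-racking!"),
    ({"degree"}, "It's important to get a degree."),
]

DEFAULT_REPLY = "Tell me more."


def reply(message: str) -> str:
    words = set(message.lower().split())
    for keywords, canned_response in RULES:
        # The first rule whose keyword appears in the message wins.
        # The bot has no memory and no sense of emotional context.
        if words & keywords:
            return canned_response
    return DEFAULT_REPLY


print(reply("My best friend outed me in front of my class, and now I want to drop out"))
# -> "It's nice to have friends."  Logically adjacent, emotionally tone-deaf.
```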

Instead, Fichter’s team began with open-source AI models, which can learn how to converse simply by observing many, many conversations. Alongside volunteers from Google.org (including a computational linguist), they fed the Trevor Project’s own Riley training transcripts into the machine to train the digital Riley persona.
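In broad strokes, that approach looks like fine-tuning a conversational language model on a corpus of role-play transcripts so it learns to respond in character. The sketch below, using the open-source Hugging Face libraries, is an assumption-laden illustration of that general technique—the base model name, the transcript file path, and the hyperparameters are placeholders, not the Trevor Project’s actual configuration.

```python
# A minimal sketch of fine-tuning an open-source language model on
# conversation transcripts. Model name, file path, and hyperparameters
# are illustrative stand-ins, not the Trevor Project's real setup.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "gpt2"  # stand-in for whichever open-source model is chosen

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Hypothetical data file: each line holds one training transcript,
# formatted as alternating "Counselor:" / "Riley:" turns.
dataset = load_dataset("text", data_files={"train": "riley_transcripts.txt"})


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)


tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="riley-sim",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    # Causal language modeling: the model learns to predict the next token
    # in the transcript, picking up Riley's voice and cadence along the way.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```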


What ensued was nine months of testing and tweaking before engineers finally found a formula that produced a convincing Riley. Since launching the training tool in January 2021, the Trevor Project has continued to improve it by asking trainers to read through the AI transcripts and offer suggestions. Basically, that means the people who used to play Riley are now teaching the AI Riley how to act more like Riley. “There’s a natural feedback loop, where if the instructors wish the simulator did anything differently, their eyes are on the transcript, and they can let us know,” Fichter says.

Today, the AI Riley offers the first practice session to dozens of new trainees entering the fold every month at the Trevor Project. And unlike its human counterparts, Riley can accommodate any schedule. That’s important for volunteers, since 68% of the counselors take night and weekend shifts, which are often on top of a 9-to-5 paid job. Meanwhile, the Trevor Project’s own staff has more time to focus on other vital work inside the organization.

“It meant a lot to us to be able to take tech that, in general, has so much [misuse] and be able to use it in a particularly safe and constructive way,” Fichter says. “And hopefully, in a way other organizations in public health and suicide prevention can build on with our help.”


About the author

Mark Wilson is a senior writer at Fast Company who has written about design, technology, and culture for almost 15 years. His work has appeared in Gizmodo, Kotaku, PopMech, PopSci, Esquire, American Photo, and Lucky Peach.
