
How the Trevor Project is using AI to prevent LGBTQ suicides

Over the past three years, the nation’s largest suicide prevention organization for LGBTQ youth has undergone a major tech overhaul, most recently using machine learning to assess high-risk outreach.

[Photo: courtesy of the Trevor Project]

By KC Ifeanyi | 5 minute read

In 2017, when John Callery joined the Trevor Project, an LGBTQ suicide prevention organization, as its director of technology, he had a galvanizing, if not daunting, mandate from the newly appointed CEO, Amit Paley: “Rethink everything.”

“I think my computer had tape on it when I started on the first day,” says Callery, who’s now the Trevor Project’s VP of technology. “In a lot of nonprofits, the investments are not made in technology. The focus is on the programmatic areas, not on the tech as a way of driving programmatic innovation.”

The Trevor Project was founded in 1998 as a 24-hour hotline for at-risk LGBTQ youth. As beneficial as talking to a counselor on the phone can be, advances in technology eventually made the Trevor Project's phone-only approach seem not only dated but also inadequate to meet demand.

[Photo: courtesy of the Trevor Project]
According to a recent study by the Trevor Project, more than 1.8 million LGBTQ youth in the United States seriously consider suicide each year. It’s a grim statistic that’s been exacerbated by the current administration.

“The day after the presidential election in 2016, our call volume at the Trevor Project more than doubled within 24 hours,” says Paley, a McKinsey & Company alum who worked as a volunteer counselor for the Trevor Project before becoming CEO in 2017. “It was just heartbreaking to hear from young people who really were not sure if there was a place for them.”

John Callery [Photo: courtesy of the Trevor Project]
Paley recognized how the Trevor Project’s technological shortcomings were underserving LGBTQ youth, and, with Callery, he has prioritized more forward-thinking solutions over the past three years, including expanding to 24/7 text and chat services and launching TrevorSpace, an international LGBTQ social network.

On the flip side of those solutions, though, was the issue of how to better manage the needs of people reaching out to the Trevor Project through these new outlets. “When youth in crisis reach out to us via chat and text, they’re often connected to a counselor in five minutes or less,” Callery says. “We wanted to find a way to connect LGBTQ youth at the highest risk of suicide to counselors as quickly as possible, because sometimes every minute counts.”

Continuing to operate under Paley’s prompt to “rethink everything,” Callery led the effort to enter the Trevor Project in Google’s AI Impact Challenge, an open call for organizations proposing to use AI for positive societal change. More than 2,600 organizations applied, and the Trevor Project was one of 20 selected, receiving a $1.5 million grant to incorporate machine learning and natural language processing into its services.

Leveraging AI in suicide prevention

Leveraging AI in suicide prevention has gained traction over the years. Data scientists at Vanderbilt University Medical Center created a machine-learning algorithm that uses hospital admissions data to predict suicide risk in patients. Facebook rolled out AI tools that assess text and video posts, dispatching first responders in dire situations that require intervention.

For the Trevor Project, someone reaching out via text or chat is met with a few basic questions such as “How upset are you?” or “Do you have thoughts of suicide?” From there, Google’s natural language processing model ALBERT gauges responses, and those considered at a high risk for self-harm are prioritized in the queue to speak with a human counselor.
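The Trevor Project hasn’t published its production pipeline, but the flow described above (scoring a short set of intake answers with an ALBERT-based classifier, then ordering the counselor queue by risk) can be sketched roughly as follows. The model checkpoint, labels, threshold-free queue ordering, and sample data are illustrative assumptions, not the organization’s actual setup.

```python
# Minimal sketch: score intake answers with an ALBERT-based classifier and
# order a counselor queue so higher-risk contacts are seen first.
# "albert-base-v2" is the public base checkpoint; a real deployment would
# use a classifier fine-tuned on labeled crisis conversations.
import heapq
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "albert-base-v2"  # assumption: stand-in for a fine-tuned risk model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def risk_score(intake_answers: list[str]) -> float:
    """Return the model's probability that the answers indicate high risk."""
    text = " ".join(intake_answers)  # e.g. answers to "How upset are you?"
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assumption: label index 1 means "high risk" in the fine-tuned model.
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Max-priority queue via negated scores: higher risk is popped first.
queue: list[tuple[float, str]] = []
for contact_id, answers in [
    ("contact-001", ["Very upset", "Yes, I have thoughts of suicide"]),
    ("contact-002", ["A little stressed about school", "No"]),
]:
    heapq.heappush(queue, (-risk_score(answers), contact_id))

while queue:
    neg_score, contact_id = heapq.heappop(queue)
    print(f"{contact_id}: risk={-neg_score:.2f}")
```

In practice, a classifier like this would be fine-tuned on labeled crisis conversations and validated extensively before it influenced a live queue.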

“We believe in technology enabling our work, but it does not replace our work,” Paley says. “That person-to-person connection for people in crisis is critical. It’s the core of what we do. The way that we are using technology is to help facilitate that.”

[Photo: courtesy of the Trevor Project]
To that end, Callery was aware that it could seem off-putting for someone in crisis to reach out for help only to be met with a chatbot. Using survey data, Callery’s team found that on the online chat service, the AI-generated questions simply felt like filling out an intake form before speaking to a counselor.

“But on TrevorTexts, we did need to really differentiate the bot experience and the human experience,” Callery notes.

To do that, he worked with Google Fellows specializing in UX research and design to refine the AI’s messaging so it clearly indicates when someone reaching out is answering automated questions and when they’ll begin speaking to an actual crisis counselor.

That seemingly small communication bridge didn’t exist before the Google collaboration, but it has proven effective.

“If we didn’t take the time and attention to do a lot of that user research, we would have had a bunch of assumptions and probably mistakes,” Callery says. “That could have been a turnoff for young people reaching out to our service.”

Avoiding bias

Another AI blind spot the Trevor Project aimed to avoid: algorithmic bias.

It’s been well documented how gender and racial biases can creep into AI-based functions. Being somewhat late to the AI game has given the Trevor Project the advantage of learning from the past mistakes of other companies and organizations that didn’t factor in those biases at the outset.

“Sitting at the intersection of social impact, bleeding-edge technology, and ethics, we at Trevor recognize the responsibility to address systemic challenges to ensure the fair and beneficial use of AI,” Callery says. “We have a set of principles that define our fundamental value system for developing technology within the communities that exist.”

Working with the Trevor Project’s in-house research team, Callery and his tech organization identified a number of groups across intersections of race and ethnicity, gender identity, sexual orientation, and other marginalized identities that could be harmed by AI biases stemming from differences in language and vernacular.

“Right now, we have a lot of great data that shows that our model is treating people across these groups fairly, and we have regular mechanisms for checking that on a weekly basis to see if there are any anomalies,” Callery says.
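The article doesn’t spell out what those weekly checks look like, but a minimal version of such a fairness audit, assuming the model’s output is a simple high-risk flag and demographics are self-reported, could compare flag rates across groups and surface outliers. The column names, tolerance, and sample data below are hypothetical.

```python
# Minimal sketch of a weekly fairness check: compare the model's high-risk
# flag rate across self-reported demographic groups and mark any group whose
# rate deviates from the overall rate by more than a chosen tolerance.
import pandas as pd

TOLERANCE = 0.05  # assumed acceptable gap between a group's rate and the overall rate

def weekly_fairness_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """df has one row per contact with a boolean 'flagged_high_risk' column."""
    overall_rate = df["flagged_high_risk"].mean()
    report = (
        df.groupby(group_col)["flagged_high_risk"]
        .agg(flag_rate="mean", contacts="size")
        .reset_index()
    )
    report["gap_vs_overall"] = report["flag_rate"] - overall_rate
    report["anomaly"] = report["gap_vs_overall"].abs() > TOLERANCE
    return report

# Example with made-up data:
data = pd.DataFrame({
    "gender_identity": ["trans", "trans", "cis", "cis", "nonbinary", "nonbinary"],
    "flagged_high_risk": [True, False, False, False, True, True],
})
print(weekly_fairness_report(data, "gender_identity"))
```

A flagged anomaly wouldn’t prove bias on its own, but it would prompt a closer look at how the model handles that group’s language.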

[Photo: courtesy of the Trevor Project]
On the other side of crisis outreach, the Trevor Project is also using AI to better train its counselors with message simulations. The Trevor Project’s use of AI coupled with other initiatives, including a new volunteer management system and a revamped digital asynchronous training model, has more than tripled the number of youth served since 2017.

By the end of 2023, the organization aims to serve the 1.8 million LGBTQ youth in the United States who seriously consider suicide each year.

“A couple of years ago when Amit started, he wanted us to really think about two core pillars to growth,” Callery says. “We needed to drive down what we call our costs-per-youth-served by 50%. That means that we can help two times the number of young people with the same amount of funding that we have. And the second pillar is that we’ll never sacrifice quality for scale. We’ll always maintain or improve the quality that we provide to youth in crisis.”
