How can you help someone in crisis feel better? It requires deeply human intelligence and communication skills, and above all, listening. It is also one of a growing number of situations where new ways of listening—supported by artificial intelligence—can help.
Using text messages exchanged through Crisis Text Line, researchers from Stanford University led by Jure Leskovec used natural language processing, a form of artificial intelligence, to identify counseling strategies that were most helpful to people struggling with issues like eating disorders, anxiety, and suicidal thoughts. Their research found, for example, that being adaptable and creative in a conversation with a person in crisis is key, as is shifting the perspective to focus on other people and on the future.
More than 83 million messages have been exchanged through Crisis Text Line since 2013, when it became the first text-message-based national crisis hotline. When someone texts Crisis Text Line, they receive a message back from a volunteer crisis counselor, and as the conversation unfolds, the counselor deploys research-backed approaches to de-escalate the crisis. Twenty-seven times a day, on average, Crisis Text Line initiates an active rescue by local emergency personnel for someone thought to be in immediate danger.
Through multiple feedback loops, Crisis Text Line’s technology learns from and informs a network of humans—those who text in for help, crisis counselors, and platform engineers. With a core ethic of using data to help people, Crisis Text Line is a pioneer in a new frontier of mission-driven, human-centered artificial intelligence.
As the philanthropic sector addresses complex, entrenched problems, new techniques for data analysis are increasingly useful for listening to beneficiaries and responding to their needs. Many of these techniques fall under the umbrella of artificial intelligence—they seek to find patterns across many data points, to learn from complex sources like human language, and to find meaning amid haystacks of information. Already, nonprofit organizations are using AI approaches to turn multiple sources of community feedback into comparable data points, to solicit information about people’s interactions with police via chatbots, and to prioritize calls for rescue efforts on social media after natural disasters.
Philanthropy’s emerging “feedback movement” is about improving nonprofits’ delivery of critical services—and it addresses a broader level of change as well. With leadership from the Fund for Shared Insight, created by funders including the Ford, Hewlett, and Rita Allen foundations, this area of collaborative effort seeks to equip funders and nonprofits to be more responsive partners, helping to foster positive impact for people in ways they themselves help to define.
In some ways, we are at a similar juncture with artificial intelligence now as we were with social media ten years ago, when nonprofits were just beginning to explore how different platforms could be best used to engage with their particular communities. Reaching the positive potential of these tools requires an inclusive process of questioning—including examining how data is collected, how it is used, why we are using it, and for whom. Navigating these questions successfully requires collaboration among nonprofits, researchers, and funders—and, most importantly, with the people we seek to benefit. Crisis Text Line provides a helpful framework for considering future efforts.
At the Beginning, Purposefully Include Feedback
Nancy Lublin knew that teens in crisis weren’t getting the help they needed, and that technology could help. As the CEO of DoSomething.org, which organizes young people to contribute to campaigns for social good, she was struck by the responses they would sometimes get back from teens, via text message, about unrelated issues they were struggling with in their lives: bullying, addiction, self-harm, abuse.
“We realized we had to stop triaging this and we had to build a crisis text line for these people in pain,” Lublin recalls in a TED Talk.
As of 2015, 88 percent of teens had access to a cell phone, and they typically sent and received 30 texts a day. Studies suggest only two percent of high schoolers use phone crisis hotlines. Meanwhile, suicide rates have been on the rise across America, and suicide is now the second leading cause of death for people between the ages of 10 and 34. Lublin created Crisis Text Line to bridge this gap in our national mental health infrastructure, and it was quickly inundated with messages.
From the beginning, Crisis Text Line was designed with technology at the table. Lublin's first hires were a Chief Technology Officer and a Chief Data Scientist, Bob Filbin. These initial decisions were core to Crisis Text Line's success: data processing at scale lets lessons drawn from millions of conversations guide the system toward greater effectiveness.
“We believed data in itself from Crisis Text Line could change and save lives, which is why we built from the ground up with data and technology,” says Filbin. He describes data as the “exhaust” of the system—always collected, and always feeding back into the system to create improvements.
For example, one approach the crisis counselors use is to reflect back to texters their strengths, as a way of helping them feel hopeful and ready to work on solutions to a problem. Through data analysis and feedback, Crisis Text Line identified three affirmations that have the greatest impact: pointing out that someone is brave, proud, or smart is most likely to shift the conversation in a positive direction.
After a Crisis Text Line conversation has concluded, the texter receives a follow-up question: "Was this conversation helpful?" with a yes/no choice. Crisis counselors complete a report on each conversation, and the text messages themselves are saved, with personally identifying information scrubbed. The Crisis Text Line team also holds in-depth conversations with crisis counselors to understand the experiences behind the results it sees. In combination, the text messages, surveys, reports, and interviews provide a valuable source of information about what works and what doesn't, allowing for continual improvement. Now, 86 percent of those who text in report feeling better after the conversation.
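The scrubbing step alone is nontrivial engineering. As a minimal sketch (not Crisis Text Line's actual pipeline), obvious identifiers such as phone numbers and email addresses can be masked with pattern matching before a message is stored for analysis; a real system would also need to catch names, addresses, and subtler identifiers, typically with machine-learning-based entity detection and human review.

```python
import re

# Illustrative only: mask the most obvious identifiers before a
# message is stored for analysis. A production pipeline would go
# much further than these two patterns.
PATTERNS = [
    (re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def scrub(message: str) -> str:
    """Replace phone numbers and email addresses with placeholders."""
    for pattern, placeholder in PATTERNS:
        message = pattern.sub(placeholder, message)
    return message

print(scrub("text me at 555-867-5309 or jane@example.com"))
# -> text me at [PHONE] or [EMAIL]
```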
Ideally, a purposeful, inclusive feedback practice is built in from the beginning of any social-change effort, allowing an organization to regularly check its assumptions and align its work more closely with the needs and routines of the people it is meant to reach. But social-service organizations can collect actionable feedback whether they built feedback systems into their operations from day one or are just getting started. Feedback Labs serves as a valuable hub for those seeking to gather more meaningful feedback data and apply it to their work.
One particularly promising tool is Listen for Good, a concise survey developed by the Fund for Shared Insight that lets nonprofits quickly assess satisfaction among the people they serve and target opportunities for improvement. More than 150 organizations have now used Listen for Good, and a public version is planned for 2020. It is also a growing source of data about beneficiary experience across the social sector, with machine learning among the analytical tools being deployed to understand the results at larger scale.
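Listen for Good is built around a Net Promoter-style question ("How likely is it that you would recommend...?", scored 0 to 10). As a rough sketch of how raw responses become a single number comparable across programs, here is the standard Net Promoter calculation; this assumes the conventional scoring bands, not Shared Insight's exact methodology.

```python
def net_promoter_score(scores):
    """Standard NPS: percent promoters (9-10) minus percent
    detractors (0-6), on a -100 to 100 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical responses from one program's survey
print(round(net_promoter_score([10, 9, 8, 7, 6, 10, 3])))  # -> 14
```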
Focusing on the Human Benefit of Machine Learning
Along with the promise of artificial intelligence come serious concerns, including the potential for loss of privacy, targeted manipulation, and institutionalized discrimination. There is no doubt that, used poorly, artificial intelligence can automate bias and disconnection rather than support community resilience. For the social sector, a values-driven, human-centered, inclusive development process can help mitigate these ethical risks.
Crisis Text Line has an explicit “data philosophy,” which is rooted in the goal of using “data to improve outcomes for people in crisis.”
This focus on outcomes for the people it serves has helped settle a number of product-design questions. One early insight from Crisis Text Line's data was that three percent of texters were consuming 34 percent of crisis counselors' time. These repeat texters were using the service as a substitute for longer-term therapy, directing resources away from people in acute crisis. To prioritize people in crisis while respecting every texter's experience, the crisis counselors' interface now flags these high-use cases as "chronic" and displays an individual action plan for quickly and gently directing them elsewhere, such as to a therapist they can see in person. Follow-up feedback shows that these chronic users now rate their experience on the platform even higher, because their long-term needs are being addressed.
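The detection behind that flag can start simply. The sketch below, with invented data and an invented threshold (Crisis Text Line has not published its criteria), flags any texter whose share of total counselor time far exceeds the norm:

```python
from collections import defaultdict

# Hypothetical usage log: (texter_id, minutes of counselor time)
usage_log = [
    ("a1", 40), ("a1", 55), ("a1", 60), ("a1", 45),
    ("b2", 20), ("c3", 15), ("d4", 25), ("e5", 10),
]

def flag_chronic(log, share_threshold=0.25):
    """Flag texters whose share of total counselor time exceeds the
    threshold, so the interface can surface a referral plan instead
    of another open-ended conversation. The threshold is invented."""
    per_texter = defaultdict(float)
    for texter, minutes in log:
        per_texter[texter] += minutes
    total = sum(per_texter.values())
    return {t for t, m in per_texter.items() if m / total > share_threshold}

for texter in flag_chronic(usage_log):
    print(f"{texter}: flagged as chronic; show individual action plan")
```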
Having begun with the needs of the people who use the service, Crisis Text Line also seeks to have an impact at the level of communities and systems.
“In sharing data, our first goal is protecting the privacy and well-being of our users,” says Filbin. “Given that, can we make this data available to create more good in the world?”
To navigate this question, Crisis Text Line engages a diverse group of advisors, and it works with more than 200 nonprofit and institutional partners. With support from the Robert Wood Johnson Foundation, Crisis Text Line also assembled a Data Ethics Committee, which spent a year developing guidelines on how to manage research on its dataset responsibly. As a result, Crisis Text Line is now working with a handful of carefully selected research teams.
There are a number of organizations developing ethical guidance on the use of data and AI. Among them is Data & Society, whose founder danah boyd (a Crisis Text Line board member) points to the larger questions about social responsibility and resource allocation that arise when new patterns are uncovered by artificial intelligence. Suggesting an area ripe for work by social-change organizations, policy-makers, and funders, boyd writes, “Capitalizing on the benefits of technology will require serious investment and a deep commitment to improving the quality of social services.” If we seek feedback, we should be ready to respond to it, on every level.
Listen, Pivot, Iterate, Improve
When it comes to helping people in crisis, greater efficiency can be a matter of life or death. By embedding feedback, data collection, and machine learning in its process, Crisis Text Line continually adjusts to meet its goals of serving people more efficiently and effectively.
Using an AI-based system that works more quickly and accurately than human screeners and catches non-intuitive patterns, Crisis Text Line identifies which messages represent imminent danger. For example, a text containing the word “ibuprofen” is 16 times more likely to be high risk than one containing the word “suicide.” The crying-face emoji is 10 times more indicative of risk than “suicide.” The system, which uses a set of pattern-seeking algorithms known as deep neural networks, enables Crisis Text Line to connect 94 percent of people with the highest risk to a crisis counselor in under five minutes.
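Crisis Text Line's production system uses deep neural networks, but the intuition behind word-level risk signals can be illustrated with something far simpler. The toy scorer below uses invented data and a crude per-token likelihood ratio to estimate how much more often each word appears in high-risk messages, then orders the queue by each message's riskiest word; it illustrates the idea, not the organization's model.

```python
from collections import Counter

# Invented labeled data: (message, was_high_risk). In reality this
# would be millions of de-identified conversation outcomes.
history = [
    ("i took a bottle of ibuprofen", True),
    ("thinking about suicide a lot lately", True),
    ("my parents keep fighting", False),
    ("i failed my exam and feel awful", False),
]

def token_risk_ratios(data, smoothing=1.0):
    """Crude likelihood ratio per token: how much more often the
    token appears in high-risk messages than the base rate predicts."""
    in_high, overall = Counter(), Counter()
    base_rate = sum(1 for _, risk in data if risk) / len(data)
    for text, risk in data:
        for tok in set(text.lower().split()):
            overall[tok] += 1
            if risk:
                in_high[tok] += 1
    return {
        tok: ((in_high[tok] + smoothing) / (overall[tok] + 2 * smoothing))
        / base_rate
        for tok in overall
    }

def triage_score(text, ratios):
    """Score a message by its single riskiest token so the queue
    surfaces likely-imminent-danger texters first."""
    return max((ratios.get(tok, 1.0) for tok in text.lower().split()),
               default=1.0)

ratios = token_risk_ratios(history)
queue = ["school is stressing me out", "i just swallowed ibuprofen"]
for msg in sorted(queue, key=lambda m: triage_score(m, ratios), reverse=True):
    print(round(triage_score(msg, ratios), 2), msg)
```

Even this crude version captures why "ibuprofen" can outrank "suicide" as a signal: what matters is not how alarming a word sounds, but how strongly it correlates with conversations that turned out to be high risk.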
As an overall approach, Crisis Text Line is ready to try new things, measure their success, and iterate further or change course based on what they hear. Throughout, their process is rooted in a sense of responsibility to use data to its fullest potential.
“Now that we are in the digital era, we should be taking advantage of it,” says Filbin. “We should be able to learn from the experiences we have. We need to be always learning how to have a better next conversation.”
As a resource, Filbin points to DataKind, an organization founded by Jake Porway, a former New York Times data scientist, to facilitate collaboration between data scientists and social sector organizations. In the search to translate the potential of data analysis into tangible progress, DataKind has developed cross-sector, mission-driven collaborations—this process is laid out for others to learn from in the DataKind Blueprint.
Artificial intelligence has the power to make voices visible as points of evidence—allowing us to see needs and opportunities we may have missed, and to quickly adjust approaches to better serve those we seek to help. As artificial intelligence increasingly influences critical functions like healthcare, the criminal justice system, and how our cities work, the social sector can stand up for centering attention on diverse communities whose voices have been least heard—but whose knowledge can save lives.