SameSame Collective is a nonprofit that provides direct mental health advice to queer youth in parts of the world where there is limited public acceptance and community support. Founded in 2021, the organization built automated chat agents on WhatsApp, the popular messaging application owned by Meta.

When ChatGPT and other large language models (LLMs) came on the scene about a year later, Jonathan McKay, one of SameSame’s cofounders, figured it was game over for SameSame’s comparatively primitive chat service. The organization ran workshops to test how users responded to its original offering versus an AI-powered chatbot.

“I fully went into those workshops expecting that they would say, ‘We much prefer ChatGPT and Meta’s AI,’ and that we would have to conclude we don’t have a reason to exist,” McKay told NPQ.

And yet, quite the opposite occurred.

“While they liked the user experience of ChatGPT and Meta’s AI more, they scored our service higher on trustworthiness and local relevance. And so, we left those workshops thinking, ‘OK, maybe we do deserve to continue to exist.’ But how do we leverage what LLMs make available to us to make our service even better?”

This is a question all organizations might do well to ask themselves amid the wave of AI products and services sweeping the nonprofit sector. A counterintuitive consensus is emerging from research and real-world experiences within nonprofits: People don’t like AI that tries too hard to be human—a finding that may carry particularly important consequences for organizations whose missions are based on conscientiousness and compassion.

The Research on Chatbots

For all the billions of dollars being invested in making AI ever more personable, a growing body of research suggests that people are increasingly disenchanted with human-like bots.

At George Mason University’s Costello College of Business, researchers built two chatbots for a Minneapolis nonprofit as part of an experiment. One was an “anthropomorphized” version, with a name and a conversational style; the other was more direct and robotic. The goal of the chat was to steer people toward volunteering or donating. The study found that the more human-like the bot, the less likely people were to stick around and engage.

“In a very personal context, a nonprofit where people actually donate out of free will, or it’s very personal to them, like some cause in society…making [a bot] more human-like may backfire,” Siddharth Bhattacharya, a George Mason professor who coauthored the study, told NPQ.

Another recent study published in Nature Human Behaviour found that people vastly preferred human expressions of empathy to those generated by AI.

“Human-attributed responses were rated as more empathic and supportive, and elicited more positive and fewer negative emotions, than AI-attributed ones,” the study noted. “Moreover, participants’ own uninstructed belief that AI had aided the human-attributed responses reduced perceived empathy and support.”

These findings suggest that recent stories about people falling in love with AI chatbots or being steered toward destructive behavior are outliers. People seem more likely to respond with doubt and discomfort to AI bots that feign human emotion.

AI, Chatbots, and Nonprofits

These findings in many ways confirm what leaders at tech-based nonprofits have been seeing up close as they integrate AI into their services.

Empower Work helps those navigating career challenges by connecting them with counselors. Jaime-Alexis Fowler, the organization’s CEO, first noticed the adverse reaction clients had to the prospect of chatting with bots back in 2018.

“The peer counselor would be like, ‘No, I’m a real person,’” Fowler told NPQ. “They did not want to talk to an AI bot.”

Fowler noted that AI bots have an empathy gap: “As sophisticated or advanced as it could get, [AI] is never going to have been in a situation where it’s also lost its job, or it’s not able to put food on the table for its kids.”

Fowler explained there are three forms of empathy: cognitive, affective, and motivational. “Right now, AI can do the cognitive one,” she said. “But affective empathy and motivational empathy are two areas that are really important when you’re navigating something extremely difficult.”

McKay has spent a lot of time trying to understand why young people, presumably those most comfortable interacting with digital personas, would be turned off by humanized chatbots.

“I suspect what’s happening with a lot of LLMs is that people are trying to infuse them with some kind of brand or style…. And it’s like, ‘That’s not what I’m here for. I’m here to get some value. And if every message doesn’t give me value, if I have to wade through all of this other fluff, I don’t want to do that. That’s more work for me.’”

AI as a Tool, Not a Personality 

For many nonprofits, whether to use AI may be a moot point: the technology is now too ubiquitous to ignore. More than half of nonprofits report using it in some capacity, a substantial increase from just a few years ago.

An emerging consensus among nonprofit leaders is that AI use should be accompanied by close human involvement and an ethical framework, particularly when using it to interact with an organization’s beneficiaries.

“Nonprofits should regularly request community feedback on how the user is experiencing the AI and adapt their product to take feedback into account,” Kevin Barenblat, the cofounder and president of the nonprofit tech incubator Fast Forward, said in a statement. To help nonprofits develop sound guidelines around AI use, Fast Forward offers an AI policy builder tool.

In the realms of education and counseling, AI’s potential to help nonprofits expand their reach to underserved populations makes the technology at once singularly promising and fraught. Roughly 65 percent of rural counties in the United States have no practicing psychiatrist. In low-income countries, fewer than 10 people per 100,000 have access to mental health treatment. It is estimated that fewer than 2 percent of US grade-school students receive high-quality personalized tutoring, a shortage that particularly impacts lower-income areas of the country. But replacing human interactions with AI chatbots presents many challenges.

Aly Murray founded UPchieve to help bridge these gaps. Technology has been at the forefront of its efforts to expand tutoring services to lower-income communities, offering a platform that allows students to chat in real time with human tutors.

UPchieve recently gave students the option of having sessions with an AI tutor instead of a human one. It found that just 20 percent opted for the AI tutor, and those who did rarely returned to it. Giving the AI tutor an “overwhelmingly positive” personality didn’t help; it may have made the bot even less popular.

“Students didn’t like that,” Murray told NPQ. “In sessions where the gap in sentiment was large between the student and the tutor—whether it was human or otherwise—students’ average sentiment actually decreased over that session.”

Instead of using AI as a tutor, UPchieve decided to use it to make human-conducted tutoring sessions more effective and insightful.

“We’re able to summarize what happens in the tutoring sessions for students and teachers. There’s no way teachers would have had time to read through all of the tutoring sessions that their students were having,” Murray said.

Similarly, Empower Work did not abandon its work on AI agents but shifted its focus from clients to counselors. An AI assistant can offer insights and suggestions to counselors during real-time sessions.

“Now the AI assistant just seamlessly surfaces those based on the conversation, and the peer counselor can use their discretion [on whether to use them or not],” Fowler said.

AI tends to stir intense feelings in the nonprofit sector, with some viewing it as a panacea and others as a scourge to be avoided at all costs. The experience of tech-forward nonprofits suggests a middle ground: using AI as a tool that enhances their missions, while being wary of how AI interacts with their service communities and the wider public. This approach, after all, plays to the strengths of the nonprofit sector.

“I think what’s unique and special about so many nonprofits, especially so many service organizations in the nonprofit space, is that they have this incredible capacity to deeply understand what’s happening with the communities they serve and address that in a deeply human way,” Fowler said.