
Artificial intelligence (AI) has come to the nonprofit sector. Whether one views the technology as a force for good or something more sinister, the use of AI by nonprofits is sure to grow. More than half of nonprofits are using AI in some capacity, according to a recent report by the Center for Effective Philanthropy. Yet less than 10 percent of nonprofits have any kind of policy governing the use of AI.

This is not just a casual oversight. The lack of rules and guardrails in place can expose nonprofits—and the vulnerable communities they may serve—to some of the unsavory aspects of AI, like algorithmic bias and privacy violations, not to mention legal risk. These hazards may be heightened for nonprofits that can’t afford to build customized AI systems and instead rely on off-the-shelf AI platforms like ChatGPT.

By crafting AI policies, nonprofits can not only avoid such risks but also leverage the technology in ethically and socially conscious ways that enhance their missions.

“AI is, at its core, changing how we interact with the world and how work is being done,” said Addie Achan, the director of AI programs at Fast Forward, an organization that supports tech nonprofits. “So, employees are likely to adopt it in some context, and it’s better…for an organization to define the rules and expectations around that use rather than have people use it and inadvertently cause more harm—or just completely ban it and leave a big opportunity to gain efficiency.”

Fast Forward develops technology specifically for nonprofits and runs an incubator program. Long before ChatGPT made AI an omnipresent buzzword, Fast Forward offered guidance to nonprofits in deploying the technology.

A nonprofit itself, Fast Forward recently took on the task of establishing its own AI policy. Recognizing that other nonprofits may be in the same boat, the organization decided to develop a tool to help nonprofits draft their own AI policies, the Nonprofit AI Policy Builder. It’s completely free, and—walking the talk—Fast Forward built it on top of an AI large language model (LLM), making it capable of engaging in conversations and generating highly customized policies.

Generating Policies—And Dialogue—Around AI

Anyone who’s used ChatGPT or other similar tools will find the policy builder’s interface familiar. The chatbot begins by asking the user about the basics: “Share whatever you’re comfortable with, but especially your organization’s name and mission.”

The chatbot offers users a light, standard, or advanced option in crafting AI policies. It will then guide the user through five steps, with the responses—and the ultimate policy it generates—guided by an organization’s specific plans. Some organizations might intend to use AI to “interact with beneficiaries,” as Achan put it, or they might just want to use it for internal purposes.

At any point along the way, the user can ask about terminology or concepts they might not understand. Nonprofit teams do not need to have ready answers for some of the questions; generating discussion and debate is part of the idea.

“I think it’s a helpful tool for organizations to get started in part because it provides enough policy, but also because I think it fosters a conversation that organizations can have around how to engage around the technology,” said Kevin Barenblat, Fast Forward’s cofounder and president.

A Wider View of AI’s Potential

Nonprofits are currently using AI predominantly for accounting and other financial tasks, according to a recent survey by BDO. Establishing an AI policy, however, can give organizations a wider view of the technology's potential, beyond administrative functions such as drafting emails and summarizing reports. A new generation of nonprofits is putting AI at the center of its mission.

Climate Trace, for example, uses real-time satellite data to provide highly granular information on emissions, even pinpointing specific sources of pollution. Another AI-powered nonprofit, or APN, is Digital Green, which uses AI to give farmers around the world, especially in developing regions, real-time advice. A farmer could, for instance, send a photo of an insect to determine whether it poses a threat to crops and, if so, receive suggestions for dealing with the pest.

By first taking care to address potential ethical and legal hazards, nonprofits can use AI to maximize social benefits—a strong counterpoint to its prevailing application in the corporate sector of supercharging efficiency and profits.

“I feel like the social sector always eats last at the table when it comes to technology and innovation,” Barenblat said. “So, we are hopeful that as many organizations as possible will find the right ways to use these tools. It’d be great if, as these tools develop, they can also help the people who need it most.”