
For decades, nonprofits have been asked to address increasingly complex social problems while operating with chronic underinvestment in infrastructure, staffing, and administrative capacity. New technologies are routinely introduced as solutions to this imbalance. Most promise transformation. Few meaningfully change the conditions under which nonprofit work is done.
Artificial intelligence marks a different kind of moment because it alters the economics of capacity itself. AI is already being adopted across the nonprofit sector, and the urgent question now is whether nonprofits themselves will shape how AI is governed, funded, and constrained, or whether those decisions will be made primarily by vendors, markets, and states, with nonprofits holding little influence.
I spent much of my career examining nonprofit governance and power, and like many longtime observers, I approached AI with skepticism shaped by decades of watching “transformative” tools cycle through the sector. Sustained engagement over the past year with technical, policy, and governance debates has shifted my view. AI represents a structural shift not only because it promises efficiency, but because it may either redistribute or further concentrate analytic and administrative capacity in a sector already stretched thin.
So will nonprofits help govern that shift while shared frameworks are still being negotiated, or will they encounter AI as yet another unfunded mandate imposed from outside?
The Governance Gap
Recent research underscores how unprepared the sector is for this shift. While most nonprofits now report using AI in at least one context, only a small minority have formal policies governing its use. This is not a minor compliance lapse. It reflects a broader failure to treat AI as a governance issue rather than a collection of tools.
Some large international and humanitarian organizations are developing internal AI principles grounded in accountability, human rights, and mission alignment. These efforts acknowledge a basic reality: AI systems are not neutral. They encode values, shape incentives, and redistribute power by default. Smaller organizations, however, often lack the capacity to develop similar frameworks, even as they are increasingly required to interact with AI-enabled funders, vendors, and government agencies.
And organizational policies, while necessary, are not sufficient. The larger risk is that nonprofits remain largely absent from the public and policy debates shaping how AI is regulated and deployed at scale. When that happens, efficiency and control tend to define the rules. Nonprofit values—equity, community accountability, human judgment—are incorporated, if at all, after decisions are already made.
Federal–State Conflict and Operational Risk
This governance gap is becoming more consequential as federal and state approaches to AI regulation diverge. Federal efforts to preempt state-level AI laws have introduced uncertainty for nonprofits operating across jurisdictions, even as states continue to experiment with protections related to algorithmic discrimination, transparency, and government use.
For nonprofits that rely on public funding or operate under government contracts, this conflict is not abstract. It affects eligibility, procurement requirements, compliance obligations, and liability exposure. Uniform national standards may reduce complexity, but they also tend to reflect the priorities of the most powerful actors involved in their design. Nonprofits—particularly community-based organizations—are rarely among them.
Philanthropy, AI, and Power
Philanthropic investment in AI tools for frontline workers presents both opportunity and risk. Large initiatives promise to reduce administrative burden and support professionals managing overwhelming caseloads. These goals align with long-standing sector needs. The governance structures behind these efforts, however, are often unsettled or opaque.
When initiatives blend grantmaking with equity investments and revenue-based financing, market incentives inevitably influence what gets built, for whom, and under what constraints. Consultation with frontline workers is frequently emphasized, but consultation is not the same as shared decision-making. Without governance mechanisms that meaningfully distribute power, these efforts risk reinforcing existing hierarchies rather than mitigating them.
This is not primarily a question of intent. Many funders act in good faith. The issue is structural: who sets priorities, who defines success, and who is accountable when tools fail or cause harm.
When AI Is Imposed Rather Than Chosen
Used thoughtfully, AI could help nonprofits reclaim time and analytic capacity now consumed by compliance, reporting, and fragmented systems. It could strengthen the ability of smaller organizations to participate in policy development and narrative formation.
When AI is imposed through funding or regulatory mandates, it risks becoming another layer of surveillance and control. Failures of automated eligibility and benefits systems have already shown how errors in algorithmic decision-making disproportionately harm people with the least ability to contest outcomes. Nonprofits are often left to manage these harms without authority over the systems that caused them.
As government agencies accelerate AI adoption through centralized vendor relationships, accountability becomes diffuse. When problems arise, responsibility is difficult to trace. Vendors point to implementation choices. Agencies point to tools. Policymakers point to efficiency goals. Communities bear the consequences.
What Nonprofits Can Do Now
The window for shaping AI governance is narrowing, but it remains open. Nonprofits can take several practical steps that treat AI not as a technical upgrade, but as a governance responsibility:
- Develop clear AI use policies through processes that include staff and community perspectives, not only board oversight. Templates can help, but the process matters as much as the document. Policy development should surface tradeoffs, risks, and red lines, not just permissible uses. Governance practices should reflect the transparency and accountability nonprofits expect from funders, vendors, and public agencies.
- Establish shared principles for engaging with AI-enabled government and philanthropic systems. These should include requirements for meaningful human review, clear appeal mechanisms, and the ability to contest or explain automated decisions that affect clients. Where possible, organizations should push for these principles to be embedded in contracts, memoranda of understanding, and procurement processes. AI governance can often be integrated into existing data privacy, ethics, and client rights frameworks rather than treated as a separate domain.
- Build internal capacity to identify and document patterns of harm early—such as unexplained denials, delays, or demographic disparities linked to automated systems. Frontline staff are often the first to see these failures, but they need pathways to escalate concerns beyond individual casework. Coordinating responses with legal aid organizations, advocacy groups, and oversight bodies can shift understanding of these issues from isolated problems to systemic ones.
- Recognize that governance also includes the choice not to adopt. Declining to implement AI tools that undermine mission alignment, client trust, or accountability is itself a form of institutional agency. Documenting these decisions and sharing them within networks and associations helps challenge the assumption that AI adoption is inevitable and creates space for collective standards to emerge.
The Choice Ahead
The nonprofit sector has experienced firsthand how technological and policy solutions developed without meaningful community input tend to reproduce the power relationships they claim to disrupt. AI is no exception.
With clear governance and accountability, AI could help address long-standing capacity constraints. Without them, it risks becoming another extraction mechanism—concentrating power while dispersing risk. The choice remains available, but not indefinitely.
