
In 1974, economist and metalworker Harry Braverman wrote Labor and Monopoly Capital, which showed how technology under capitalism shifts knowledge from workers to management—not because automation demands it but because control-seeking managers and capitalists do. Just over half a century later, his insight remains urgent: An invention offers options, but power often determines which are pursued.
For nonprofits facing the rise of artificial intelligence (AI), this warning carries weight. Large language models (LLMs) that could amplify community wisdom and help workers reclaim integrated mission-driven practice are instead often extracting experiential knowledge from frontline staff, centralizing decision-making in management-controlled systems, and replacing contextual judgment with standardized processing.
The question isn’t whether AI will transform nonprofit work; it’s whether workers and communities can ensure that the new technology serves mission and relationships rather than efficiency metrics that undermine the very trust and judgment that make the pursuit of social justice possible.
The Knowledge Transfer Problem
A key figure in Braverman’s book was Frederick Winslow Taylor, a mechanical engineer and founder of scientific management in the early 1900s. Taylor sought to control labor, down to prescribing each worker’s movements, in the name of efficiency. Beyond that, he called for employers to collect craft knowledge from workers and shift it to management. Workers’ jobs were divided into decontextualized, simplified tasks, while management hoarded technical know-how once held by skilled tradespeople.
Today, when nonprofits implement AI without protecting workers’ judgment and autonomy, they facilitate a similar transfer of power. The tacit understanding of experienced staff—knowing which families need outreach, when silence signals distrust, and which community leaders bridge cultural gaps—is extracted into databases and algorithms. Staff become processors of AI-generated recommendations rather than strategic decision-makers drawing on community relationships and expertise.
This has real consequences. Research from the Harvard Business School in 2023 showed that while consultants using AI completed tasks 25 percent faster with 40 percent higher quality, the gains mainly benefited below-average performers. AI can help less experienced staff achieve competency more quickly, but only when guided by those who understand the community, mission, and organizational culture. Without that guidance, those “using AI were 19 percentage points less likely to produce correct solutions.”
The Efficiency Trap
As people in the nonprofit sector know, funders have for years demanded increasingly quantifiable efficiency metrics (such as the number of clients served or cases closed) from service providers. This trend mirrors early Taylorist principles, under which “a fair day’s work” was defined in output units and the labor process itself was controlled by management.
The “AI efficiency trap” plays out in familiar ways: Time savings often lead not to relief, but to higher expectations. Workers may feel more productive yet overwhelmed, as efficiency gains are absorbed into rising demands instead of reducing workloads. In nonprofits, if AI is used solely to expedite routine tasks, it can exacerbate burnout and diminish time for relationship building or advocacy—the work that drives lasting change.
A case manager using AI for intake summaries might complete assessments more quickly, but will they have time to fully listen to clients, or just handle more cases with less depth? When productivity is measured by speed over impact, organizations risk judgment displacement—undervaluing quality for the sake of speed.
The Craft Knowledge of Community Work
Understanding what AI cannot replace clarifies where workers must maintain control. The most valuable intelligence in nonprofit work involves, for instance, recognizing when housing crises mirror previous downturns, identifying how policy changes in one jurisdiction affect advocacy elsewhere, and understanding how trauma manifests across cultural contexts.
An experienced practitioner can spot warning signs that funder partnerships are souring, early indicators that coalitions are fracturing, and subtle shifts that suggest policy opportunities. They can work beyond a word-for-word translation to understand how questions may be perceived in a different language, what information families withhold, or how to build trust. This tacit knowledge, gained from experience, judgment, and intuition, is more valuable than sophisticated tech in situations that require nuance, creative thinking, and empathy.
This craft knowledge—understanding community dynamics, recognizing when patterns shift, calibrating trust-building approaches—cannot be fully codified without losing what makes it valuable.
For nonprofits serving marginalized communities, relational knowledge forms an organizational infrastructure built through years of presence, follow-through, and earned trust. AI can maintain records and flag patterns, but it cannot replicate the lived experience of being trusted by communities that have legitimate reasons to distrust institutions.
Reclaiming Strategic Control
How might AI be used strategically? Here are some key skills that, employed wisely, can help ensure that AI serves nonprofits rather than the other way around:
- Mission-Aligned Sensemaking
Use AI to process research and policy updates while applying nonprofit expertise to interpret significance for specific populations served.
- Values-Based Filtering
Employ AI’s processing power for tasks that are amenable to automation, such as donor segmentation, while applying human judgment to ensure that resource allocation and relationship building align with organizational mission and community priorities.
- Community Context Integration
Pair AI outputs with organizational knowledge and cultural understanding to craft strategies.
- Equity-Centered Quality Assurance
Use human experience to evaluate AI outputs through an equity lens to ensure that the results do not perpetuate biases or harm marginalized groups.
These skills reflect Braverman’s view of workers becoming “masters of the technology of their process,” thereby retaining strategic control so that technology serves rather than undermines the mission.
Implementation Frameworks in Leadership
The most critical leadership choice here is whether to use AI for worker empowerment or knowledge extraction. This means resisting funder pressure to measure productivity by task-completion rates and instead evaluating whether AI enhances workers’ strategic capacity and community relationships.
Additional steps leaders can take are:
- Invest in Bias-Aware AI Governance
Embed community voices and worker expertise into technology decisions. Ensure staff serving marginalized communities have formal roles in selecting, implementing, and evaluating AI tools. Develop mechanisms that enable workers to override AI recommendations when their judgment suggests potential harm to the community. Measure override frequency as an indicator of whether the labor process remains under worker control.
- Position Experienced Staff as Strategy Guides
Leverage senior staff’s ability to balance priorities and make evidence-based decisions about community needs. Frame their role as determining when and how AI adds value, not just learning to execute AI-defined procedures faster.
- Develop New Productivity Measurements
Focus on mission multiplication—how effectively experienced professionals use AI to scale community knowledge and advocacy impact.
Steps for Frontline Workers
Craft knowledge represents the irreplaceable center of mission-driven work. In considering AI adoption, rather than focusing on tool mastery, organizations should decide which mission challenges warrant AI assistance at all.
Other ways that nonprofit workers should respond to AI include:
- Trust Quality Control Instincts
When AI recommendations ignore cultural context, overlook marginalized populations, or prioritize efficiency over relationship building, the organization’s expertise should take precedence over the algorithm.
- Be Strategic About Knowledge Sharing
Some community knowledge can be documented to help less experienced colleagues. Other knowledge protects against worker replacement. Documenting contextual judgment creates organizational learning while maintaining collective worker authority over knowledge application.
- Demand Decision-Making Authority
AI recommendations must be evaluated before being executed. Otherwise, the agency becomes a processor of algorithmic decisions rather than a strategic decision-maker, which is exactly the kind of knowledge transfer Braverman warned against.
Integration for Teams
Younger staff who are fluent in AI tools can help set up systems. Experienced directors can contribute contextual judgment about mission alignment, help identify partnerships that risk mission drift, and know when to prioritize relationship cultivation over proposal volume.
Together, they can develop strategies that combine technical expertise with strategic insight. This prevents teams from dismissing valuable technological capabilities while also ensuring that tools are not implemented without regard for organizational culture or community context. Crucially, it maintains collective worker control over the labor process rather than ceding authority to management-controlled AI systems.
And instead of breaking community work into disconnected tasks, organizations can use AI to help workers reclaim integrated practice where research, relationship-building, advocacy, and service delivery flow from a unified strategic understanding. The potential exists, Mark Allison noted in Jacobin, to reunite “in automated form, many of the skills and bodies of knowledge that the capitalist division of labor has pulverized.”
How Nonprofits Can Use AI to Build Collective Power
For nonprofits to employ AI in ways that enhance, rather than replace, human capabilities, sectorwide solidarity and sharing are required. Specific steps may include the following:
- Adopt Bias-Aware AI Governance Standards
Organizations need to share lessons about when AI tools have perpetuated inequities and how they caught and corrected these patterns.
- Resist Funder Pressure
Collectively, nonprofits can educate funders on why speed-based productivity metrics overlook the value nonprofits create through strategic judgment and community relationships.
- Create Peer Learning Networks
Nonprofits can share knowledge about AI tools that genuinely enhance mission-driven work versus those that create efficiency without impact. It is also important to develop solidarity around maintaining worker decision-making authority.
- Build Coalitions with Labor
The challenges nonprofits face echo those in other sectors where AI threatens to displace workers and transfer craft knowledge to management. Learning from broader labor strategies can strengthen nonprofit resistance.
Safeguarding Human Judgment and Community Impact
Braverman taught that technology doesn’t determine outcomes—power relations do. Under capitalism, the drive “to enlarge and perfect machinery…and to diminish the worker” treats automation as an opportunity to transfer knowledge from labor to management. For nonprofits, this manifests as pressure for efficiency metrics and “scalable” solutions that extract craft knowledge into management-controlled systems.
But Braverman also detected technology’s capacity to reunite fragmented labor processes on a higher plane. For nonprofits, AI could help teams reclaim unity of mission-driven practice—integrating research, strategy, relationships, and advocacy—rather than fragmenting them into disconnected tasks.
Securing this future requires fighting for it. If we want AI to improve rather than degrade mission-driven work, we must carry the battle into the labor process itself: using technology to enhance worker judgment and community relationships while ensuring communities maintain their voice, dignity, and agency.
The goal is not just better AI practices, but a model for how mission-driven work can resist the Taylorist impulse, preserving the integration of heart, mind, and community that makes social change possible.
