The latest mea culpa from the self-appointed reformers of us unwashed masses of nonprofits comes from Ken Berger, formerly of Charity Navigator, writing with Caroline Fiennes. In their article in Alliance Magazine, they admit that the nonprofit “impact revolution” may perhaps have been…misdirected, let’s say.
I admit to having had a longstanding argument with Berger about why, from his bully pulpit at Charity Navigator, he kept saying that donors were increasingly clamoring for information proving impact even when the research (that’s right – the research) in no way supported that claim. So maybe my perception of this confession is colored by that history. Regarding what the research does say about “What Donors Want,” please see this article by Cynthia Gibson and William Dietel.
Berger and Fiennes write:
The revolution was based on the premise that it would be a great idea to identify the good ones and get people to fund or implement those at the expense of the weaker ones. In other words, we would create a more rational nonprofit sector in which funds are allocated based on impact. But the “whole impact thing” went wrong because we asked the nonprofits themselves to assess their own impact.
Is that why it all went wrong?
They claim there were two major problems with asking nonprofits to measure their own impact. First, nonprofits may get nervous about exposing negative findings and do marketing instead to ensure their own funding. (The authors admit they themselves have done so, and we believe them.) Second, those deep longitudinal studies can be pricey to undertake. What’s never said straight out: “We realize our prescriptions for this hardworking sector would have ended up favoring the well-capitalized and that there would be a necessary winnowing in all of that—a marginalization of the less well funded, a starving of the ordinary heroes that inhabit every community and the anointing of stars.”
So, according to them, what research should nonprofits do?
First, nonprofits should talk to their intended beneficiaries about what they need, what they’re getting, and how it can be improved. And heed what they hear. And second, they can mine their data intelligently, as some already do. Most nonprofits are oversubscribed, and historical data may show which types of beneficiary respond best to their intervention, which they can use to target their work to maximize its effect.
Sounds a lot like what many of the best nonprofits already do: ask their communities. Isn’t that what these impact revolutionaries were trying to talk us out of? It’s messy, human, infused with opinions, not at all rational and logic-model-driven, as Bill Schambra is so fond of pointing out.
And, oh, by the way, maybe you can pick up research from elsewhere in your field to help guide your own work. (In fact, many nonprofits are doing an ever better job of this as research generally becomes more accessible.)
Nonprofits and donors should use research into effectiveness to inform their decisions; but encouraging every nonprofit to produce that research and to build their own unique performance management system was a terrible idea. A much better future lies in moving responsibility for finding research and building tools to learn and adapt to independent specialists. In hindsight, this should have been obvious ages ago. In our humble and now rather better-informed opinion, our sector’s effectiveness could be transformed by finding and using reliable evidence in new ways. The impact revolution should change course.
You had us almost convinced…until we got to the independent specialists. Isn’t that how we got here in the first place? I am a great admirer and respecter of research done well. I agree we need to encourage, fund, and circulate research at any number of levels, including in nonprofits themselves and by the communities associated with them, and well-funded intermediaries can help identify research questions. What we don’t need, in my not-so-humble opinion, is a new group of independent specialists to tell us what the research should be, what it says, and what it should mean.
What these confessions always miss is that once you have said you are sorry, you probably should not immediately lay out the next course of action you want everyone to follow.