Points of Contention: Ken Berger Defends Impact Measurement


Editor’s Note: Two weeks ago we published an op-ed by Steven Lawry, Senior Research Fellow at the Hauser Center for Nonprofit Organizations at Harvard University, titled “When Too Much Rigor Leads to Rigor Mortis: Valuing Experience, Judgment and Intuition in Nonprofit Management.” In that article, Lawry questions the growing trend among funders of requiring an increasing amount of assessment and measurement from their grantees.

In the following letter, Ken Berger, president and CEO of Charity Navigator, and author Robert Penna take exception to many of Lawry’s assertions. Berger and Penna believe that “charities owe both their donors and those they serve reliable demonstrations of positive impact.”

At a time when the state of public discourse can be experienced as a blast of simultaneous echo chambers, we offer this space to surface strongly held – but thoughtfully articulated – disagreements. In a sector that tends to shy away from self-reflection, we hope a safe place to respectfully disagree will help us learn from each other and grow.


We must take rigorous exception to Steve Lawry’s column, “When Too Much Rigor Leads to Rigor Mortis.” We believe that, at best, it is shortsighted and, at worst, may be used by some as a dangerous justification for backsliding on the gains being made in the all-important area of performance management and measurement of nonprofit organizations (especially public charities).

It is unfortunate that Lawry did not make clear at the outset that his focus was on international efforts sponsored by Western social investors for application in other parts of the world. Lawry’s contention that large Western donors often lack knowledge of the local context in other countries may be correct. Whether through simple ignorance, cultural chauvinism, misguided faith in their own omnipotence, or for other reasons, instances probably can be found where Western donors have incorrectly prescribed an intervention based upon faulty data or a misreading of the information at hand. This is a situation that indeed calls for greater sensitivity and trust in local partners. However, it has been our experience that at least some larger international NGOs are much further along in their development and utilization of impact measurement tools than many other parts of the nonprofit sector. Therefore, even in this instance we remain unconvinced that the problem is as pervasive as Lawry characterizes it.

More troubling, however, is the fact that this is not where Mr. Lawry began, but rather with a criticism of the very notion of sound, verifiable impact assessment. “Wise nonprofit leaders,” Lawry wrote, “know that the problems they work on are not susceptible to simple measurement.” This opinion marks a very perilous road for the nonprofit sector to travel. Good measurement is what we all should be working toward. Measurement tools such as primary constituent feedback, volunteer reviews, expert reviews, and independent in-depth research and analysis all have value and provide different perspectives that can lead to a rich, multifaceted view of a charity’s performance. Some are simple measurement tools and others more complex. They vary in rigor, but all have a place in the spectrum of tools available for looking more comprehensively at how a charity is performing. It is disheartening, therefore, to hear Lawry claim “rigor mortis” when the “body” of tools is just beginning to catch hold and come alive!

Since it first emerged as a force in the nonprofit sector in the early 1990s, the outcomes movement has been dogged by numerous counter-claims that are, at their heart, excuses for not recognizing what many experts have been saying for some time: namely, that nonprofits have a critical obligation to produce nothing less than measurable, verifiable results on behalf of those they exist to serve.  But rather than accept this idea as self-evident, apologists for poor performance have variously argued that the work was too complex, the issues too intractable, the clients too burdened by multiple issues, and the cost of measurement too high for results measures to be applied to the efforts of nonprofits.  Instead, these voices continue to fall back on traditional indicators of activity to show how much they care and how busy they are.

Lawry cites certain “powerful donors” who have concluded that nonprofits do not, overall, make adequate use of impact assessment tools. This belated realization is not something to be feared or pushed back against. Rather, it should be a clarion call to all charities, those operating both here and abroad, to stop hiding behind excuses and finally show donors – individual, institutional, and governmental – precisely what they are getting for their money, and to demonstrate unequivocally what the American people are getting in return for the tax-exempt status these charities enjoy. By suggesting that impact measures will stifle creativity, Lawry merely offers yet another excuse to those who want it taken on faith that they are accomplishing anything.

We and many of our colleagues believe, in contrast, that this approach represents the thinking of the past.  The future must include offering informed donors information regarding not just the work a charity is doing, but the results it is achieving.  There are various efforts currently underway that give us confidence that this future will become a reality and we will not return to the dark days of excuses and obfuscation.

Lawry is correct when he writes that formal impact measures are often hard to come by. He is correct when he says that the good judgment of experienced managers is absolutely necessary in assessing impacts.  But the tools for helping those managers identify, work toward, and assess meaningful, sustainable, and verifiable impact targets do exist.  Rather than offering yet one more rationale for ignoring these tools, Lawry would better serve the sector by joining those of us who believe that charities owe both their donors and those they serve reliable demonstrations of positive impact, by helping us find ways of making these tools available to more nonprofits, and by welcoming the fully informed donor as the best donor any charity could have.

Ken Berger is president & CEO of Charity Navigator. Dr. Robert Penna is the author of “The Nonprofit Outcome Toolbox,” a consultant to nonprofits, and an advisor to Charity Navigator.

For a deeper look into this ongoing discussion see this blog posting by Steven Lawry.

  • Paul Brest

    Steve Lawry’s broadside against impact evaluation is based on a confusion between the design and implementation of a strategy on the one hand, and its outcomes on the other. Design and implementation

  • Bob Untiedt

    I won’t attempt to engage all the thinking here, but just make four points:

    1. In writing of this type: “apologists for poor performance have variously argued that the work was too complex, the issues too intractable, the clients too burdened by multiple issues, and the cost of measurement too high for results measures to be applied to the efforts of nonprofits,” the authors set up a straw figure – “apologists for poor performance” – as the reason that evaluation isn

  • Andrew Rowan

    I have many responses to both the Lawry OpEd and the reply by Berger and Penna but will confine myself to four points here.

    First, I have seen a great deal of argument about how difficult it is to measure outcomes. I would agree that many outcomes – especially those that take more than two years to come to fruition – are difficult, and in some cases (but not as many as are claimed) nearly impossible to quantify. However, I have usually felt that such arguments were mostly special pleading for not doing ANY meaningful outcome assessment. There is an unfortunate tendency in the non-profit world to shy away from objective reporting of outcomes and to use the argument that one’s measures are not perfect (show me a measure that is) as a reason to do no real measurement at all. This is a field where “the perfect” is truly the enemy of “the good.”

    Second, I had the good fortune to attend one of the 5-day programs on non-profit management at the Harvard Business School. During the five days we were there, the faculty kept hammering away at the importance of measurement and outcomes assessment. At the end of the program, I went and asked the program director how they measured the impact of the 5-day program. I was told that they sent out questionnaires to attendees asking them what they got out of the program. I protested that such a questionnaire was merely a popularity survey and not a meaningful outcome assessment (were we better non-profit managers as a result of attending the program?). The director agreed but said that they had never done an impact assessment. I continue to wonder if they ever followed their own teaching.

    Third, I read “The Nonprofit Starvation Cycle” and shortly thereafter attended a meeting for non-profit CEOs where the issue of administrative and infrastructure support arose. It was abundantly apparent from the conversation that, while most CEOs decried the lack of support for infrastructure and human capital development (the very message of the “Starvation Cycle”), few were actually doing anything concrete to challenge the notion that spending on such things is a waste. In practice, their organizations continued competing to see who could promote themselves as the most “efficient” (namely, as having the lowest overhead percentage)!

    Fourth, it is a considerable irony that the authors from Charity Navigator should be arguing for meaningful outcome measures. If CN described its service as an assessment of the “financial health and management” of non-profits, I would have less argument with their star rankings. CN does not do any meaningful program impact assessment, its recent “upgrade” notwithstanding. I am surprised they overlooked a relatively simple approach that would have produced useful impact assessments: for example, polling each organization in the Charity Navigator database, asking a senior manager at each one to rate all the other organizations in their category on a four-point scale, would likely produce a relatively accurate score of actual “impact” (a sketch of the arithmetic follows below). Crutchfield and Grant used such a polling system to identify their stable of high-impact non-profits in their analysis for “Forces for Good,” and while one might quibble that the selected organizations might not be the “best” in their respective fields (e.g., Teach for America, Habitat for Humanity, Environmental Defense), they clearly deserved to be near the top.
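
    To make the arithmetic of that peer poll concrete, here is a minimal sketch in Python. It assumes ratings arrive as (rater, rated, score) records on a four-point scale and scores each organization by its mean rating from category peers, excluding self-ratings. The data, organization names, and function name are illustrative assumptions, not anything Charity Navigator or Crutchfield and Grant actually implemented.

        # Minimal sketch of the peer-polling aggregation described above.
        # Assumption: all raters and rated organizations share one category.
        from collections import defaultdict
        from statistics import mean

        def peer_impact_scores(ratings):
            """ratings: iterable of (rater, rated, score) with score in 1..4.
            Returns {rated_org: mean peer score}, ignoring self-ratings."""
            by_org = defaultdict(list)
            for rater, rated, score in ratings:
                if rater != rated and 1 <= score <= 4:
                    by_org[rated].append(score)
            return {org: round(mean(scores), 2) for org, scores in by_org.items()}

        # Hypothetical example: three organizations in one category rate each other.
        ratings = [
            ("OrgA", "OrgB", 4), ("OrgA", "OrgC", 2),
            ("OrgB", "OrgA", 3), ("OrgB", "OrgC", 2),
            ("OrgC", "OrgA", 4), ("OrgC", "OrgB", 3),
        ]
        print(peer_impact_scores(ratings))  # {'OrgB': 3.5, 'OrgC': 2.0, 'OrgA': 3.5}

    A real poll would need to handle non-responses and rater bias (for instance, by normalizing each rater’s scores), but the core of the proposal is just this averaging of peer judgments.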