When Too Much Rigor Leads to Rigor Mortis: Valuing Experience, Judgment and Intuition in Nonprofit Management

This article was originally published on July 12, 2010 on the website of The Hauser Center for Nonprofit Organizations at Harvard University.

Several powerful donors have concluded that nonprofits make inadequate use of impact assessment tools. They are backing up their arguments with an implicit threat: measure in particular ways or you don’t get the money. Wise nonprofit leaders know that the problems they work on are not susceptible to simple measurement. They know that the kinds of formal impact measures some donors expect and management consulting firms prescribe are hard to come by honestly. They collect various data all the time to inform their judgment and decision-making and to spur learning. Now, however, data collection (to donor-specified standards) is increasingly used for accountability purposes.

This may reduce the freedom nonprofit leaders have to innovate and to pursue promising but risky ideas without fear that a failure to prove one idea will poison their chances to learn from that failure and try something else another day. As former Ford Foundation President Susan Berresford argues, insisting that grantees demonstrate measurable, short-term impact can have the effect of “miniaturizing ambition” for doing risky but potentially breakthrough work.

People who impose these restrictions mistake the use of prescribed tools, or the achievement of certain outcomes, for evidence of good management. Sometimes they are. But, in and of themselves, they hardly constitute an impressive tool kit of good management practice.

The good judgment of experienced managers, deeply immersed in the complex social dynamics of the communities in which they work, is a formidable and essential resource in assessing impacts. Experience and tested judgment also come into play in shaping a picture of the complex variety of social factors that might explain, for instance, why some poor children and not others attend school, or what mix of interventions is most likely to keep kids out of trouble with the police.

Effective nonprofit managers get information from a variety of sources: formal studies, observation of trends in behavior, feedback from partners and clients. They also draw on deep reserves of knowledge of the local social context, of cultural norms and values, and on the ability to empathize, to look at the world through the eyes of others.

These sources of knowledge are particularly important in shaping untested but potentially innovative, breakthrough approaches to social change. Effective leaders first and foremost seek to explain how a given problem is responding to a given set of interventions.  Data help describe what is happening, but the interpretative powers of managers are essential to meaningful explanation.

One of my favorite examples (see working paper here) of the kinds of insights that arise from observation, judgment and experience is the particular knowledge that Muhammad Yunus gained on his daily walk to work through poor communities around Chittagong University in Bangladesh. His knowledge of rural Bangladeshi society, combined with his advanced training and powers of intuition, spawned his ideas on social lending, or what became known as micro-finance.


The invention of micro-finance demonstrates that breakthrough innovations, and even simple adjustments to well-established programs, are spawned by a variety of sources and intellectual attributes: data, data intelligently interpreted, knowledge of the local and comparative contexts, and good judgment. All four of these factors are essential to shaping development breakthroughs. Donors should give greater weight to the latter three than to the first in considering funding proposals.

A recently published book on the use of applied mathematics to help understand messy, hard-to-measure problems speaks to the importance of experience and judgment in making sense of limited data. The book is “Street-Fighting Mathematics: The Art of Educated Guessing and Opportunistic Problem Solving,” by Dr. Sanjoy Mahajan, associate director of MIT’s Teaching and Learning Laboratory. The book grew out of a course by the same name that he taught for several years at MIT.

The basic premise of his approach, set out in the book’s first sentence, is that “Too much rigor teaches rigor mortis: the fear of making an unjustified leap even when it lands on the correct result.” Many real-world problems are not easily described with the kind of precision that professional mathematicians insist upon. This is due to the limitations of data, the costs of collecting and analyzing data, and the inherent difficulties of giving mathematical expression to the complexity of human behavior. In the face of these obstacles, mathematicians tend to do one of two things: insist on finding the rigorous proof, even in the face of huge methodological constraints (rigor mortis), or give up.

Mahajan counsels a third way: using mathematical reasoning to find a good-enough answer that is approximate but usually valid and useful; or as Dr. Mahajan so adeptly puts it, “When the going gets tough, the tough lower their standards.” His book describes six tools for better understanding complex problems with limited data, including picture proofs, lumping, and reasoning by analogy. (A brief illustration of what such educated guessing can look like in a nonprofit setting follows below.)
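
To give a concrete flavor of what “lumping” and educated guessing look like in practice, here is a minimal back-of-envelope sketch in Python. It is my own illustrative example rather than one drawn from Mahajan’s book, and every figure in it is an invented assumption for a hypothetical after-school tutoring program.

```python
# A minimal sketch of "lumping": rounding messy inputs to convenient
# round numbers to get a quick order-of-magnitude answer.
# All figures are invented for a hypothetical after-school tutoring
# program; they are illustrative assumptions, not data from the article.

students_per_site = 30            # lumped guess: "a classroom's worth"
sites = 12                        # lumped guess: "about a dozen"
sessions_per_year = 100           # lumped guess: a few per week, most weeks
cost_per_student_session = 5      # lumped guess, in dollars

students_reached = students_per_site * sites
annual_cost = students_reached * sessions_per_year * cost_per_student_session

print(f"Students reached: roughly {students_reached}")      # ~360
print(f"Annual program cost: roughly ${annual_cost:,}")     # ~$180,000
```

The value of such an estimate lies not in the exact figure but in whether the answer lands in a plausible range, which is often enough to judge whether an idea deserves a closer look.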

There is wisdom in Dr. Mahajan’s core argument that is relevant to current debates about the place of impact assessment in program management. Many problems, especially in social analysis, pose enormous challenges of description and accurate measurement. We can learn much of what we need to know by tracking a few data points, but knowledge of the underlying social forces and personal motivations that frame the decisions people make is essential to specifying what should be measured and interpreting findings wisely.

My concerns about the emphasis some donors give to evaluation and impact assessment lie not in any lack of value in those tools, but in a skewing of perspective. I want to sum up with a few thoughts on getting the perspective into better balance.

* Knowledge of the local context and the insights spawned by that knowledge are hard won and accumulated over many years. External donors and many of their staff too often don’t possess such knowledge.  For large Western donors, reliance on data and impact measures can be a crutch, a substitute for the knowledge of local context they don’t have.

* Lack of knowledge of context contributes to an overreliance on one-size-fits-all interventions based on experience from elsewhere, resulting in poorly adapted local project design. An obvious remedy is to place greater trust in the leadership and judgment of people who live and work close to the problems: local educators, entrepreneurs, and civil society leaders.

* Evaluation is first and foremost a learning tool, of greatest value as an aid to the judgment of program leaders and managers. The work of donors also stands to benefit from the knowledge that grantees gain in assessing changes within the communities in which they work and progress in pursuing particular goals.

* Of greatest relevance to predicting the merits and eventual success of a proposed grantee initiative are the wisdom, experience, judgment and reputation of the grantee organization and its leadership and staff.   These are the important qualities that should be considered when contemplating a grant.  (William Duggan’s book, “Strategic Intuition,” examines the qualities of leadership and management that spawn systemic impacts.)

* Donors who insist on short-term measurable impact should stay away from funding work that seeks breakthroughs on complex, long-intractable problems.

Steven Lawry is the Senior Research Fellow at the Hauser Center for Nonprofit Organizations at Harvard University.

  • Greg Cantori

    I think you missed the intent of evaluation – used on the front end of a new idea it may indeed stifle innovation, but used on the back end of established programs and services it helps tweak them toward better effectiveness and efficiency.

  • Larry Saxxon

    I think the author of this article is spot on relative to the potentially strangling effects of many of the most popular standard rigorous evaluation techniques.
    The presuppositions behind most evaluation processes are classic Newtonian paradigm, which assumes that all phenomena can, and should, be broken down into component parts and examined accordingly.
    That sounds great in theory; however, people, societies, and human interactions tend to be far more complex, and indeed messy.
    What we need is a new narrative ... new mindsets, as it were. Ones in which the local context of problematic social issues can be examined through a new, unique set of analytical lenses. That filtering process needs to be culturally competent, and as such should hold a basic understanding of the subtleties of the target audiences it intends to examine.
    We, particularly in America, also need to finally come to the realization that social problems

  • M. Littau

    As a long-term NGOer, I have seen the sad effects of this growing reliance on simple metrics to justify funding and verify accomplishments. It’s very true what Dr. Mahajan says about lowering standards: Nowadays, the form of an initial grant proposal is dictated more than anything by the numbers that need to be generated. Rather than strive toward the best (i.e. most helpful, useful, effective) outcomes possible, the proposal is designed around what outcomes can be most readily quantified. Any other outcomes are entirely dispensable. And you had better set that bar as low as possible, since failure to meet the numerical goals (widgets distributed, individuals contacted, hours logged) will bring dire consequences (i.e. defunding) – whereas failure to have any meaningful impact on a population/context is entirely irrelevant; in fact, using such metrics for evaluation, such failure becomes invisible. After a few rounds of this process – not getting funded for good but hard-to-quantify projects and getting funded for questionable but easily-quantifiable projects – one learns how to play the game. It’s all part of a seemingly inexorable process arising from two impulses: first, CYA (as long as my people are reporting their numbers, no one can criticize me) and second, dumbing-down (I don’t have to hurt my brain trying to make evaluative judgements of complex information – I just compare the numbers: Big number good, small number bad. So easy, a child could do it!)

  • rick cohen

    Do look at our comments in the Newswire about the proposals of the head of USAID regarding evaluation and evidence-based decision-making. USAID seems bent on a strategy of taking the most complex social phenomena that development contractors are trying to address and turning them into evaluable pieces, like the evaluation of a Gates Foundation-funded vaccine test. Unfortunately, trying to improve governance in Afghanistan, for example, is hardly as neatly and classically evaluable as a scientific test of a vaccine compared to other treatments. What do you tell the control group – that you’re not going to be able to exercise your emerging democratic rights? How do you distinguish the impact of democratic participation from that of a progressive governor with good strategies and reservoirs of influence in a region of Afghanistan? While we need to develop the patience and narratives necessary for social change interventions, the approach being pursued at USAID appears to be headed in completely the opposite direction.

  • Doug Campbell

    Advocacy of the “powers of intuition” and of “educated guessing” is an example of one of the many reasons that the nonprofit industry is burdened with America’s least competent managers and leaders. “Intuition” and “educated guessing” are just excuses / rationalizations for being too lazy or too incompetent to do the hard work of information gathering and analysis necessary for rational decision making.

  • Alan Arthur

    I totally agree with the basic points made, but it is obviously important that endeavors be based on supposition that can be supported by some reasonable combination of data, observation, feedback, experience, or common sense.

  • rick cohen

    Alan: I think your take is exactly correct, and I don’t think the author eschewed data and observation. Do look at my favorite, now very dated book on this topic, Albert Hirschman’s “Development Projects Observed.” We never have perfect information about our work and in fact proceed, because we have to proceed, based on partial information. If we could generate all the information we absolutely needed at the front end of most projects, we probably wouldn’t proceed, because we would declare them impossible, impractical. But, as Hirschman said, we go into projects with enough information and enough experience (and common sense) to tell us that it’s worth doing, and when the unanticipated or bigger problems arise, the “hidden hand of creativity” helps us get through and succeed. Do read Hirschman, one of my longtime heroes.

  • M. Littau

    @Alan: Absolutely, claims about accomplishments and progress must be supported by a “reasonable combination of data, observation, feedback, experience, or common sense.” The point is that this is exactly what is being lost when agencies move toward total reliance on numbers-based reporting. These are not people who appreciate or remotely understand legitimate (& sometimes complex) social science research methods; they are bean counters. If you hand out 200 widgets, it’s twice as effective as handing out 100 widgets. If you speak with 200 people, it’s twice as much progress as speaking with 100 people. You say you can accomplish more by speaking with fewer carefully-targeted people? You could present a solid, detailed case for how you might demonstrate this based on “data, observation, feedback” etc., but the response is almost certain to be: “But wait, 200 is more than 100, right? So how… huh? No, no, let’s keep it simple. If you talk with 200 people, it’s twice as good as with 100 people. This way when we explain it to Congress, they won’t have to think about it too much and get a headache. See how easy that is?”

    Eventually, you either do it their way, or give up.

  • Lorraine Teel

    My question is: how do you move this conversation from amongst a group of nonprofit providers to the foundation world and government funders? Having been at this work for more years than I care to post, I have seen the requirements for demonstrating impact move from some reasonable measures made in agreement with the nonprofit to an unrealistic set of measures set by individuals who are frankly clueless about the work. During a recent strategic planning session, when challenged by the facilitator to identify the primary “customer” or client for our agency, I came to the conclusion that it was the government. Of course I understand that it is a quid-pro-quo arrangement: I accept the funding and fully agree to deliver ‘results’ — it’s the definition of what those results are that needs to be proportional to the funding received. Simple.

  • Bob Untiedt

    Doug,

    Can you also give us a copy of the studies you’re aware of which document how the nonprofit sector has the worst leaders? I can’t believe that you’d do some kind of ‘educated guessing’ about that – surely you’re hard-working and competent enough to document a claim as obviously stupid as that.

    I think that genius is randomly distributed, and that there are lousy AND great business, government, and nonprofit leaders. If you KNOW differently, please educate, rather than insult.

  • C Paulin

    This article raises many important issues and is a conversation we definitely need more of! Social issues are much more like raising a child (complex) than building a rocket (complicated). When you build one rocket, you are much more likely to be successful with the next – every component can be figured out and solved. When you raise one child well, there is no guarantee that you can do as well with the next – every component cannot necessarily be figured out and solved. If you run a successful drug treatment program in one place, there is no guarantee that if you do the same thing in the next city over it will work, and there is no guarantee that if you do the same thing 20 years from now as today it will still work. I would encourage people to check out the Tamarack Institute’s work around developmental evaluation (http://tamarackcommunity.ca/), which seems to be a much more sensible approach.