Reductionism in the So-Called Science of Giving?

December 5, 2012; Source: New York Times

As we enter the end-of-year period when people traditionally think about making charitable contributions to nonprofit organizations and causes, we recognize that there are almost as many reasons to give as there are donors. But what donors give to, and what they should expect to see from their gifts, has been a fairly hot topic, particularly with the growing emphasis on outcomes in the sector. As one article in NPQ has reported, there have been many studies and much investment in examining what donors want. Though no firm conclusions have emerged, there are nonetheless legions of advocates wedded to the idea that even if metrics are not what actually drives donors' giving decisions, they should be. Now, in a blog on the New York Times website, Tina Rosenberg writes about a researcher from Oxford University, Toby Ord, who suggests that donors should give to the organizations that are most effective at making an impact on the most significant issues. (By the way, if you read the blog post, do open up the comments section; some very interesting and intelligent responses have been posted there from around the world.)

Ord’s argument is illustrated by the average cost ($42,000) of training a U.S.-based guide dog, which does not restore eyesight and helps only one person. That same amount of money could pay for surgery to restore sight for more than 1,000 people in Africa at $25 each. Wouldn’t that be a much better use of the money? In essence, Ord is arguing that charitable contributions should be directed by a measurable rubric of impact. The rubric puts value on causes with measurable change, such as healthcare, and on organizations that are the most cost-effective, with low overhead. As the author of the blog points out, the rubric also assumes that social or health issues are more important than causes such as the arts.

Citing a lack of outcomes measurement among nonprofit organizations, Ord suggests turning to an organization like GiveWell, which has developed a list of the most effective nonprofits by identifying critical causes for which there is some degree of consensus among academic researchers about best practices, finding the organizations that implement those best practices, and then weighing that performance against the cost of doing business.

It’s perhaps a laudable sentiment for donors to want the most bang for their charitable buck. Of course, donors never like to feel that their donations have been wasted. But would the proposed rubric have repercussions that are unintended and perhaps unwanted? For example, this approach values scale: organizations that help lots of people. But what about organizations that offer personalized, individual attention to fewer people, an approach that is often very effective? And what about the idea that large organizations often have a harder time innovating than smaller ones, or that there is significant value in homegrown efforts that are “owned” locally? Wouldn’t the suggested approach run the risk of severely limiting the number of charities that receive support? In fact, these kinds of approaches to charity may end up backfiring in a number of ways since, as NPQ has reported before, studies indicate that when donors spend more time thinking about the impact and effectiveness of their gift, they are likely to give less generously than if they gave from the heart and on impulse.

Charity and giving often come from the heart and make us feel good. Sometimes we give just because, and that is not intended to be measurable. –Rob Meiksins

  • Simone Joyaux

    Measuring impact is important, and often very hard to do. I very much appreciate Jim Collins’ monograph Good to Great and the Social Sectors. Collins expands the concept of measures beyond the quantitative to evidence. An orchestra might have as a goal to be “a great orchestra,” and the evidence measures would include reviews by critics, invitations to record, guest artists asking to perform with the orchestra, conductors asking to be guests, etc.

    “Everyone” has to realize that “impact” is not measurable in the same way for nonprofits as it is for for-profit companies, and that impact takes a lot longer for nonprofits that are trying to make societal change, stop poverty, etc. See Michael Edwards’ great little book called Small Change.

    And, of course, nonprofits need to understand their donors well enough to know the right measures to use with each donor. Research shows that human decisions are triggered by emotions; see the research by Drs. Bechara and Damasio. That means that giving is stimulated by emotions. Giving is not merely some sort of IPO based on a prospectus that talks about investment impact. Find out what resonates with your donors. Tell stories about impact. Explain how the donor’s gift/investment produced the impact. Remember that the donor is the hero.

    Adrian Sargeant (UK) has done extensive research about building donor loyalty. He can tell us a lot about what donors want.

  • Robin Kinney

    I think that foundations giving large chunks of money really do need to consider organizations with measurable outcomes, though they maybe swing a bit too far in doing so sometimes. Individual givers, who make up the bulk of giving, probably swing a bit too far the other way and give based on emotion and marketing, paying little attention to outcomes.

  • Jay

    Mr. Ord’s argument about the seeing eye dog is compelling, but one could make the same argument that people would be better off not smoking because all the research shows how unhealthy it is. As it turns out, people do what they wish to do and do not follow anyone else’s logic. Best practices for measuring the outcomes of nonprofit work are not what drives an organization’s success with donors. Keeping donors and improving retention rates is the only statistic that predicts how successful a nonprofit will be with supporters. Best practice in measuring outcomes is no substitute for old-fashioned, extraordinary customer service.

  • Liz

    It is important to remember that efficiency is only one measure of effectiveness among many. Efficiency as a measure of effectiveness requires a predetermined end, and that limits the potential to promote possibility. That isn’t to throw the baby out with the bathwater; research can be an important tool for learning. Along this line, metrics would best be negotiated with people on the ground so that they reflect the goals those people are trying to accomplish at least as much as goals imposed by external sources. Without these considerations, philanthropy will go the way that higher education looks to be going now, with an overemphasis on jobs as outcomes eclipsing so many other ways to think of human potential.