The Latest Insights on Making Evaluations Useful

March 2011; Source: Public/Private Ventures [PDF] | Foundation and government funders are increasingly demanding that nonprofits produce rigorous evaluations designed to demonstrate the validity, and sometimes the replicability, of their programs and projects. What they don't often do is help nonprofits affordably generate evaluations that are useful to practitioners and communities for improving the programs being evaluated.

The nonprofit Public/Private Ventures has issued a new white paper with some useful thoughts to provoke a higher-level dialogue about nonprofit evaluation. Although clearly supportive of randomized evaluations, which compare a control group that does not receive program services with a group that does, P/PV is clear that there can't be a one-size-fits-all approach.

As an alternative, P/PV suggests the following: offering an array of alternative evaluation approaches when a randomized control group isn't feasible; developing "common systems of evaluative information at a reasonable cost"; developing more rigorous standards for scaling and replication (a common objective of randomized evaluation models); and bringing practitioners into the process of designing evaluations, so that the process isn't excessively burdensome to nonprofit staff and the results are more likely to yield program improvements.

One of the examples cited in the paper is P/PV's Benchmarking Project, in which some 200 workforce development programs have been sharing their experiences toward developing a common language of evaluation measures in the otherwise "fragmented" workforce development field. Hopefully, P/PV is getting its clients to heed its concerns about the "murkiness" of what is meant by "scaling up" and what evaluations tell nonprofits about replication possibilities.

There's a lot of unfortunate, simple-minded evaluation thinking out there that fails to deal with a major concern of P/PV's: "It is vital that we establish a better understanding of which methods of “scaling up” make sense in which contexts—and how to implement each method well. Among other things, expansion or replication often requires that a carefully honed, well-tested model be adapted to the differing conditions of a new site. Yet there is little solid research to help organizations strike an appropriate balance between adhering to an original model and adapting to local circumstances." Ain't that the truth!—Rick Cohen