This “Voice from the Field” comes from a particular foundation initiative that NPQ has written about previously in some detail: the Edna McConnell Clark Foundation’s Capital Aggregation project. For more information, see NPQ’s earlier coverage.
As a longtime funder of evidence-based programs and rigorous evaluations, the Edna McConnell Clark Foundation (EMCF) has learned from experience how difficult it can be to build an evidence base and extract the maximum benefit from evaluation. All of EMCF’s 19 current grantees are undergoing or have completed rigorous evaluation.
Just as every youth-serving organization or program is unique, so every evaluation presents its own challenges and opportunities. One size does not fit all. But over the years, we have made some general observations and learned some lessons that we hope policymakers, funders, and nonprofit practitioners may find helpful as starting points when they consider how to interpret and use evaluation findings.
- Rigorous evaluation is risky business. The more rigorous the evaluation, the less likely it is to show dramatic effects. A well-designed randomized controlled trial (RCT) identifies impacts: measurable changes that can be attributed primarily, if not entirely, to the program under evaluation. It is important to distinguish impacts from outcomes, which are changes that may have had other causes in addition to the program. An RCT seldom demonstrates effects as statistically impressive as those of less rigorous evaluations, which generally cannot attribute results entirely to a program. By their very nature, most RCTs, whether of social interventions or medical drugs, show small impacts, if any, and such findings are frequently misinterpreted or misunderstood. Yet these impacts, however modest they may appear, are more meaningful and reliable than the larger effects less rigorous studies often show. A brief simulation sketch following this list illustrates the distinction. (For a helpful discussion of impacts and outcomes, see pages 7-9 of Michael Bangser’s “A Funder’s Guide to Using Evidence of Program Effectiveness in Scale-Up Decisions.”)
- Yet the potential benefits far outweigh the risks. The funders and leaders of service organizations are committed to making the world a better place. For many of them, evaluation is a moral imperative, because it helps them understand what works and how to make it work better. We believe organizations that undertake rigorous evaluation deserve admiration, support, and patience, because we know such evaluation can lead to better outcomes for families and youth in need.
- Rigorous evaluations are the surest way we know to establish whether a program works. Two or more rigorous evaluations, including RCTs, conducted at different locations and incorporating cost-benefit analyses make a powerful case for supporting a program, especially when money is tight and private and public funders are increasingly interested in directing their limited resources to evidence-based programs. As David Bornstein, author of How to Change the World, noted with regard to federal funding in The New York Times, “Support for evidence-based policy-making got its initial push during the Bush administration. Under Obama, it has taken off. The administration has linked billions of dollars in funding to programs that demonstrate evidence of effectiveness.” Even so, according to Moneyball for Government, less than one percent of federal funding goes to programs supported by evidence.
- The goal of evaluation is not just to prove that a program works; as much, if not more, it is to learn and improve how a program works. Evidence-building, we firmly believe, is an ongoing, dynamic process deeply embedded in the organizational culture of high-performing nonprofits (and for-profits, for that matter) that are dedicated to learning and determined to continually improve their performance and increase their impact. Brian Maness, CEO of Children’s Home Society of North Carolina, an EMCF grantee, expressed this well when he recently said in conversation, “We conduct evaluations not only to have confidence in our impact and overall success, but also to uncover ways we can improve. Children’s Home Society is a learning organization, one that is constantly refining our approach and applying new knowledge in our work with children and families.”
- Evaluation findings are always mixed. No evaluation or program is perfect: the evaluation may be ill-timed or poorly designed, the program implemented inconsistently, or the performance data insufficient. Still, a thoughtfully designed evaluation, whether its findings are encouraging or disappointing, almost always reveals things that can be improved. The most important thing is to learn from the findings and act on that learning to improve a program and its impacts on young people.
- Impacts inevitably vary. A program that works for one population in one place may not work as well for another population (middle-schoolers vs. primary school students), in another place (urban vs. rural), at another time (before or after the widespread availability of smartphones, for instance), or under other circumstances (after school vs. during school hours). According to Robert Granger, former president of the William T. Grant Foundation and current chair of EMCF’s Evaluation Advisory Committee, “Research results may vary across sites or studies because conditions differ and those conditions shape the results.” Evaluating a program as it operates under various sets of circumstances can therefore help an organization adjust it accordingly and demonstrate how generalizable it is, building confidence that a program effective in one context is likely to work in another.
- Finally, continual evidence building keeps an organization on the cutting edge of innovation. As times, needs, funding, demographics, implementation, and implementers inevitably change, a program must be assessed regularly to ensure it keeps up with and adapts to those changes. And continual evaluation is the best way to ensure continual innovation: how else can an organization that tries something new or does things differently determine whether its innovation is succeeding and discover ways to refine and extend it?
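To make the impacts-versus-outcomes distinction from the first point above concrete, here is a minimal simulation sketch. It is ours, not EMCF’s, and every number in it is illustrative: a raw pre/post “outcome” bundles a background trend (maturation, an improving economy) with the program’s effect, while a randomized treatment-control comparison isolates the smaller “impact” attributable to the program itself.

```python
# Illustrative sketch only (invented numbers, not EMCF data): why an RCT
# "impact" is smaller than a raw pre/post "outcome" for the same program.
import random
from statistics import mean

random.seed(42)

N = 10_000             # youth per group
BASELINE = 50.0        # average score before the program
BACKGROUND_GAIN = 5.0  # improvement everyone experiences, program or not
PROGRAM_EFFECT = 2.0   # the true, modest effect of the program itself
NOISE = 10.0           # individual variation

def post_score(treated: bool) -> float:
    """Score after the study period: baseline plus background trend,
    plus the program effect for treated youth, plus individual noise."""
    score = BASELINE + BACKGROUND_GAIN + random.gauss(0, NOISE)
    if treated:
        score += PROGRAM_EFFECT
    return score

treatment = [post_score(True) for _ in range(N)]
control = [post_score(False) for _ in range(N)]

# "Outcome": participants' change from baseline. It looks large (about +7),
# but it bundles the background trend with the program's effect.
print(f"Pre/post outcome: {mean(treatment) - BASELINE:+.1f}")

# "Impact": treatment minus control. Randomization nets out the background
# trend, leaving the smaller change (about +2) attributable to the program.
print(f"RCT impact:       {mean(treatment) - mean(control):+.1f}")
```

The smaller number is the more trustworthy one: only the randomized comparison supports a causal claim about the program, which is why modest RCT impacts deserve more weight than larger effects from less rigorous studies.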
Kelly Fitzsimmons is Chief Program and Strategy Officer for the Edna McConnell Clark Foundation.