Evaluation. The very word means different things to different people. According to the dictionary, the verb evaluate derives from the French évaluer, meaning “to judge the worth” or “to measure the value” of something. In his book Utilization-Focused Evaluation, Michael Quinn Patton offers this definition:
Program evaluation is the systematic collection of information about the activities, characteristics, and outcomes of programs for use by specific people to reduce uncertainties, improve effectiveness and make decisions with regard to what those programs are doing and affecting.
For nonprofit managers, these meanings provoke anxiety. Even the less ominous assess or appraise make most of us a little nervous. In these days of outcome-based funding, foundations’ heightened interest in accountability, and their quest to understand what impact their dollars are having, leaders of nonprofit organizations are often left scratching their heads wondering how to satisfy funding sources.
Here, I hope to offer nonprofits some useful knowledge, particularly to reassure you that philanthropy’s notion of evaluation is evolving, and to suggest ways your funding sources can become more equitable partners in the search for answers.
In the following passages I will outline, from my perspective, the principal objectives that foundations hope to realize through evaluation. These objectives form a continuum: from ensuring accountability through monitoring, to determining quality and impact, to achieving continuous improvement. Foundations fall at different points along this continuum depending on a variety of factors: their geographic scope; the type of foundation they are (community, private family, private national, or corporate); the size and scope of their grantmaking programs; and the size and experience of their staff.
I think an important place to start our exploration is with the question of what ends foundations are most interested in achieving through evaluation. Let me also clarify that evaluation can entail a wide range of assessment activities, from quarterly monitoring reports and final reports, to interviews with recipients of past grants in a particular subject area, to independent third-party summative or formative evaluations. For more information on the range of evaluation approaches, see Carole Upshur’s article.
In my experience, foundations have three primary objectives: accountability, assessment of impact, and performance improvement, encompassing both the performance of grant recipient organizations and the foundations’ own performance. Related to the assessment of impact is a sub-objective that evaluation serves for certain foundations: enabling them to use data to inform public policy in areas they consider critical to community, state, or national well-being.
Most funders want to know whether a grantee agency accomplished what it set out to do and, if not, why not. Increasingly, some are genuinely interested in understanding why agencies fail, seeing value in lessons learned from mistakes or from faulty program assumptions or logic. In fact, foundation board members or trustees, who are frequently businesspeople skilled in assessing why new products or services fail, are often frustrated because they don’t hear enough about what hasn’t worked; they wonder whether their grantmaking institutions are taking enough risks.
As a program officer for two fairly large community foundations in Connecticut (the Hartford Foundation for Public Giving and the Community Foundation for Greater New Haven), I found that while board members felt they got sufficient up-front information when a grant application was being considered, there was little time set aside at regular board meetings to discuss what we were learning from our grants in progress. To address this desire for reflection, the staff at the Community Foundation for Greater New Haven began developing a monthly synthesis of the quarterly reports we received from grant recipients and from our site visits. We offered highlights and lowlights to inform the board of the successes and failures our grant recipients encountered. We also organized forums on public policy issues, such as affordable housing, and invited some of the organizations we supported to discuss the effect these policy issues were having on the community. Note that the illustrations I offer here relate accountability to the regular monitoring and reporting work that foundations often do.
Questions of program efficacy, effectiveness, and results drive the second objective, assessment of impact. At times foundations might request this information to determine whether to renew funding, which can create a contentious relationship between funders and grantees, one that most prefer to avoid. Nonetheless, there are instances when use of this information is warranted, such as when a new program model is being introduced into a community, or when an organization with a spotty record of achievement is given an opportunity to demonstrate its potential.
At other times, a foundation might want to understand the impact achieved, the principles and practices that contributed to that achievement, and how the program or service models might be replicated in other settings. Knowledge development, with the goal of improving both programs and public policy, often characterizes the research agenda behind national funding initiatives such as the Ford Foundation’s Neighborhood and Family Initiative, the multi-foundation National Community Development Initiative, and the Annie E. Casey Foundation’s Jobs Initiative and Community Building Initiative. In the current era of reduced resources, not only for programs and services but for research and evaluation as well, these funders understand the strong need to produce information that is both usable and used.[1]
Evaluation is sometimes viewed as a tool for improving the performance of a program or organization. Over the past decade, for instance, the community development field has seen an increase in joint funding efforts among private funding sources that provide operating support, often accompanied by capacity-building training and technical assistance. These operating support collaboratives typically require organizations that are requesting funds to undergo an independent organizational assessment, which examines strengths and weaknesses in all areas of governance and functioning. The goal of these assessments is to help the organization develop a plan to address its most pressing problems and to highlight where the operating support and technical assistance or training might be most beneficial.
As this example illustrates, some foundations see value in program or organizational evaluation that facilitates capacity-building in their funded organizations. They view evaluation as a tool for organizational learning–one that protects their investment and drives the program toward success. Still, some evaluators argue that evaluation is evaluation, and technical assistance is technical assistance. Those who argue for scientific or experimental rigor in evaluation question the legitimacy of foundation approaches that use evaluation to influence program outcomes.
Conversely, in the case of national foundation initiatives in which local sites receive technical assistance and also participate in the evaluation, there is often a disconnect between the two. The experience of the Casey Foundation’s Children’s Mental Health Initiative in Miami is a good example. Researchers at the University of Miami were considerably frustrated that their evaluation findings weren’t sufficiently incorporated into the technical assistance support plan for that site. For example, a finding that conflict within the governance council was related to inadequate communication between the council members and the organizations they represent should have been followed by a recommendation that technical assistance be provided to identify alternative communication methods, perhaps accompanied by training. According to the University of Miami’s Dr. Marcela Gutierrez-Mayka, a lead evaluator of the Casey Mental Health Initiative, it is the foundations who should make the linkage between evaluation and assistance. To leave the connection solely up to the discretion of evaluators or technical assistance providers, or to serendipity, means a missed opportunity for program improvement.
This tension between evaluators, technical assistance providers, and foundations suggests that community organizations or coalitions of organizations are often caught in the middle. Organizations generally invest considerable time and commitment in the evaluation process, even though it is the funders who designed evaluation into their initiatives or grants. Sitting for interviews and collecting data are but two of the time-consuming tasks community organizations must undertake in the process. It seems reasonable, then, for local organizations to throw accountability back to the foundations: let them spell out how evaluation and technical assistance should work together and complement one another. In this way, the foundation’s need to acquire knowledge and the community’s need for improvement can both be met.
One might argue that the power dynamics between foundations and communities would preclude throwing accountability back in this way. Issues of race, class, culture, and power are certainly at play in the relationship between communities and foundations, whether the foundations are local or national. Beyond this, and particularly in the case of regional or national foundation funding initiatives, such as the Casey Foundation’s Plain Talk or Robert Wood Johnson’s Fighting Back initiatives, communities all too often see dollar signs more than anything else. And they often don’t participate seriously in the evaluation component of the initiative, thus sacrificing potential gains. For engagement, not merely involvement, to occur at the community level, key players must attempt to level the playing field. Consider the result in Denver, for example. The community threw back to the Casey Foundation a set of program requirements it felt did not make sense. This prompted reflection on the Foundation’s part, and a new level of partnership emerged between the two. The Foundation might well have withdrawn from that community; instead, it saw the community’s response as evidence of engagement. Power dynamics do exist, particularly in communities of color, which are frequently the focus of foundation funding initiatives. The trick is to be resourceful enough to find ways to balance the power.
Performance improvement involves clarifying goals, objectives, and strategies, and making mid-course adjustments. The William Caspar Graustein Memorial Fund, a statewide family foundation that focuses on children and education reform in Connecticut, routinely asks the organizations it funds how it, as a funder, might have done things differently to serve them better. The fund takes the answers seriously, regularly looking for patterns in the responses and adjusting its services accordingly.
Foundations also use evaluation to improve grantmaking in a particular subject area. Years ago, when I was working with youth mentoring programs at the Hartford Foundation for Public Giving, one such program taught me that mentoring relationships between young people and professionals would be difficult to sustain if the staff did not actively support the individual matches. This lesson came from a major program investment that largely failed, but, thanks to evaluation, it led to improvements in the Foundation’s subsequent mentoring programs.
Foundations also often use evaluation findings to help them refine their strategies in large-scale initiatives. Some of the most interesting work I’ve done as a consultant in recent years has been with the William Caspar Graustein Memorial Fund to “get smart” about the challenges of evaluating the impact of comprehensive community collaboratives. The Memorial Fund sponsors an eight-city initiative focused on deepening the engagement of a cross-section of the community to increase the readiness of children for school. The Children First Initiative, as the Fund’s work in this area is known, is particularly concerned with bringing the grassroots parental voice front and center in these communities. “We’ve found through evaluation of CFI and through linking technical assistance resources to the CFI communities, that parents are the stakeholder group the cities are having the hardest time fully engaging,” says Maria Mojica, senior program officer at the Memorial Fund. This difficulty derives from a failure to understand that parents perceive professionals as intimidating and “comprehensive community collaborative” approaches as vague. “As a result,” says Mojica, “the Fund has redoubled its commitment to ensuring parental voice, input, and incorporation, and we’re exploring different approaches for effectively engaging parents in our future work.”
—
In this article I have tried to describe the continuum of evaluation objectives along which philanthropic foundations hover. I must emphasize that, despite the field’s burgeoning interest in asking evaluative questions and in becoming smarter about the potential and limitations of evaluation, foundations vary widely in their interest, capacity, and disposition toward investing in evaluation. In the early 1990s the Council on Foundations reported the development of a Grantmakers Evaluation Network, made up of all types of foundations with varying degrees of experience and sophistication in the evaluation field. The Network has held two well-attended national conferences focused on helping its members become more knowledgeable and strategic about evaluation use. The American Evaluation Association has a topical interest group geared toward the interests of foundations, and other AEA interest groups address evaluation issues for nonprofits and minority communities. Independent Sector, a national membership organization for nonprofits (including foundations), is a valuable resource on evaluation as well.
There are numerous opportunities for nonprofit organizations to work with their funders as partners in evaluation. Here are some of my thoughts on how nonprofit leaders and foundations can build such evaluation partnerships.
Nonprofit leaders are likely to encounter local funders at all points on the continuum I have just described. One suggestion I would offer is to become master of your own destiny where evaluation is concerned. Consider the ways in which your programs or organization can improve through evaluation. Today, many good programs are vulnerable to funding cuts, but being able to demonstrate how your programs and services are affecting people’s lives can give you a competitive edge. Legislators respond to the human face of issues. If, for example, you can put in place evaluation approaches that track the people you serve through service systems, and demonstrate how some of those systems fail them, you can use that information to influence policymakers. And sharing evaluative results with your funders can create entirely different kinds of conversations with them, perhaps leading them to rethink their level of investment in a particular subject area or to consider joint funding of a new initiative. Who will fund your research and evaluation proposals? The Foundation Center has recently published a directory of evaluation funders, but your local funders may be the best place to start.
If you participate in the funding initiative of a local, regional, or national foundation, you might also become involved in cross-site evaluations, a potentially powerful role. These efforts usually clarify the evaluation questions, identify the often multiple audiences for the findings, refine the evaluation design, and articulate data collection requirements. Get involved so that you can ensure the process represents your community’s best interests.
One of your most valuable contributions is to challenge prevailing wisdom about outcomes and indicators. One of my greatest worries is that foundation boards and management often want to see hard outcomes too soon, before programs can actually deliver them. Work with evaluators and funders to identify appropriate markers of progress that will provide evidence of whether your program is headed in the right direction. Furthermore, we need to work harder at understanding (and accepting) that for different ethnic and racial groups, program outcomes may be viewed differently, depending on the cultural, social, and economic framework of the groups involved. Nevertheless, these different views of outcomes are still valid. For example, in Latino and African American contexts, outcomes are more likely to be based on relationships between individuals; this is less likely to be the case in Anglo or Asian contexts. Thus, an interim indicator of progress for people in an employability program or initiative might be their increasing creditworthiness among lenders in their own community. Evaluators and their funders need to understand and consider these characteristics to interpret evaluation results correctly. Those working at the front lines can sometimes offer a different, but extremely valuable, perspective.
In summary, I’ll leave you with a challenge. The fields of evaluation and private philanthropy have found each other and are engaged in what is, at times, an awkward dance. Between them lie some exciting possibilities for the nonprofit sector. This is not to say that capitalizing on the possibilities will be easy, but since when have nonprofit leaders been squeamish?
1. “Getting Smart, Getting Real,” a report of the Annie E. Casey Foundation’s 1995 Research and Evaluation Conference, pp. 19–23.
About the Author
Frances Padilla, M.A., helped build evaluation into grantmaking strategies while working at the Hartford Foundation for Public Giving, one of the ten largest community foundations in the country, and the Community Foundation for Greater New Haven. More recently, as president of New Paradigms Consulting, Inc., in New Haven, Connecticut, she seeks to help program professionals, funders, and policymakers clarify their goals, identify essential program components, and adapt best practices within community development.
Suggested Readings
Bickel, W. E., R. T. Eichelberger, and R. A. Hattrup. 1994. “Evaluation Use in Private Foundations: A Case Study.” Evaluation Practice, Vol. 15, No. 2.
Connell, J. P., et al. 1995. New Approaches to Evaluating Community Initiatives: Concepts, Methods and Contexts. New York, NY: The Aspen Institute.
Patton, Michael Quinn. 1986. Utilization-Focused Evaluation, Second edition. Sage Publications.
Stockdill, S. H. 1998. How to Evaluate Foundation Programs. Golden Valley, MN: EnSearch.
Williams, Harold S. 1991. “Learning vs. Evaluation.” Innovating, Vol. 1, No. 4 (Summer). Published by the Innovation Group, The Rensselaerville Institute, Rensselaerville, NY.