Ricardo Millett brings two important perspectives to our evaluation dialogue. First, as the director of evaluation at the W.K. Kellogg Foundation, he is an important contributor to national initiatives on evaluation policy and standards of practice. Second, throughout his career–16 years of which were spent in Boston–Mr. Millett has been involved with community-based organizations. He shared his ideas about the future of evaluation policy, and how it will affect nonprofit organizations, with our own Molly Weis.
Program evaluation is becoming more prevalent among smaller organizations in the nonprofit sector, including foundations. This observation is based on the number of activities national umbrella organizations have launched to build the capacity of their membership. The United Way of America, for instance, has taken on the task of increasing the evaluation capacity of its affiliates. The Independent Sector, following three years of hard work with community-based organizations, has created a handbook, Evaluation with Power, which has been widely distributed to community-based nonprofits. And the Council on Foundations formed an affinity group, the Grantmakers Evaluation Network, in addition to hosting a number of workshops and developing manuals and newsletters to help small to midsize foundations embrace evaluation.
There is a tremendous movement toward building organizational capacity to better use program evaluation. The trend seen in these organizations is that they recognize the limitations of quasi-experimental/random assignment models and are instead designing highly participatory approaches to program evaluation–ones that are especially responsive to the information needs of practitioners. This trend takes a programmatic approach, which Carol Weiss (1998) calls theory-based evaluation. Questions focus less on what happened in a program and more on why and how it happened.
Twenty years ago, it is fair to say, evaluation was designed to be most responsive to the information needs of funders. Evaluations were largely designed to determine the results of program efforts and, commensurately, the causal/attribution relationships between programs and outcomes. This fostered a preference for experimental, quasi-experimental, or random assignment designs, whose primary objective was to generate information that answered bottom-line questions: What is the bottom line? What difference do we make? Are our dollars being used wisely? On the other hand, theory-based evaluation–also known as empowerment evaluation–generates information to inform the program management function as well: What can we learn that can improve current and future program implementation?
Random assignment/quasi-experimental evaluation models, which focus primarily on summative evaluations that judge the worth of a program, continue to be useful program evaluation methods. However, most of us are beginning to realize that they are not appropriate tools for small and midsize groups–both foundations and agencies–that are often still in their formative stages. Process or formative evaluation is more appropriate for these programs–a theory-of-change (or empowerment) approach that focuses on finding out why a program works and how its successes can be built upon and replicated, both over time and in other areas of the organization.
What will an organization look like after incorporating evaluation in its management system?
It will be more capable, confident, and knowledgeable about how to use evaluation information to improve the management of the agency and the various programs it administers. It will talk with its constituents, funders, and policymakers with a great deal more confidence in the value-added services that the organization provides, the impact it has on its immediate constituency, and the overall contribution it makes to the community. It will also possess an inventory, a database that will help it discern the kinds of programs and services it does best–and continues to improve–based on the systematic collection and use of evaluation data.
More critically, after incorporating evaluation in its management system, an organization will be better at defining reality. By this, I mean the social, political, and economic environment that affects the problems and issues the organization is trying to contend with. In the Cleveland Empowerment area, practitioners are becoming more confident in their ability to work with more knowledgeable technicians to review evaluation designs and research criteria, and to participate actively in defining them. It’s not that the practitioners define reality in unsophisticated ways or lower the criteria of success. In fact, given that the ultimate aim of all evaluation/research is to approximate reality, if you have one set of people–the technicians–outside of that community developing and defining the criteria, you get less valid approximations of reality. So ultimately, this practice is healthy both for the practitioners, who have an on-the-ground orientation, and for the researchers, who have a more theoretical orientation. In the end, this collaboration results in a better picture of what they are trying to measure and solve.
Organizations should not have measures of “success” imposed on them that merely grade their effort to deal with manifest problems. There’s a prevailing notion among cynics that nothing works; this fuels a number of evaluations that are commissioned to assess the effectiveness of organizations. More often than not, the programs that are designed locally and statewide are built on data that does not truly reflect the reality of what is happening on the ground. If organizations gain confidence in this very significant and sensitive function of defining reality for themselves, rather than giving that up to professional evaluators and researchers, these organizations will be better able to contribute to the design and the promulgation of programs that serve their constituents.
Have you seen any nonprofits defining their own reality?
Oh, yes. A parish in New Orleans, after five or so years of having researchers come in to assess the success of their efforts, declared a moratorium and prohibited evaluators, the state, and other researchers from coming in until after the organization reviewed its standards of evaluation. Since then, they have grown very technically capable of assessing the validity of the measures being used and have even offered some alternatives that outside researchers found most useful.
In Battle Creek, Michigan, we have managed, through the United Way and others, to develop a single application process and a definition of success within certain program areas, the result of a great deal of participation by local agencies. These standards and criteria are now used by most of the local, state, and even federal funding agencies; so this process of becoming more empowered in defining measures of success can work.
Where is the evaluation information going when reported externally? To funders? How is it being used?
Many community organization advocates do not have experience in using evaluation/research information. More often than not, evaluation has been used to denigrate or negate the added value of programs that community-based people think are worth a lot more than the evaluation suggests. We have not seen a great deal of community assistance from conventional evaluation/research practitioners. So the question of how the information is going to be used is still one of great concern to many. Nonetheless, it is another reason why community organizations should be more aggressive in participating in and demanding a role at the design and operationalization table.

In terms of use, the United Way of America’s national effort seeks, in part, to use evaluation information to assure donor groups that their contributions are making a difference. They also have a more ambitious aim–to use the theory-of-change evaluation approach to facilitate a community-wide focus on priority problem areas, and to develop common indicators to measure the impact of service organizations’ efforts to deal with these problems. The United Way of America is experiencing much success with this outcome-based approach. But there are many challenges as well. It is becoming more and more difficult to find common outcomes and common measurements for many program activities. There are simply not enough applied research tools or outcomes that can be universally applied; this is certainly a major challenge down the road.
Most of the programs and initiatives that we fund at the W.K. Kellogg Foundation, or for that matter in the community foundation world, are essentially very spontaneous, community-based responses to complex problems. They may get funding for a year or two, but the outcomes fall short. It could be that the program design has a valid theoretical base but has not yet fully matured; or it may have a strong design but insufficient resources, which limits implementation; or it may be just the opposite–poorly designed with strong implementation. That does not mean the program should not be tried again. The whole question of design and implementation speaks to the need to continually review both the theory of the program and its implementation strategies to determine what worked best about the program and what will help it achieve, again and again, the kind of results we want to see.
Replication comes after various program iterations or improvement cycles. The funder either believes the approach works or that the right agency is implementing it, and therefore stays with the program long enough to see improvement; then it showcases the program to other funders, cities, regions, or even the federal government as a model for expanded public policy. Unfortunately, many foundations are shortsighted and fund programs for only two or three years.
It used to be that foundations were the research and development laboratory for the federal government and for national public policy. Through devolution, this public policy responsibility is being left to the states, and because many states do not have the experience to figure out their relationships with local and county efforts, the arena for replication is not as well charted as it was in the past. Who picks up programs that work well? Who sustains programs that show promise but need to be debugged a bit more before they fully mature?
There’s reason to be concerned that evaluation, as a profession, will revert to the perspective it had 30 years ago–that is, to respond only to whoever pays the piper. Today many cynics view social service programs as not working. As a professional practitioner in this field, I think we need to develop evaluation approaches that are more sensitive and add value for the on-the-ground practitioner, or the utility of evaluation as a policy tool is just going to go down the tubes. I’m afraid that calls for increased accountability will result in the participatory approach being pushed to the sidelines because it is seen as too time-consuming, too complicated, and not sufficiently rigorous. Funders will go to the bottom line again and ask “what” questions, rather than inquire about progress and ask “how” and “why” questions. If we go back to that style, we will lose tools that improve management at both the funder level and the service provider level.
Weiss, Carol. 1998. Evaluation. 2nd ed. Upper Saddle River, NJ: Prentice Hall.
Gray, S. T., and Associates. 1998. Evaluation with Power: A New Approach to Organizational Effectiveness, Empowerment and Excellence. San Francisco: Jossey-Bass.