Editors’ Note: This article was adapted from a study commissioned by The California Endowment in 2005 entitled “The Challenge of Assessing Policy and Advocacy Activities: Strategies for a Prospective Evaluation Approach.”1
During the last few years, The California Endowment has placed increased focus on policy and advocacy work. However, it has discovered that documenting the impact of the foundation’s work in this arena is complex, and the unpredictable nature of the policy environment poses unique challenges to the classic program evaluation strategies honed by assessing direct services. Therefore, The Endowment asked Blueprint Research & Design, Inc. to gather information from evaluation experts across the country and the foundation’s own stakeholders to guide it in developing an understanding about the issues in policy change evaluation, and recommend an approach to strengthening the foundation’s public policy change evaluation practice. The following article shares some of the early learnings from this “state of the field” report.
Funders and nonprofits involved in policy and advocacy work often struggle over ways to assess whether their hard work made a meaningful difference. It can take years of building constituencies, educating legislators, and forging alliances to actually change public policy. Therefore, foundations, which typically fund in increments of one to three years, have difficulty assessing the progress of grantees pursuing policy work. Moreover, foundations often want to know what difference their money made. Yet because the murky world of public policy formation involves so many players, teasing out the unique impact of any one organization is almost impossible.
During the last few years, a handful of foundations and evaluation experts have been crafting new evaluation methodologies to address the challenges of policy change work. Many more foundations and nonprofits are eager for help. What methods, theories, and tools currently look promising for evaluating policy change efforts?
Challenges in Evaluating Policy and Advocacy Grants
For the last 20 years, the social scientific method has served as the dominant paradigm for evaluations in the foundation world. A social services program identifies its change goal and the three to four factors or inputs that will stimulate that change. The program, for the most part, assumes there is a direct, linear, causal relationship between the inputs and the desired effect. Evaluation is then about quantifying those factors and measuring the impact.
However, as foundations have directed an increasing proportion of their grant dollars to public policy and advocacy work, they are finding that their traditional evaluation approaches do not work well. Seven key challenges include:
Complexity. Policy and advocacy grantees are trying to advance their goals in a complex and ever-changing policy arena, so the path to policy change is iterative rather than direct. Linear cause-and-effect models are not particularly helpful for determining what actions will create change or for assessing progress, because the dynamics of the system are nonlinear.
Role of External Forces. Unlike social services grantees, policy grantees often face players actively working to thwart their efforts. It is more appropriate to hold a direct service grantee accountable for achieving certain outcomes because it has much more control over the key factors that influence its ability to achieve those outcomes. In advocacy work, a grantee’s efforts can build the potential for a policy outcome but alone cannot necessarily achieve a specific outcome.
Time Frame. Policy goals usually are long-term, beyond the horizon of a typical one-to-two-year grant. In most cases, there will be little or no actual public policy change in one year. It is therefore inappropriate to measure the effectiveness of most foundations’ one-year policy and advocacy grants by the yardstick, “Did policy change?”
Shifting Strategies and Milestones. In policy change evaluation, it is challenging to choose short-term outcomes and benchmarks at the outset of a grant and measure progress against them, because the grantees’ strategies may need to change radically over the course of the grant. It requires discipline, attention, and a deep understanding of the issues and the policy environment to craft an approach and a set of goals that are flexible without being merely reactive or haphazard.
Attribution. Most policy work involves multiple players often working in coalitions and, in fact, requires multiple players “hitting” numerous leverage points. In this complex system, it is difficult to sort out the distinct effect of any individual player or any single activity. Isolating the distinct contributions of the individual funders who supported particular aspects of a campaign is even more difficult.
Limitations on Lobbying. Misperceptions regarding the federal guidelines on nonprofit lobbying, as well as a foundation’s desire to avoid any chance of regulatory trouble, create challenges when trying to assess a funder’s role in advocacy work.
Grantee Engagement. Experienced advocates have many informal systems for gathering feedback about their tactics as they move along; however, most policy and advocacy grantees have little experience or expertise in formal evaluation. The data an external evaluator might want them to collect may seem burdensome, inappropriate, or difficult to obtain, even if they do collect some data in their day-to-day work.
Foundations have a long history of evaluating the impact of policy changes, as exemplified by the many studies on the impact of welfare reform. However, numerous experts noted even three years ago that few funders were trying to evaluate their efforts in creating, or supporting grantees to create, policy change. This lack of evaluation was due in part to the challenges of measuring policy advocacy discussed above and, in many cases, to funders’ desire to keep their involvement in this more politically controversial arena low profile.
There is now an increased desire among a number of funders to engage in a more rigorous unpacking of all the goals and tactics that grantees have used and to examine what really was effective in achieving policy change. However, no particular methodology, set of metrics, or tools for measuring the efficacy of advocacy grantmaking is in widespread use. In fact, there is not yet a real “field” or “community of practice” in evaluation of policy advocacy.2
Guiding Principles for Policy Change Evaluation
While practitioners do not share a commonly practiced methodology, there are at least seven widely held principles for engaging in effective policy change evaluation. The prospective approach outlined in this article is built on these principles:
Expand the perception of policy work beyond state and federal legislative arenas. Policy can be set through administrative and regulatory action by the executive branch and its agencies as well as by the judicial branch. Moreover, some of the most important policy occurs at the local and regional levels. Significant policy opportunities also occur during the implementation stage and in the monitoring and enforcement of the law or regulation.
Build an evaluation framework around a theory about how a group’s activities are expected to lead to its long-term outcomes. Often called a theory of change, this process forces clarity of thinking between funders and grantees.
Focus monitoring and impact assessment for most grantees and initiatives on the steps that lay the groundwork and contribute to the policy change being sought. Changing policy requires a range of activities, including constituency and coalition building, research, policymaker education, media advocacy, and public information campaigns.
Include outcomes that involve building grantee capacity to become more effective advocates. These capacity improvements create lasting impacts that will improve the grantee’s effectiveness in future policy and advocacy projects, even when a grantee or initiative fails to change the target policy.
Focus on the foundation’s and grantee’s contribution, not attribution. It is more productive to focus a foundation’s evaluation on developing an analysis of meaningful contribution to changes in the policy environment rather than trying to distinguish changes that can be directly attributed to a single foundation or organization.
Emphasize organizational learning as the overarching goal of evaluation for both the grantee and the foundation. View monitoring and impact assessment as strategies to support learning rather than to judge a grantee.
Build grantee capacity to conduct self-evaluation. To increase their use of formal evaluation processes, grantees will need training or technical assistance, as well as additional staff time, to document what actually happened.
A Prospective Approach to Evaluation
Evaluations can be either backward-looking or forward-looking. A backward-looking—or retrospective—evaluation may collect data throughout the life of a project, but analysis and presentation of findings occur near the end or at the conclusion of a project, generally summarizing the actions, impact and lessons learned from the work. Most policy change evaluations conducted to date have taken a retrospective approach.
Retrospective evaluations can be very useful for understanding what has happened in policy change. Often, participants are better able to identify in hindsight which events made the most important contribution to change, as well as the influence of environmental factors that may not have been as obvious in the moment. However, retrospective evaluation does have its drawbacks. For example, findings come after a project is completed, so they have limited value in helping the grantee or program officer refine strategies along the way. Also, participants naturally want to put a positive spin on their work, which inhibits their inclination to recall changes in strategy or aspects that didn’t work out as planned.
In contrast, a prospective evaluation sets out goals for a project at the outset and measures how well the project is moving toward those goals throughout the project’s life. Unlike retrospective evaluation, prospective evaluation can help a funder monitor the progress of a grant and allow the grantee to actively engage in program planning and use evaluation findings to make improvements while the program is under way. It can also increase transparency.
In brief, prospective evaluation involves four steps:
• Agree upon a conceptual model for the policy process under consideration.
• Articulate a theory about how and why the activities of a given grantee, initiative, or foundation are expected to lead to the ultimate policy change goal (often called a “theory of change”).
• Use the “theory of change” as a framework to define measurable benchmarks and indicators for assessing both progress towards desired policy change and building organizational capacity for advocacy in general.
• Collect data on benchmarks to monitor progress and feed the data to grantees and foundation staff who can use the information to refine their efforts.
Finally, at the end of the project, all of the progress should be reviewed to assess overall impact and lessons learned.
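For funders or grantees who track grants in a simple database or spreadsheet, the skeleton of these four steps can be sketched as a small data model. The Python sketch below is purely illustrative: the class names, fields, and the record function are our own shorthand for the concepts in this article, not part of any evaluation framework or tool discussed here.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Indicator:
    """Operationalizes a benchmark: the data used to measure it (step 3)."""
    description: str          # e.g., "number of officials co-sponsoring the bill"
    kind: str                 # "process" or "outcome"
    measurements: List[float] = field(default_factory=list)

@dataclass
class Benchmark:
    """A measurable milestone drawn from the theory of change (step 3)."""
    name: str
    category: str             # e.g., a category from a benchmark framework
    indicators: List[Indicator] = field(default_factory=list)

@dataclass
class TheoryOfChange:
    """How and why activities are expected to lead to the policy goal (step 2)."""
    policy_goal: str
    activities: List[str]
    benchmarks: List[Benchmark] = field(default_factory=list)

def record(theory: TheoryOfChange, benchmark: str, indicator: str, value: float) -> None:
    """Step 4: collect data on a benchmark so staff can refine their efforts."""
    for b in theory.benchmarks:
        if b.name == benchmark:
            for i in b.indicators:
                if i.description == indicator:
                    i.measurements.append(value)

# Hypothetical usage: a campaign for state school nutrition standards.
toc = TheoryOfChange(
    policy_goal="State adopts school nutrition standards",
    activities=["coalition building", "policymaker education"],
    benchmarks=[
        Benchmark("bill gains legislative support", "policymaker education",
                  [Indicator("number of co-sponsors", "outcome")]),
    ],
)
record(toc, "bill gains legislative support", "number of co-sponsors", 12)
```

Even this bare-bones structure makes the logic of the approach visible: data collection is meaningful only because each indicator is tied to a benchmark, and each benchmark to the theory of change.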
Setting Expectations
Prospective evaluation begins with all parties—advocates, funders (and outside evaluators, if used)—developing a clear understanding of the environment in which advocates are going to work, the change that the grantees and the funder want to make in that environment, and how they intend to make that change happen. Stakeholders can begin to understand their policy environment through a process of research and information gathering. For a small project, this may involve something as simple as gauging community interest and reviewing previous advocacy efforts on the issue. For larger projects or initiatives, more detailed research is probably appropriate. Depending on the project and the players involved, the foundation may even want to commission research papers, literature reviews, focus groups, or opinion polls.
After researching and understanding the environment grantees are working in and potential ways to effect change, the next step is to articulate how a grantee or initiative leader expects change to occur and how they expect their activities to contribute to that change. Originally a very specific process for nonprofits, the idea of a “theory of change” has mutated and expanded over time. No matter how it is defined, at its heart a theory of change lays out what specific changes the group wants to see in the world, and how and why a group3 expects its actions to lead to those changes.
A theory of change, no matter what it is officially called, is central to prospective policy change evaluation. In any prospective, forward-looking evaluation, a program’s theory guides the evaluation plan. Reciprocally, the evaluation provides feedback to a program as it evolves; it becomes a key resource in program refinement. Developing a theory of change forces clarity of thought about the path from a program’s activities to its outcomes. It helps all players develop a common understanding about where they are trying to go and how they plan to get there—be it a single grantee and program officer or members of a multi-organization coalition or strategic initiative.
Define Measurable Benchmarks and Indicators
Generally, the policy change goals in a theory of change are long-term and can take many years to achieve. Therefore, developing relevant benchmarks to track progress along the way is vital to an effective and useful policy change evaluation. Defining benchmarks at the beginning of an advocacy effort helps both funders and grantees agree upon ways to assess the level of progress towards achieving the ultimate policy goal. Indicators operationalize benchmarks, in that they define the data used to measure the benchmark in the real world.
Process versus Outcomes Indicators. Experts and the research literature distinguish among several types of indicators. One key distinction is between process and outcomes indicators. Process indicators refer to measurement of an organization’s activities or efforts to make change happen. Outcomes indicators refer to a change that occurred, ideally due in part to an organization’s efforts. Generally, process indicators lie largely within an organization’s control, whereas outcomes indicators are more difficult to attribute to a particular organization’s work.4
While process indicators are a useful tool in grant monitoring, they do not demonstrate that an organization’s work has made any impact on the policy environment or advanced the organization’s cause. Evaluation experts, as well as funders and grantees, emphasize the need to develop outcomes indicators that will demonstrate the impact of an organization’s work, such as increased awareness of an issue as measured by public opinion polls, an increase in the number of elected officials agreeing to co-sponsor a bill, the number of times an organization is quoted in the newspaper, or the increase in the number of people using an organization’s Web site.
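To illustrate how this distinction might be put to work in grant monitoring, the short sketch below tags a handful of indicators as process or outcome measures and reports them separately. The indicator names are hypothetical, loosely modeled on the examples above; they are not drawn from any framework discussed in this article.

```python
from enum import Enum

class IndicatorKind(Enum):
    PROCESS = "process"   # measures the organization's own activities
    OUTCOME = "outcome"   # measures a change in the policy environment

# Hypothetical indicators, loosely modeled on the examples in the text.
indicators = [
    ("policy briefings held for legislators", IndicatorKind.PROCESS),
    ("community meetings organized", IndicatorKind.PROCESS),
    ("elected officials agreeing to co-sponsor the bill", IndicatorKind.OUTCOME),
    ("times the organization is quoted in the newspaper", IndicatorKind.OUTCOME),
]

# Report the two kinds separately: only outcome indicators speak to
# whether the work has changed the policy environment.
for kind in IndicatorKind:
    print(f"{kind.value} indicators:")
    for description, k in indicators:
        if k is kind:
            print(f"  - {description}")
```

Keeping the two kinds explicitly labeled, rather than mixed in a single activity report, lets a program officer see at a glance whether a grantee is only busy or is actually moving its environment.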
Capacity-Building Benchmarks. Evaluation experts and funders with significant experience in policy and advocacy work emphasize the importance of identifying both capacity-building and policy change benchmarks. Capacity benchmarks measure the extent to which an organization has strengthened its ability to engage in policy and advocacy work. Examples include developing relationships with elected officials and regulators, increasing the number of active participants in an action alerts network, cultivating partnerships with other advocacy groups, building databanks, and increasing policy analysis skills. Capacity-building goals can have both process (or activity) indicators and outcomes indicators. These capacity outcomes are important markers of long-term progress for both the funder and its grantees. They indicate growth in an asset that can be applied to other issues and future campaigns.
Frameworks for Benchmark Development. Benchmarks should grow out of and be selected to represent key milestones in an organization’s or initiative’s theory of change. A number of groups have developed frameworks for developing benchmarks relevant to policy and advocacy work. These benchmark frameworks can provide examples of activities, strategies and types of outcomes associated with the policy process. Using an existing framework to guide benchmark development has several key advantages. First, it allows funders and grantees to build on the experiences of others and, hopefully, reduce the time and effort required. Second, reviewing several frameworks highlights different aspects of policy work—which can both serve as a reminder of key strategies to consider and expand people’s notion of what constitutes policy work. Finally, employing one of these frameworks will make it easier for foundation staff to compare progress and sum up impact and lessons learned across grants because they will be describing their outcomes using common categories.
Ultimately, developing benchmarks involves a process both of adapting benchmarks from standardized frameworks and of creating some very project-specific indicators. The process begins by identifying one or more of the benchmark frameworks that seem the best fit with the project’s conceptual model for change. For example, consider a program to get local schools to ban junk food vending machines. Using the Women’s Funding Network (WFN) framework, one might identify several benchmarks (listed in the table below).
Developing benchmarks that fit into these standardized categories will make it easier to compare progress among groups. For example, if a foundation funded 10 community coalitions working to ban vending machines in schools and all of them had benchmarks in these five categories from WFN, a program officer could more easily review the grantee reports and synthesize the collective progress of the group along these five strategies for social change.
After identifying outcomes benchmarks, grantors and grantees together can add capacity benchmarks. These might include: develop a contact at each school PTA in the community that is concerned about the junk food issue; develop a relationship with the school cafeteria workers’ union; or acquire and learn how to use listserv software to manage an online action alert network on this issue. Once all of these benchmarks are created, the next step is to develop methods to measure them on an ongoing basis.
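To make the vending-machine example concrete, the sketch below lays out the campaign as structured benchmark data. The outcome-benchmark categories and wording here are placeholders of our own invention (the real labels would come from the WFN framework's materials); the capacity benchmarks restate the examples suggested above.

```python
# Hypothetical benchmark data for the vending-machine campaign. The
# outcome-benchmark categories are placeholders, not WFN's actual
# category names; the capacity benchmarks restate the examples above.
campaign = {
    "goal": "Local schools ban junk food vending machines",
    "outcome_benchmarks": [
        ("community attitudes", "Parent survey shows majority support for removal"),
        ("policy development", "School board agrees to draft a nutrition policy"),
    ],
    "capacity_benchmarks": [
        "Contact at each school PTA concerned about the junk food issue",
        "Working relationship with the school cafeteria workers' union",
        "Listserv software in place for an online action alert network",
    ],
}

# Because grantees report against shared category labels, a program
# officer funding ten coalitions could group their reports by category
# and synthesize collective progress, as described above.
for category, benchmark in campaign["outcome_benchmarks"]:
    print(f"{category}: {benchmark}")
```

The design point is the shared category labels: they are what make roll-up and comparison across grantees possible, while the benchmark wording itself stays project-specific.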
Conclusion
Policy change is a long-term endeavor, so emphasizing a long-term perspective to program staff, grantees, and foundation board members is key. Staying focused on the long term is hard when one cannot see any progress, and the prospective approach to advocacy evaluation can help make the long timeline for policy work more manageable. This approach will allow funders and their grantees to conceptualize, document, and celebrate many successes in creating building blocks towards ultimate policy change goals. It also will highlight the many ways that a foundation’s work is increasing the field’s capacity to advocate for policies for years to come.
Endnotes
1. Readers wishing to download the full study can do so at www.calendow.org/reference/publications/pdf/npolicy/The%20Challenge%20of%20Assessing%20AdvocacyFINAL.pdf
2. McKinsey and Company describes a community of practice as “A group of professionals, informally bound to one another through exposure to a common class of problems, common pursuit of solutions, and thereby themselves embodying a store of knowledge.” See www.ichnet.org/glossary.htm
3. The group may be a single organization, a group of individuals, or a collaboration of organizations.
4. It should be noted that many reports and evaluation manuals in the literature use the terms “benchmarks” and “outcomes” interchangeably. In this article, we make a distinction: “outcomes” are the conceptual change goals, and “benchmarks” are the way to measure or assess whether an activity has happened or a change has occurred.
5. Snowden (2004) has an appendix listing many useful examples of potential policy indicators. However, it is not organized around a well-conceived framework, so it was not included in the comparison of frameworks.
Six Frameworks
Each of the six frameworks listed below highlights somewhat different aspects of policy work, and may be used in different contexts from the others. In some cases, policy change is a primary focus, while in others, it is one of several social change categories. Some are most relevant to policy advocacy around specific issues, while others focus on community level change.
• Collaborations that Count (primary focus on policy change, particularly community-level change)
• Alliance for Justice (primary focus on policy change, most relevant to specific issue campaigns)
• Annie E. Casey Foundation (applicable to a range of social change strategies, particularly community-level change)
• Women’s Funding Network (applicable to a range of social change strategies, most relevant to specific issue campaigns)
• Liberty Hill Foundation (applicable to a range of social change strategies and a broad variety of projects)
• Action Aid (applicable to a range of social change strategies, particularly community-level change, and a broad variety of projects)
Each framework has distinct strengths and weaknesses. For example, the Alliance for Justice framework includes the most extensive set of sample benchmarks. However, it seems to be more a collection of typical advocacy activities than a coherent theory about how activities lead to change, so it does not suggest sequence or relationships among the different outcomes. In contrast, the Women’s Funding Network (WFN) framework grows out of a theory about the key strategies that make social change happen. It was developed through extensive research and experience working with social change organizations around the country. Moreover, it is the basis for an online reporting tool that walks grantees through developing a theory of change and then selecting benchmarks. Using the tool across a set of grantees would help program officers identify trends and sum up impact across grantees. However, the WFN tool itself has far fewer examples of ways to measure those outcomes than the Alliance for Justice or Annie E. Casey frameworks.
Four of the benchmark frameworks—Alliance for Justice, Action Aid, Collaborations that Count, and Liberty Hill—specifically call out advocacy capacity benchmarks. Capacity benchmarks could easily be added to benchmarks drawn from other frameworks.