Introduction
The last fifteen years have seen several dramatic shifts in the world of philanthropy. Not only have new, large donors emerged, such as the Bill & Melinda Gates Foundation, the Broad Foundation, and the Walton Family Foundation, where we work, but many of these foundations have also taken an unapologetic stand that maximizing the impact of every dollar invested is essential. Leveraging each dollar optimally for public good requires having a sound evaluation strategy to guide and inform investment decisions. But understanding the impact of philanthropic investments can be complicated when seeking to reform large, complex systems, such as K-12 education in the United States or fisheries management around the world. Nevertheless, a foundation can gauge and improve its impact through an operational and financial commitment to evaluation, investment in building staff capacity, and a willingness to ask hard questions and change course based on the answers. In this article, we describe the concept and practice of strategic philanthropy, lay out the essential role that evaluation and impact measurement play in that approach, and share the framework and process that we have developed for measuring results at the Walton Family Foundation (WFF). Along the way we provide examples from the foundation’s work, particularly in K-12 education reform, to demonstrate the principles and ideas discussed.
What Is Strategic Philanthropy?
Paul Brest, the former head of the Hewlett Foundation, has described strategic philanthropy as “philanthropy where donors seek to achieve clearly defined goals; where they and their grantees pursue evidence-based strategies for achieving those goals; and where both parties monitor progress toward outcomes and assess their success in achieving them in order to make appropriate course corrections.” In short, it is an investment approach defined by goal-based planning and by the measurement, reporting, and use of evidence to inform decision-making. The notion that foundations might create strategic plans to guide giving and measure their impact is admittedly not new. That said, the strategic philanthropy approach we describe here has evolved considerably from past approaches, drawing on lessons from earlier experiences in using evaluation to inform grantmaking in ways that can drive greater foundation impact.
The first step for the donor practicing strategic philanthropy is to develop a theory of change for the foundation’s work. A theory of change is a road map that articulates the specific changes the foundation seeks to bring about in the world and then spells out, in detail, how its investments will lead to those changes. While theories of change necessarily include assumptions about how the world works, to be sound they must be grounded in research and evidence to the greatest extent possible. Put differently, a foundation must first set its own goals and define strategies for accomplishing them. For example, at WFF, our theory of change in the K-12 focus area is that the best way to improve student outcomes, particularly in low-income communities, is to allow families to choose among multiple types of high-performing schools. In this scenario, not only will individual students benefit from having choices, but all schools will be incentivized to improve in order to attract students.
The second step in strategic philanthropy is turning the foundation’s theory of change into a set of grant investments. The central challenge for program staff at a foundation employing the principles of strategic philanthropy is identifying potential grant partners who share the foundation’s vision and then empowering those partners to use their expertise in implementing projects and programs that achieve the foundation’s goals. Once those partners have been found, grant agreements are built around clear, rigorous, and objectively verifiable performance metrics. Establishing an unambiguous, shared vision of success on the front end, before the grant is made, enables everyone—foundation, grantee, and evaluator—to know what the goals are, how they will be achieved, and how effectiveness will be judged.
Once partners have been found and performance metrics established for the work they will do, the third step is using the results of those metrics and other sources of evidence to determine how much progress has been made toward meeting goals. This analysis should occur at the grant level, where grantees assess progress against the targets set in their grant agreements; it should also be conducted at higher, aggregate levels, such as the foundation’s performance against a set of indicators it has selected to benchmark its own progress toward larger-scale impacts.
The fourth step brings the process full circle by putting the evidence of grantee and foundation performance in service of better decision-making. A critical aspect of strategic philanthropy is having processes that integrate evaluation findings into continuous organizational learning and adjustments to strategy. Collecting data and evaluating progress toward goals is important for maintaining diligence and transparency at the foundation. But to maximize impact, that information must also shape how the foundation revises its strategy and grant portfolio.
Like any other approach to giving, strategic philanthropy is not without its detractors. William Schambra of the Hudson Institute is among the most prominent critics; he argues that strategic philanthropy mistakes measurement for knowledge, fails to adequately account for the complexity of the social problems it seeks to alleviate, and creates disincentives for nonprofits to take on the most challenging problems for fear of having to report failure and potentially lose funding. He adds that by asking for a clarification of strategy, a definition of success, and a measurement approach, strategic philanthropists imply that they have greater knowledge and understanding of a problem than those who work on the front lines delivering services. Similarly, Pablo Eisenberg of Georgetown University refers to “philanthropic arrogance” when he argues that foundations engaged in strategic philanthropy insufficiently incorporate the views of community actors into their decision-making processes. In this view, the insularity of foundation staff and board members creates a grave power imbalance that limits effectiveness.
Although these criticisms often exaggerate and distort the principles of strategic philanthropy (for example, by insisting that it ignores the complexity of social problems), they are not entirely without merit. The critics are right that foundations need to be willing to acknowledge that they do not have all the answers and that there is no single best way to solve a problem. Similarly, it is important that grantees be treated as partners, not simply as vendors executing on a foundation’s behalf. But there is nothing inherently arrogant or disrespectful about a foundation wanting to maximize the good it creates through its charitable investments by using sound due diligence and evaluation approaches. To the contrary, by seeking out the most capable partners, and by engaging with them in thoughtful strategic planning processes that create shared, measurable visions of success, strategic philanthropists seek to use their resources to create the greatest possible social impact.
Responsible philanthropists do not need to measure everything, but they cannot maximize impact if they are unwilling to hold themselves and their grantees accountable for results. As Paul Brest and Hal Harvey explain, “This is not about trivializing grantmaking to achieve only readily quantifiable outcomes or trying to measure the unmeasurable. Albert Einstein got it right when he noted: ‘Not everything that counts can be counted, and not everything that can be counted, counts.’” Instead, it is about measuring what matters and using that information to inform partnerships that can create the greatest difference for the greatest number.
The Essential Role of Evaluation and Impact Measurement
Strategic philanthropy is built on evaluation and impact measurement using the best evidence available. The strategic philanthropist seeks to use objective data and analysis to the greatest extent possible, always aware of their limitations, as the primary means of informing decisions—about foundation strategy as well as about what grants to make, to whom, and where. As such, some key pieces must be in place when implementing a strategic philanthropy approach:
- An overall evaluation framework
- A grant performance measurement process
- A process for grantees to report performance to the foundation
- A process for disseminating evidence/results within the foundation
- A process for incorporating evidence/results into future grant decisions and strategic planning
An evaluation framework lays out the basic principles of a foundation’s approach to measuring the performance of grantees and itself. It does not provide step-by-step instructions for how to conduct evaluations, but rather provides a blueprint for how the foundation views the role of evaluation, along with descriptions of the types of evaluations and the levels at which they are conducted (individual grant, grant cluster, foundation strategy, etc.), evidentiary standards, and processes for communicating results.
One of the most challenging aspects of the strategic philanthropy approach is determining how to measure the performance of grantees. While there are a number of approaches that can be taken, the key is that the process must ultimately be driven by quantitative, objective evidence where possible. In addition, it is important to note that the development of performance measures should be a collaborative process between the foundation and the grantee, with appropriate flexibility to allow for adjustments during the term of the grant.
At the end of a grant term, the grantee needs to have a way of reporting its performance back to the foundation. Here, the goal of the strategic philanthropist is to find the right balance between having enough information to make good decisions, without requiring so much from the grantee that it creates an undue burden. Part of the solution to this issue is that quality generally obviates the need for quantity when it comes to the performance data submitted by a grantee. A handful of rigorously measured indicators is worth more than a dozen that are based on subjective or poorly designed methods.
Once the grantee has submitted its performance reports, the foundation must have a way of incorporating other sources of evidence, synthesizing and summarizing the results and lessons learned, and then communicating findings to the rest of the foundation. This typically takes the form of a suite of products created and disseminated by evaluation staff. Grant and foundation performance evaluations should have a standard and consistent format. In addition, there should be a process for sharing results with the appropriate staff and board members as needed.
Lastly, as we have noted before, the key to strategic philanthropy is the creation of a feedback loop for incorporating evaluation findings into the foundation’s decision-making processes. This can take many forms, including dedicated space or time during new grant proposal reviews to discuss past performance, providing for evaluation staff to make an annual report to the foundation’s board on performance against strategic goals, or establishing regular reviews of significant grant clusters for senior staff.
Our Evaluation Framework
The Walton Family Foundation has three focus areas: the K-12 Education Reform Focus Area, the Environment Focus Area, and the Home Region Focus Area. Our Evaluation Unit has professional evaluators assigned to work on each.
Over the past four years, we have built an evaluation framework based on the strategic philanthropy approach to giving. In building our framework, we reviewed the evaluation approaches of several major foundations that have been leaders in this field, including the Kellogg Foundation, the Bill & Melinda Gates Foundation, the Hewlett Foundation, the Packard Foundation, and the Annie E. Casey Foundation. We also worked with and learned from consultants, such as BTW Informing Change and the Center for Evaluation Innovation.
Our approach to evaluation, founded on best practices and tailored to fit the culture of our organization, is hierarchical. In short, the foundation’s overarching strategy for each focus area consists of a small number of key initiatives, and each of those initiatives is subdivided into strategies. Our goal is to conduct evaluations at each of those levels in each focus area—for individual grants, for each strategy, and for each initiative. In our K-12 education work, this can mean evaluating an individual grantee (such as the Louisiana Association of Public Charter Schools) that advocates for better charter school policies, evaluating higher-level indicators of improvements to charter school policies (by using the National Alliance for Public Charter Schools’ Charter School Law Ratings), and ultimately evaluating how those policies affect student enrollment and academic outcomes (e.g., by looking at improvements in state-reported graduation rates).
As part of our framework at WFF, we hold that evaluations should use quantitative targets and analyses to the greatest extent possible. That said, we understand that at times it is necessary, and even desirable, to include qualitative measures, and foundations should do so when quantification is not possible. At the grant level, we use a detailed performance measures process for each grantee that establishes outputs (what the grantee will do) and outcomes (how the world will look different as a result of the grantee’s actions). We have provided written and video guides to support applicants through the performance measures process. Grantees establish these measures at the beginning of the grant period, with WFF approval, and then report progress toward meeting them during and at the end of the grant.
As an example, the foundation provided support to the NewSchools Venture Fund’s Boston Charter School Replication Fund. The grant outputs were relatively uncomplicated: for example, NewSchools would provide funding for at least twelve new charter schools in Boston that met its eligibility criteria by March 2015. Among the outcomes established for the grant was that, across all charter high schools in networks supported by NewSchools, at least 90 percent of students who graduate would go on to attend a two- or four-year college, with at least two-thirds of those students enrolling in a four-year college.
We share the view of Fay Twersky and Karen Lindblom: “The essence of good evaluation involves some comparison—against expectations, over time, and across types of interventions, organizations, populations, or regions.” For us, grant-level performance measures establish those expectations. At the end of the grant period, evaluation staff collect the grantee reports and write a short (often two-page) evaluation that highlights and summarizes the grantee’s performance against key metrics. These evaluations can also be supplemented by third-party information or other sources of evidence as appropriate.
At the strategy and initiative levels, we collect data on a small number of key indicators. Generally, these indicators are high-level metrics that either directly measure the status of an outcome the foundation is seeking to influence or serve as the best available proxy. For example, the foundation is interested in creating high-quality K-12 schools in select cities across the country. One of our indicators is a measure of school quality, based on student achievement growth models, that we employ in the areas where the foundation focuses its education investments. All of these high-level metrics are collected in dashboards and reported to the board and staff annually. The dashboards contain several years of historical data as well as targets for the future.
The key to the proper selection of these dashboard metrics is their connection to the foundation’s theory of change. Ideally, a foundation will have metrics to track progress on each outcome in its theory, from those expected to occur soonest to those expected to take much longer to appear. However, this will not always be possible. In some cases, data simply will not be available, or the costs of collecting them will be too high. In others, heeding the caution of Julia Coffman and colleagues, we avoid dashboard metrics that “focus on population-level metrics that make too big a leap between an individual foundation’s grantmaking and its attributable impact.” As an example, we have learned that while funding relatively small economic development projects in a Mississippi Delta community, such as a business incubator program, may be intended to contribute to a long-term, initiative-level goal of reducing the poverty rate, it is not useful to track annual changes in that rate in our dashboards.
How We Use Evidence to Inform Grantmaking and Strategy
At WFF, we have worked hard to develop the kind of feedback loops necessary to ensure that information garnered from evaluations is incorporated into the decision-making processes of the foundation. At the grant level, every final grantee performance report is reviewed by the program officer and by evaluation staff. If a new grant has been proposed, an evaluation of past performance is generally conducted, written up in a standard template, and placed alongside the new grant proposal. In addition, the proposal must address findings from the evaluation and describe how, moving forward, strengths will be leveraged and weaknesses addressed.
A recent example at the foundation was a grant proposal from an organization that works to engage parents in advocating for higher-quality schools in one of the cities where the foundation focuses its K-12 investments. Performance issues were identified in the evaluation of the organization’s prior grant: even though most outputs around recruiting and training parents were met, the expected administrative policy changes had not been achieved. Using the findings from the grant evaluation, program staff worked with the organization during the application process for a new grant to craft a proposal that built on the organization’s strengths in mobilizing parents and sought to help it become more successful at turning its work with parents into concrete administrative policy improvements.
To inform focus area decisions about the investment portfolio and strategy, we produce analyses at several levels. For grantee clusters, we produce evaluations that are based on individual performance reports and supplemented by third-party data. For broader foundation strategies and initiatives, we collect and report data dashboards containing key metrics tied to the strategic plan. Importantly, focus areas set aside dedicated time and space for evaluation staff to present updates on progress against goals across portfolios and strategies.
More broadly, evaluation information is provided to our board each year so that it can adjust the foundation’s broader strategy if needed. At least once per year, evaluation staff present to the board on overall progress against key, high-level indicators connected to the goals established in the foundation’s strategic plan for each focus area. A series of dashboards containing metrics tied to every strategy and initiative, with general trends and progress to date, is provided in board meeting materials, along with brief summaries and highlights of results and lessons learned. These quantitative indicators are supplemented by qualitative contextual information and third-party research findings.
For example, we recently conducted focus groups in the Arkansas and Mississippi Delta to understand better why our quantitative targets were not showing the progress that the foundation had hoped for. We also researched and learned from reports on related philanthropic partnerships, such as the Hewlett Foundation’s community change work and the experiences described in The Aspen Institute’s Voices from the Field. Through this multi-method investigation, we learned that the comprehensive community change approach we and our partners were implementing was unlikely to be successful in these Delta communities. As a result, our program staff has recommended a dramatic shift in the approach to our Delta investments, such that community-level change is no longer the metric of success and investments in public safety and youth programs are replacing broader economic development efforts.
Lessons Learned
Creating a separate evaluation department was an important first step in building the foundation’s capacity to measure its impact. With input from program staff and the board, our executive director decided to create a small unit that would work alongside program staff while also maintaining independence from them. The members of the evaluation unit have formal evaluation training and expertise as well as content knowledge in the areas in which the foundation invests. We have learned that this structure is beneficial because it fosters both independence and collaboration.
We have adapted and improved our processes as we have implemented the framework, and we have learned some other important lessons.
- First, some grantees have a difficult time transitioning to a metrics-based grantmaking process. They can find the process frustrating, in some cases because they lack the internal capacity to set strong measures, and in others because they prefer the flexibility of grantmaking approaches that are not as focused on explicit and measurable performance reporting.
- Second, some program staff will also have difficulty transitioning to this new approach. In part, they may lack the skills to implement the framework well, and, in part, they may sympathize with those grantees that preferred the previous approach. In our experience, it has been essential for the evaluation unit staff to provide ongoing training and support to both grantees and program staff.
- Third, as a result of this more rigorous process, the foundation now has more objective evidence that some grantees are more effective than others. These assessments can create dissonance when a grantee’s actual performance is lower than what one would expect based on its reputation or promotional materials. The foundation must have a culture that engages with both positive and negative results in order to improve grantmaking.
Conclusion
It is for these reasons and more that instituting a formal evaluation framework is unlikely to be easy. But, if a foundation’s board and senior leadership have a firm commitment to organizational learning and to empowering evaluation staff, moving to a strategic philanthropy approach can help a foundation generate information that can improve investment decisions and impact.
References
Brest, P. (2012, Spring). A Decade of Outcome-Oriented Philanthropy. Stanford Social Innovation Review.
Coffman, J., Beer, T., Patrizi, P., & Heid Thompson, E. (2013). Benchmarking Evaluation in Foundations: Do We Know What We Are Doing? The Foundation Review.
Eisenberg, P. (2013, August). “Strategic Philanthropy” Shifts Too Much Power to Donors. Chronicle of Philanthropy.
Schambra, W. (2013, December). The Tyranny of Success: Nonprofits and Metrics. Nonprofit Quarterly.
Twersky, F., & Lindblom, K. (2012). Evaluation Principles and Practices: An Internal Working Paper. The William and Flora Hewlett Foundation.