This article is from the Nonprofit Quarterly’s fall 2017 edition, “The Changing Skyline of U.S. Giving.”

The MacArthur Foundation’s 100&Change competition began as an experiment in openness, in response to criticisms that the philanthropic sector is too insular, not sufficiently focused on impact, and too risk averse. Instead of following an introspective process, in which we would settle on an issue or problem as the focus and then design a strategy, we decided to issue a public, open call: “Tell us what problems $100 million can solve, and how.”

We proposed a $100 million grant—large by any standards—to be awarded by a competitive process and to be used over a compressed period, because we believe that there are some categories of problems that can be solved if they receive this kind of focused attention and resources at a scale commensurate with need. Conversely, there are some problems where that may be less likely to work. (Different problems require different approaches.) We see the value in a diversified portfolio of grantmaking—responsive, strategic, and even “speculative” (the MacArthur Fellows Program invests specifically in individual potential). We also see the value in a diversified portfolio of risk. A single grant of $100 million is admittedly a very risky proposition—but, as our president Julia Stasch has said, “philanthropy is best positioned to provide society’s ‘risk capital’.”1

But we wanted to take a risk that was carefully informed and respectful of the large investment. MacArthur spent two years designing 100&Change. We researched and investigated different competition models. We grappled with tough choices around the structure and are still learning what worked well and what could stand to be improved. Those challenges included

  • how to balance risk and evidence;
  • how to evaluate diverse proposals;
  • how to create a value proposition for all participants;
  • how best to ensure engagement with communities of interest—those that stand to benefit and those that stand to lose; and
  • how to curate content for other funders interested in supporting proposals.

As we narrow the field of semifinalists to as many as five finalists this fall and name a winner in the following months, we want to share the many lessons we have learned through our approach to giving away $100 million, and we want to share the data we have gathered—a rich repository of creative, thoughtful, and impactful ideas. (Editors’ Note: Four finalists have since been chosen; you can read about them here.)

Balancing Risks and Evidence

Our intentions were clear from the start: we wanted to solve a problem. And more than that: we wanted to inspire the broader public to believe change can happen and solutions to major challenges are possible, despite the current political and social climate.2

We started by investigating different models. We looked at a point solution prize, where a specific goal or target is defined and a monetary prize is offered to those who best achieve it. We considered challenges, where a problem is defined and support is offered to those seeking a solution. Both approaches would have required us to define a specific problem we wanted to solve, undermining our effort not to impose our own views about which problems are most compelling; and both presume that the solution to the problem is unknown. We believe that there are problems whose solutions are known but that simply lack the money needed to put those solutions into effect.

In philanthropy, there is a tendency to want to be the first to fund an idea, project, or breakthrough innovation. MacArthur was not seeking to occupy that space. We perceived a gap in the philanthropic field: a need for funding to take tested ideas to scale. We saw 100&Change as a way to help address that gap.

By the time the application period closed for 100&Change, in October 2016, we had received 1,904 applications. MacArthur staff reviewed each submission to ensure it complied with the application requirements.3 Although we believed at the time that we had communicated our eligibility criteria clearly, we discovered that some criteria needed clearer description.

For example, even though we had described 100&Change as a competition for a $100 million grant, we received 463 proposals for projects with budgets well below $100 million. During the next round of 100&Change we will state, unequivocally, that we are looking for $100 million projects.

We opened the competition to for-profit organizations but should have provided more guidance regarding the concept of charitable purpose and the limitations on the use of grant funds to generate profit or other private benefits. Many of the for-profit entries were disqualified in administrative review for not meeting these requirements.

A Panel of Wise Heads

Our insistence on openness also constrained our choices about how to evaluate proposals. If we had limited ourselves to a specific domain of work, we could have employed a panel of specialists—a group of experts in that domain. However, it was impractical to convene multiple panels of experts across different fields in anticipation of what might be submitted to the competition. And, as our semifinalists illustrate, we received a diverse pool of submissions.4

Another option would have been a crowdsourcing model. There is wisdom in inviting people to propose which problems they would solve and having a crowd assess, through open voting, whether that problem is meaningful or compelling. But we did not want 100&Change to turn into a popularity contest, creating a competitive disadvantage for some proposals. We worried that open voting might favor emotional appeal over effectiveness.

We realized that crowds provide a way to take more risks, innovate, and think outside the box. We also understood that the wisdom of experts is important. So, that is how we landed on what we called “a panel of wise heads”—an evaluation panel of judges that included more than four hundred thinkers, visionaries, and experts in fields such as education, public health, impact investing, technology, the sciences, the arts, and human rights.

To remain open, we had to define selection criteria that were agnostic with respect to field of work. We arrived at four: meaningful, verifiable, feasible, and durable.5

The first, “meaningful,” was the goal of the competition: tackle a significant problem that matters. We knew going in that there were many problems $100 million could not solve, and we were comfortable with people addressing a slice of a problem—but it needed to be a compelling slice. Our intent was to define meaningfulness broadly; however, we probably should have been clearer. A solution did not need to have global impact or reach a large number of people to meet our standard of meaningfulness; a solution to a serious, devastating problem affecting a well-defined population or a single geography could also qualify. Yet our preliminary analysis suggests that our evaluation panel defined meaningfulness narrowly: of the two hundred top-scoring proposals, just four focused on a single local geography or population.

Evidence that a given proposal worked—had worked at least once, somewhere, and on some scale—was important to us. We wanted to mitigate the risk of picking a proposal that was completely untested or untried. Hence, our second evaluation criterion: “verifiable.” We required applicants to provide rigorous evidence that their proposed solution would effectively address the problem they identified. Compelling evidence could include

  • data from an external evaluation of a pilot project or an experimental study;
  • citations in peer-reviewed research indicating a strong scientific consensus; and
  • documentation of a detailed pathway from the proposed actions to specific outcomes.

Our third criterion was “feasible.” The criterion “verifiable” asked whether a solution would work if implemented. The criterion “feasible” asked whether the solution could be implemented by the team proposing it. Does the team have the right expertise, capacity, and skills to deliver the proposed solution? Do the budget and project plan line up with realistic costs and tasks? Are there political or other obstacles to successful implementation?

The last criterion, “durable,” was one of the most challenging for many participants. Because we were focused on solving a problem, we did not want the solution to be temporary. We wanted whatever we chose to have long-term impact.

We thought about durability in a few ways. The first is that $100 million can fix a problem forever: once the problem is fixed, there is no need to address it again. Among the eight semifinalists, The Carter Center’s proposal to eliminate river blindness in Nigeria is closest to this idea. The second is that $100 million may set up the infrastructure required so the ongoing marginal cost is very low and there is an identifiable revenue stream to cover it. An example: the Himalayan Cataract Project proposes creating a vision care infrastructure that will deliver care at low marginal cost into the future. And the third is that $100 million may allow you to unlock resources and identify others who will commit to funding the project over the long haul. Catholic Relief Services proposes to use the $100 million to demonstrate to both private philanthropy and governments the advantages of funding family-based care over institutional care for children.

We asked a few questions of applicants: If this is going to cost more than $100 million, how much more, and how do you plan to fund it? What are the long-term ongoing costs, and what is your plan to cover them? Many applicants either ignored the sustainability question or provided vague answers, making it a challenge for the judges to assess the durability criterion. Out of all the criteria scored by the judges, durability had the lowest median score.

While 100&Change was open to problems from any domain or field, the four evaluation criteria implicitly restricted the types of problems and solutions that would be competitive. Many of the submitted proposals addressed a significant problem with a solution likely to yield significant social benefits; yet some did not score highly on the 100&Change rubric because the project would require ongoing philanthropic dollars or lacked a persuasive body of evidence to prove it would work. These projects might benefit from a different kind of philanthropic investment.

The Value Proposition for Participants

The success of 100&Change depended on attracting high-quality participants. Although we did not ask participants to invest in implementation in advance of the grant, we did ask for significant investment in the application process. We asked for detailed descriptions of the project, financial statements, evaluation and learning plans, memoranda of understanding with all partners, and a ninety-second overview video. Organizations had four months to pull their applications together. We realize this is a significant ask, and during the next round of 100&Change we intend to provide potential applicants with more lead time to put their proposals together.

Recognizing that the time and other resource costs of the application process were not trivial, we wanted to create a value proposition for all participants. All applicants whose proposals were evaluated have received comments and feedback from judges. That feedback might help to strengthen the rejected proposals for future funding requests or even the next cycle of 100&Change, which we intend to repeat every three years. We have heard from some applicants who are already using the feedback to refine their proposals, potentially proceeding to implement their projects even without our funding.

Independent of specific feedback from judges, we’ve heard stories that the competition sparked conversations about what might be possible with a large amount of funding. At Arizona State University, 100&Change served as the impetus for new teams and partnerships to form and for existing teams to reach further and reimagine how an idea can scale and be transformative.6 At the University of Massachusetts Boston, the competition was the catalyst to think bigger and more boldly about its scope of impact.7 The university encouraged teams that submitted proposals “to develop, deepen, refine, and create our proposals collectively, with community partners.”8

Planning for Scale and Engaging with Communities of Interest

MacArthur is committed to making each of the eight semifinalists’ projects as strong as possible—providing support to help them refine and think through how they would expand, adapt, and sustain successful projects in a geographic space, over time, to reach a greater number of people. We enlisted the outside firm Management Systems International (MSI) to help the semifinalists address technical and organizational capacity challenges and demonstrate authentic engagement with communities of interest.9 We defined communities of interest as targeted beneficiaries, policy-makers, others who work in the same space, and those who stand to lose political power or influence, social status, economic resources, or demand for their products or services if the proposed solution is implemented. These engagements have taken many forms—blog posts, community meetings with potential beneficiaries, and live digital interactions such as Facebook Live and Reddit AMA sessions—and they have served multiple purposes. Semifinalists have revised strategies based on information learned through these engagements, identified new collaborators and partners, and attracted new resources—both financial and in kind.10

The Foundation also asked each semifinalist to address issues of equity and inclusion. We asked that each team describe how it would ensure inclusion of marginalized populations, recognizing that the definition of marginalized populations would depend on the specific context of the work. The Internet Archive, whose targeted beneficiaries are primarily based in the U.S. and Canada, responded to this question by emphasizing the curation of a digital collection as diverse as the population of readers through a transparent, inclusive selection process. HarvestPlus described efforts to include internally displaced persons in Nigeria and refugees in Uganda. We enlisted Mobility International USA and Access Living to provide specific advice on the inclusion of persons with disabilities (Access Living adapted a checklist for the 100&Change competition for each semifinalist to conduct a self-evaluation). We also required that each team explain how it would use gender analysis, including disaggregated data, in the planning, implementation, and evaluation phases of the project.

Curation and Promotion

When we launched 100&Change, we did not foresee that we would be creating a rich repository of creative, thoughtful, and impactful ideas. Yet, other funders did. Within weeks of the announcement, we started receiving requests to share proposals—and we are.

We developed an interactive map featuring the two hundred proposals that received the highest scores from our evaluation panel.11 It shows where the proposed projects take place and demonstrates their collective global reach. And we created a public searchable database with summaries of the nearly two thousand submitted projects, which embody big ideas of value to both the philanthropic community and the broader public.12

To be responsible stewards of this public good, we are making full proposals available to other funders who have expressed an interest in supporting a project. We are cultivating donors who might want to fund other proposals, including those that might benefit from smaller initial investments. We are also engaging the research community—which might glean valuable insights about nonprofits and for-profits—and academic and nonacademic institutions. And we have our own list of research questions to inform the next iteration of 100&Change:

  • What specific fields or organization types were at a disadvantage in the competition and why?
  • Are there patterns in the types of solutions proposed in specific fields?
  • What are the financial and capacity needs of the problem-solving community?

There are certainly many other exciting questions to explore, and we welcome research interest.

Interest in 100&Change has exceeded our expectations. It has become more than a competition to select a project to receive a $100 million grant. 100&Change is also a mechanism to canvass the globe for problems that require big solutions, and a platform for sharing those big ideas with the philanthropic community. We hope that 100&Change has inspired others to believe that change can happen and solutions are possible.


  1. Julia M. Stasch, “Taking Risk and Requiring Evidence,” 100&Change, MacArthur Foundation, January 6, 2017.
  2. Julia M. Stasch, “Solutions Are Possible: Post-Election Poll Indicates,” 100&Change, MacArthur Foundation, December 12, 2016.
  3. “Application: Propose a problem and its solution. Ideas can come from any field or sector,” 100&Change, 2016.
  4. See “Meet the Semi-finalists,” 100&Change, 2016.
  5. Cecilia A. Conrad, “A Panel of Wise Heads,” 100&Change, MacArthur Foundation, January 23, 2017; and see “Scoring Process: Understand the scoring,” 100&Change, 2016.
  6. Steven J. Tepper, “100&Change: A Clarion Call for Arizona State University’s Researchers,” 100&Change, MacArthur Foundation, May 17, 2017.
  7. Anne Douglass and Paul Kirshen, “Taking the Moonshots for Climate Justice and Early Childhood Education,” 100&Change, MacArthur Foundation, June 29, 2017.
  8. Ibid.
  9. “Scaling Up Development Innovations and Social Outcomes,” Management Systems International, 2017.
  10. Theresa Mkandawire, “To Be Human is to Collaborate,” 100&Change, MacArthur Foundation, July 13, 2017.
  11. “Meet the Top 200,” 100&Change, 2016.
  12. “Explore All Submissions: Be inspired to think big and bring about meaningful change,” 100&Change, 2016.