October 2, 2018; Generocity
As every nonprofit organization has been told over and over again, collecting and communicating data about impact is absolutely critical. Donors, sponsors, partners, grantmakers, and government agencies all want to know if the nonprofit is making a difference. But as every nonprofit organization also knows, collecting that data and communicating data-driven evaluation are not easy, and they often add to the load of an already overloaded staff.
Stepping into that void is an initiative called ImpactED out of the University of Pennsylvania in Philadelphia. Though the group started out by looking at and promoting metrics for evaluation in schools, it has broadened its scope to include social impact and nonprofit organizations more broadly. In particular, it has partnered with the William Penn Foundation on a new capacity-building initiative called the Social Impact Collaborative (SIC).
An article in Generocity by Julie Zeglan describes how the program is evolving as its second cohort launches. The goal of the project is to help nonprofits that receive funding from the foundation develop and internalize methods to collect data and use it effectively. The first cohort finished a year of study, and ImpactED has testimonials indicating at least some positive results.
However, the second cohort required some changes. Participants in the first cohort found it difficult to implement the lessons learned across the whole organization, and if a member of the team left, the air came out of the balloon and the effort was hard to sustain. To address this, the program has been extended to a second year; teams have grown from two members to three and must include a senior-level staff member; and sustainability is woven into all of the discussions rather than added as a standalone, remotely offered learning opportunity.
As it is now designed, the program includes a mixture of learning opportunities in the form of daylong workshops, technical assistance, and coaching. Most of the learning opportunities fall in the first year; technical assistance is spread across both years, and the second year focuses on assistance and coaching in making evaluation organization-wide and sustainable.
The William Penn Foundation, chief sponsor of the SIC, is a Philadelphia-based foundation that focuses its giving on education, the arts, and protecting the Delaware River watershed. The organizations participating in the second cohort of the SIC program include groups in the arts (the Village of Arts and Humanities and Fleisher Art Memorial, for example) and neighborhood revitalization (including Community Design and Cooper's Ferry Collaborative). One organization provides shelter for families that are homeless, which seems slightly outside the foundation's wheelhouse of giving. The organizations' budgets range from $586,000 on the low end to over $40 million on the high end, with most in the $1.5 million to $5 million range.
Another article in Generocity by ImpactED’s founder shares lessons learned about data collection and evaluation from the first cohort group, and these are valuable for every nonprofit:
- Make evaluation purposeful. Yes, you will have to gather the data your funders want, but make sure to also collect information that is meaningful to your goals and helps guide decisions about strategic directions.
- Make evaluation feasible. If the process of collecting data is complex and cumbersome, it will consume too much time and too many resources, and it probably won't be sustainable. Develop methods that you can actually carry out.
- Make evaluation sustainable. Data for data’s sake is not helpful. It needs to be used in the context of an organization-wide culture of evaluation and strategic thinking, and leadership that is not afraid of what the results of the evaluation might be.
It is encouraging to find a funder that is putting its money where its mouth is and helping nonprofits develop the capacity to generate the information the funder wants. The Pew Fund for Health and Human Services is investing in a similar initiative, also led by ImpactED.
This seems to be all about collecting empirical data. But is that always the best, only, or even a meaningful metric? We wonder, for example, if the true impact of a leadership development initiative or of an arts program can be measured empirically. Counting how many people attend an arts program—or, to use the flippant phrase, “butts in seats”—is a very strong empirical measure. But it does not measure the impact on the people in those seats, how they have been affected by the art. It will be very interesting to see what the arts organizations in SIC’s second cohort group come up with.—Rob Meiksins