When a federated network decides to prioritize quality over scale, it’s not hard to foresee that toes will get stepped on and feelings will get hurt. And when the dust settles, some affiliates will have dropped out of the network entirely, leaving fewer people able to get the services they so desperately need.
That’s an agonizing choice for any nonprofit, because we genuinely want to change the world, and that’s a big job that requires big numbers. Isn’t cutting back antithetical to the core mission? Why would any board or any funder support a strategy of doing less?
At Communities In Schools, we struggled with those questions as we went through a decade-long effort to emphasize quality over scale. As an organization that prevents dropouts and promotes graduation, we were always focused on results, but throughout the 1990s, size had become a proxy for effectiveness. Growth seemed to be a sign that we were doing something right; it said that the marketplace was validating our approach and asking us to do more.
Early in the 2000s, however, we started to wonder about other metrics for judging effectiveness. In particular, we began to ask questions about the individual child: Is she better off because of CIS? How do we know? What is the best set of measurements to determine her success?
We decided that we had to answer those questions before trying to grow any further. We were fully prepared to serve fewer students in the short term if it meant that all students would get the same high-quality services moving forward. After all, what did the CIS “brand” really mean if students, parents, principals, superintendents and school boards couldn’t count on consistent results across geographic boundaries?
In 2001, we brought in a firm called Caliber Associates—later acquired by ICF International—to conduct a longitudinal evaluation of our effectiveness. We were committed to becoming better and knew that a third-party validation was crucial to ground our learning in credible research. But we had two requirements: We wanted to be a partner in the study’s design, and we wanted to involve stakeholders throughout the process. That meant forming a National Evaluation Advisory Committee comprising not just the leaders in our national office, but also executive directors from the network, representatives from Atlantic Philanthropies, our foundation investor, and nationally recognized youth and education researchers.
We are a complex organization, and it took us two years to test and tinker with the evaluation design. We didn’t want a binary study, because a simple effective/ineffective verdict would not help us learn and improve. Instead, the evaluation was always intended to look for results along a continuum, so that we could pinpoint exactly where we were being effective and where we were falling short.
By the time we formally launched the evaluation in 2005, our affiliates had been hearing about it for years, yet some continued to treat the whole thing as a minor nuisance that would soon go away. But exactly the opposite happened: Instead of “going away,” the evaluation led us to double down on quality. It proved that our core model was uniquely effective in reducing dropout rates and improving graduation rates among low-income students. But it also showed that our far-flung network wasn’t always executing that model with fidelity, and outcomes suffered as a result.
So we went to the network and said, “Half of you, according to our study, are not creating any measurable value or helping kids to graduate.” You can imagine the reception we got to that particular message, but it wasn’t meant as an indictment. It was more about soul-searching. We had to be certain that we were making a consistent, measurable difference in the lives of at-risk students. The kids and their families deserved nothing less. Our funders and staffers and partners deserved nothing less.
The CIS National Evaluation study helped us pinpoint our strengths and weaknesses, so the next step was to magnify the former and minimize the latter. Our vehicle for doing that was a technical assistance and accreditation program known as TQS, or Total Quality System. We launched TQS in 2008, forming a pact with our 217 affiliates that they would have until July 2015 to hit ambitious new benchmarks for both programming and business operations. There were no illusions about the initial response: We went into this operation expecting substantial organ rejection, and some affiliates quickly let us know that they had other priorities and would be transitioning out of the CIS network. Most of these were small to mid-sized affiliates that ran a single initiative, such as mentoring or after-school programs, and they simply weren’t interested in shifting to the more holistic, evidence-based model of integrated student supports.
But other affiliates looked at the data, looked at our goals, and said, “We get it. We see the need. We want to improve, but we just don’t have the resources to hit these benchmarks. This amounts to an unfunded mandate, and that’s not fair.”
At the national office, we heard that message, and we couldn’t really argue. Our affiliates were already spending all their money on programming—and rightly so—yet we were saying that they simultaneously needed to invest in quality improvement efforts to ensure that their programming was effective and sustainable.
Fortunately, just as we had a foundation partner to help us through the initial research phase, we also had several investors come forward to help fund our strategy for delivering quality and efficiency at scale. Those partnerships were crucial: Over the course of TQS, we poured $43 million into the network for direct costs such as data collection, technology, and board training. When you add in national investments in staffing, branding, and policy work, it was easily a $50 million ticket.
We viewed these expenses as infrastructure investment in our network—and as any politician will tell you, there’s nothing sexy or popular about investing in infrastructure. Knowing that the same often holds true in the foundation world, we were incredibly grateful when the Robertson Foundation recognized the value of our efforts and put up the first $10 million to help drive quality throughout the network.
That vote of confidence helped to attract an unprecedented wave of investment from the Edna McConnell Clark Foundation, the Wallace Foundation, the Social Innovation Fund, and the AT&T Foundation, along with significant investments from the CIS National Board of Directors, among others. In a sector that often seems addicted to innovation, we were fortunate to find so many investors willing to take on the hard, unsexy work of improving implementation at a significant scale.
So, what kind of return were we able to offer those investors? After seven years of hard work, our entire legacy network is now 100 percent accredited under TQS standards, which are widely regarded as some of the most demanding in the industry. Admittedly, however, the network is not what it used to be. Nearly 40 percent of our affiliates that started the TQS journey dropped out somewhere along the way. Some had been a part of CIS for many years, so it was hard to see them go, but we made the decision that the CIS brand had to be about outcomes before organization. More than two dozen new affiliates have signed on since TQS began, but our overall footprint is still about 25 percent smaller than it was in 2008.
That kind of mass defection might seem like a fatal blow, but we found just the opposite to be true.
When we embarked upon TQS, our 217 affiliates were serving 1.24 million students. Seven years later, we have 165 affiliates serving nearly 1.5 million students. We sacrificed scale in terms of geographic reach, but not in terms of the number of students’ lives that we touch. The entire TQS process began with a focus on the individual child, so for us, the happiest result of all has been our ability to serve more students despite—or perhaps because of—our smaller footprint.
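For readers who want to check that arithmetic, here is a minimal back-of-envelope sketch in Python, using only the round figures cited above (the 2015 student count is reported as "nearly 1.5 million," so the per-affiliate averages are approximations, not exact counts):

```python
# Back-of-envelope arithmetic using the round figures cited in this article.
# The 2015 count is "nearly 1.5 million," so these averages are rough.
students_2008, affiliates_2008 = 1_240_000, 217
students_2015, affiliates_2015 = 1_500_000, 165

avg_2008 = students_2008 / affiliates_2008  # about 5,700 students per affiliate
avg_2015 = students_2015 / affiliates_2015  # about 9,100 students per affiliate

print(f"2008: {avg_2008:,.0f} students per affiliate")
print(f"2015: {avg_2015:,.0f} students per affiliate")
print(f"Change: {avg_2015 / avg_2008 - 1:+.0%}")  # roughly +59%
```

In other words, the average accredited affiliate now reaches roughly 60 percent more students than the average affiliate did before TQS.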
(As far as we know, no affiliate that left the CIS network ceased to exist entirely. Local affiliates were always constituted as independent nonprofits, with their own boards and funding streams. Departed affiliates continued to do the work they were always doing, so no children were left un-served. Though we would argue that locations without an accredited affiliate are now under-served, those program directors would probably disagree.)
We always believed that quality and efficiency would move in sync, and the results have borne that out. In the baseline longitudinal study, one of our important questions was, “How much does this cost?” We knew that you could throw a lot of money at a very small target and achieve guaranteed results, but that’s not a sustainable model for solving a very large social problem. So we insisted on efficiency as an important component of quality—pouring the least amount of money into the most effective strategies in order to achieve the best chance at scale.
Serving 20 percent more students with roughly 25 percent less organizational overhead suggests that we can offer the kind of return on investment that would be attractive to any funder, public or private. But improving our ROI required a willingness to shrink rather than grow, and that can be a tough sell in the nonprofit world.
If efficiency was a part of our quality definition, effectiveness was surely the larger part. From the beginning, we were driven to quantify exactly how much benefit at-risk youth could expect to derive from integrated student supports (otherwise known as ISS, the “secret sauce” of our particular community school model). As a dropout prevention and graduation promotion organization, we regard high school graduation as our ultimate metric, but we designed our baseline longitudinal study to measure additional variables as well, including attendance and grade-level promotion, particularly since we work across the entire K-12 space.
As we neared the end of the TQS process, we retained MDRC to conduct a randomized controlled trial looking at many of the same metrics we set out to measure back in 2005. The full results of this most recent third-party evaluation won’t be available until early next year, but we do know that TQS has made a large, measurable difference in outcomes. Our focus on quality implementation of integrated student supports has yielded 35 percent more graduates than we were on track to achieve prior to the adoption of TQS, and graduation rates, dropout rates, and attendance have all shown notable improvement. Meanwhile, according to an economic impact study from EMSI, our ROI increased by nearly 20 percent.
Of course, money is an important driver of outcomes, and we did worry early in the TQS process that a shrinking network would mean shrinking revenue. But as we got leaner and better at what we do, a funny thing happened: Improved quality attracted greater investment in our affiliates, which are constituted as independent, local 501(c)(3) organizations. Last year, combined network revenues totaled $199 million—essentially unchanged from 2008, when we launched TQS. But the network shrank by a quarter over that same time period, which means our remaining, high-performing affiliates are getting a bigger slice of the pie, on average.
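The "bigger slice of the pie" claim is easy to verify with the same kind of rough arithmetic. The sketch below assumes, purely for illustration, an even split of revenue across affiliates, which is not the case in practice:

```python
# Rough illustration of the "bigger slice of the pie" claim.
# Assumes an even split across affiliates purely for illustration;
# actual revenues vary widely from one affiliate to the next.
total_revenue = 199_000_000  # combined network revenue, roughly flat 2008-2015

slice_2008 = total_revenue / 217  # about $917,000 per affiliate
slice_2015 = total_revenue / 165  # about $1,206,000 per affiliate

print(f"Average slice grew {slice_2015 / slice_2008 - 1:+.0%}")  # roughly +32%
```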
We’ve thrown out a lot of numbers in making the case for TQS, but here’s the bottom line: More students are experiencing better outcomes from a leaner organization working with the same budget. We think that’s a pretty compelling argument for emphasizing quality as you scale.
Communities In Schools operates in the education space, but our experience with TQS could apply to any nonprofit that’s organized as a federation or network. It’s all about asking the right questions, choosing the right metrics, justifying your ROI, following the data wherever they might lead you—and finding philanthropic partners who value quality implementation above growth or innovation. Change is never easy, and relationships are notoriously tricky in any network, but sometimes doing the hard thing is the right thing for those who depend on your services.