
When a federated network decides to prioritize quality over scale, it’s not hard to foresee that toes will get stepped on and feelings will get hurt. And when the dust settles, some affiliates will have dropped out of the network entirely, leaving fewer recipients able to get the services they so desperately need.

That’s an agonizing choice for any nonprofit, because we genuinely want to change the world, and that’s a big job that requires big numbers. Isn’t cutting back antithetical to the core mission? Why would any board or any funder support a strategy of doing less?

At Communities In Schools, we struggled with those questions as we went through a decade-long effort to emphasize quality over scale. As an organization that prevents dropouts and promotes graduation, we were always focused on results, but throughout the 1990s, size had become a proxy for effectiveness. Growth seemed to be a sign that we were doing something right; it said that the marketplace was validating our approach and asking us to do more.

Early in the 2000s, however, we started to wonder about other metrics for judging effectiveness. In particular, we began to ask questions about the individual child: Is she better off because of CIS? How do we know? What is the best set of measurements to determine her success?

We decided that we had to answer those questions before trying to grow any further. We were fully prepared to serve fewer students in the short term if it meant that all students would get the same high-quality services moving forward. After all, what did the CIS “brand” really mean if students, parents, principals, superintendents and school boards couldn’t count on consistent results across geographic boundaries?

In 2001, we brought in a firm called Caliber Associates—later acquired by ICF International—to conduct a longitudinal evaluation of our effectiveness. We were committed to becoming better and knew that a third-party validation was crucial to ground our learning in credible research. But we had two requirements: We wanted to be a partner in the study’s design, and we wanted to involve stakeholders throughout the process. That meant forming a National Evaluation Advisory Committee comprising not just the leaders in our national office, but also executive directors from the network, representatives from Atlantic Philanthropies, our foundation investor, and nationally recognized youth and education researchers.

We are a complex organization, and it took us two years to test and tinker with the evaluation design. We didn’t want a study that was binary, because a simple effective/ineffective verdict would not help us learn and improve. Instead, the evaluation was always intended to look for results along a continuum so that we could pinpoint exactly where we were being effective and where we were being less so.

By the time we formally launched the evaluation in 2005, our affiliates had been hearing about it for years, yet some continued to treat the whole thing as a minor nuisance that would soon go away. But exactly the opposite happened: Instead of “going away,” the evaluation led us to double down on quality. It proved that our core model was uniquely effective in reducing dropout rates and improving graduation rates among low-income students. But it also showed that our far-flung network wasn’t always executing that model with fidelity, and outcomes suffered as a result.

So we went to the network and said, “Half of you, according to our study, are not creating any measurable value or helping kids to graduate.” You can imagine the reception we got to that particular message, but it wasn’t meant as an indictment. It was more about soul-searching. We had to be certain that we were making a consistent, measurable difference in the lives of at-risk students. The kids and their families deserved nothing less. Our funders and staffers and partners deserved nothing less.

The CIS National Evaluation study helped us pinpoint our strengths and weaknesses, so the next step was to magnify the former and minimize the latter. Our vehicle for doing that was a technical assistance and accreditation program known as TQS, or Total Quality System. We launched TQS in 2008, forming a pact with our 217 affiliates that they would have until July 2015 to hit ambitious new benchmarks for both programming and business operations. There were no illusions about the initial response: We went into this operation expecting substantial organ rejection, and some affiliates quickly let us know that they had other priorities and would be transitioning out of the CIS network. Most of these were small- to mid-sized affiliates that ran a single initiative such as mentoring or after-school programs, and they simply weren’t interested in shifting to the more holistic and evidence-based model of integrated student supports.

But other affiliates looked at the data, looked at our goals, and said, “We get it. We see the need. We want to improve, but we just don’t have the resources to hit these benchmarks. This amounts to an unfunded mandate, and that’s not fair.”

At the national office, we heard that message, and we couldn’t really argue. Our affiliates were already spending all their money on programming—and rightly so—yet we were saying that they simultaneously needed to invest in quality improvement efforts to ensure that their programming was effective and sustainable.

Fortunately, just as we had a foundation partner to help us through the initial research phase, we also had several investors come forward to help fund our strategy for delivering quality and efficiency at scale. Those partnerships were crucial: Over the course of TQS, we poured $43 million into the network for direct costs such as data collection, technology, and board training. When you add in national investments for staffing, branding, and policy work, it was easily a $50 million ticket.

We viewed these expenses as infrastructure investment in our network—and as any politician will tell you, there’s nothing sexy or popular about investing in infrastructure. Knowing that the same often holds true in the foundation world, we were incredibly grateful when the Robertson Foundation recognized the value of our efforts and put up the first $10 million to help drive quality throughout the network.

That vote of confidence helped to attract an unprecedented wave of investment from The Edna McConnell Clark Foundation, the Wallace Foundation, The Social Innovation Fund, and The AT&T Foundation, along with significant investments by the CIS National Board of Directors, among others. In a sector