“D.A.R.E. Chevy Camaro RS Police Car.” Credit: Kurt Clark

Editors’ Note: The following article was adapted from “Evaluation Research and Institutional Pressures: Challenges in Public-Nonprofit Contracting” by Frumkin and Reingold. The theoretical and political issues raised in this adaptation are explored in greater depth in the full paper, available online.

This story, reprinted from the Fall 2004 edition of the Nonprofit Quarterly, was first published online on September 21, 2004.

Public management and program evaluation should be natural allies in the quest for greater effectiveness. Armed with good data, government agencies could be in position to focus resources and attention on programs that have demonstrated the greatest impact. In too many cases, however, the link between solid evidence of program impact and government efforts to grow and replicate programs has been weak or missing.

Over the past three decades, connecting evaluation research to public decision-making has been transformed by the rise of service contracting, shifting responsibility for the delivery of public programs to non-governmental organizations. Many responsibilities have been pushed down to more local levels of government through devolution and out to nonprofit service providers through privatization. As this movement “down and out” has swept through government, the task of collecting and acting on evaluation data has changed. Rather than focus on government exclusively, researchers are examining new models of service delivery that increasingly use nonprofit organizations as prime vehicles for implementation.
Evaluation research that is focused on the performance of outside contractors can help public officials make difficult contracting decisions more soundly, learn about programs implemented in other areas, and avoid funding efforts that others have learned do not work.

In principle, this vision of how evaluation research can be used to improve public management and contracting appears sound and reasonable. Evidence that a particular initiative or program has positive effects—and “works”—can and should fuel replication and expansion, while evidence that an intervention does not achieve its intended objectives should lead to its abandonment. But does it?
In practice, public managers have had trouble replicating what works and, in at least a few notable cases examined here, have poured large amounts of public funds into programs that evaluation research has shown do not work or for which there is no evidence of either success or failure.

In investigating this topic, it might be tempting to single out and blame a few individual managers for exercising poor judgment or failing to keep abreast of developments. Instead, this article uses three examples to lay out a framework for understanding what institutional and political forces lead rational managers to spend scarce program funds on efforts that have a very low likelihood of success.

Drug Abuse Resistance Education (DARE)

DARE is the nation’s largest school-based drug prevention program, designed to prevent substance abuse by exposing fifth and sixth graders to the dangers of drug and alcohol use through a curriculum delivered by police officers, teachers, and parents. In DARE’s initial year (1983–84), 10 police officers taught the curriculum to approximately 8,000 students in 50 elementary schools. Over the past 17 years, DARE has been expanded and implemented in over 8,000 cities across the nation, reaching 35 million students a year in more than 80 percent of U.S. school districts.

DARE was developed as a pilot program by the Los Angeles Police Department (LAPD) after the emergence of crack cocaine. Given the strong tendency of crack users to engage in criminal activity to support their habit, the LAPD began searching for strategies that would prevent drug use and subsequent criminal activity. DARE’s curriculum is designed to teach students to recognize pressures to use drugs from peers and from the media, teach students the skills to resist peer inducements to use drugs, enhance students’ self-esteem, teach positive alternatives to substance use, and increase students’ interpersonal communication and decision-making skills. Across the nation, DARE’s classroom material is standardized in 17 hour-long weekly sessions and does not vary substantially from school to school. DARE police officers, assisted by teachers, are primarily responsible for delivering the instruction: presenting facts, leading group discussions, and guiding role-playing and workbook exercises.

DARE grew rapidly after the Drug-Free Schools and Communities Act (DFSCA) of 1986 allocated $500 million per year to states, schools, and communities for comprehensive drug prevention programs. Eligibility required a “comprehensive” drug prevention program, and many school systems turned to DARE to meet these eligibility criteria. It is important to note that local support for DARE occurred at the same time this program was endorsed by numerous federal government agencies involved in the support of schools and the war on drugs, including the Department of the Interior, the Bureau of Indian Affairs, the National Park Service, and the Department of Defense. These government endorsements created immense coercive pressure on local school districts to adopt the DARE program.

Although DARE had only been in operation for a short time, it was quickly labeled a model in the field of drug-use prevention, even though there was no valid and reliable evidence to suggest this perception was accurate. Local officials interested in accessing federal money were quick to identify existing programs that would allow them to access additional resources. DARE was visible, well marketed, and tacitly endorsed by “experts” in the field and government agencies. Moreover, DARE soon became identified as an important component of the war on drugs and the “Just Say No” campaign. This linkage was fostered as a constellation of political and civic leaders, as well as educational and criminal justice professionals, sought strategies that would symbolically endorse the drug war while aligning these various spheres of public life with this campaign. In short, support for DARE became support for the drug war, and conversely opposition to DARE represented opposition—or at least lack of commitment—to the drug war. This intense normative pressure, combined with the strong arm of government endorsement, put DARE on a fast track for widespread diffusion. School districts and local police departments across the nation were quick to copy the Los Angeles model. During the early years, skepticism about the program’s impact was rare. Local officials acted on blind faith, unfolding a standardized DARE program in community after community—with little or no attention to whether drug use was even a problem in a particular community.

Today, DARE is centrally administered. DARE America, a not-for-profit organization, is responsible for implementing and managing the program at the national level. This organization is assisted by an advisory board of experts and advocates in the field of substance abuse prevention. DARE’s organizational infrastructure also includes state-level commissions that are responsible for coordinating and promoting substance abuse prevention programs within each state. Across the nation, DARE has received widespread public support by forging close partnerships between local schools, law enforcement, and the nonprofit sector. These partnerships have fostered a web of organizations and institutions across the nation around the issue of drug use prevention. For these actors, DARE provides financial resources, media attention, and legitimacy as allies in the war on drugs. In short, DARE and its administrative apparatus have embedded once disparate actors from various parts of the social structure into a complex network of organizations that have an interest in the continued operation of this program.

At the time DARE was replicated across the country, no valid empirical evidence existed that tested the program’s impact. In some ways, local political decision-makers may not have really cared whether DARE worked. It was more important to implement DARE, sending a signal to government funding agencies, parents, and other interested parties that schools and their administrators were allies in the drug war. Demonstrating this shared normative belief by replicating DARE was perhaps more important to local political decision-makers than whether or not DARE actually reduced drug use. Since its initial implementation and widespread adoption across the nation, a substantial research literature has emerged assessing DARE’s impact. Overall, evaluations of DARE have consistently found the program to be ineffective at reducing drug use. Perhaps most troubling are recent findings that suggest DARE may increase drug use among participants (Rosenbaum et al., 1998). The study found that, by talking about the dangers of drug use, DARE may have inadvertently planted the idea of rebellious experimentation in the minds of some youth, glamorizing drug use by making it appear dangerous and forbidden.

The mounting evidence that DARE does not work has prompted a few police departments (Louisville, Kentucky; Boulder, Colorado; and Salem, Oregon) and city governments (Oakland, California; Omaha, Nebraska; Spokane, Washington; and Fayetteville, North Carolina) to withdraw support for this program. For some of these officials the decision was not easy; several alluded to the program’s “strong community support” (Sebastian, 1998).

However, the number of cities and schools to drop DARE remains very small, and DARE is still the largest drug prevention program in the nation. In a sign of the normative and symbolic pressures associated with this program, DARE officials have responded to the mounting evidence that their program does not work by attacking the motives of researchers, suggesting that their work is linked to a broader political agenda to legalize drugs, while embracing anecdotal testimony of participants, teachers, parents, and police officers who believe DARE is an effective program. This strategy worked until a number of high-profile advocates of the DARE program began publishing research documenting the program’s shortcomings.
However, the normative and coercive pressures to maintain the status quo are strong. To date, school corporations are not willing to terminate DARE programs for fear of being labeled advocates of drug legalization by parents, local law enforcement officials, DARE America, and state DARE commissions. This fear has been fostered by anti-drug and anti-crime advocacy organizations that have rallied in support of DARE, accusing any school corporation that drops the program of supporting drug legalization. Similarly, school corporations do not want to forgo the public financial resources that are made available to their organizations by participating in this program. While the total amount of new money made available to a school for a DARE program is small, continued participation in DARE is used as a signal by other public and nonprofit funding agencies when considering the allocation of additional resources. Together, these institutional pressures have prevented a coherent and systematic response by local school districts to the ever-widening scope of evaluation research findings that DARE is not effective.

Individual Development Accounts (IDAs)

IDAs are a policy tool designed to help low-income workers and households build assets and achieve upward social and economic mobility. IDAs are matched savings accounts, similar to Individual Retirement Accounts but typically restricted to those below the federal poverty line, for post-secondary education and training, business capitalization, or a down payment on the purchase of a home. For many programs, a three-to-one match is made—for every dollar saved by an IDA participant, a match of three dollars is deposited.
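The match arithmetic described above is straightforward; a minimal sketch in Python (the function name, figures, and the assumption of an uncapped match are illustrative only, not drawn from any actual IDA program rules, which vary by provider and often cap the matchable amount):

```python
def ida_match(participant_savings: float, match_rate: int = 3) -> dict:
    """Compute the matched deposit and total available for a qualified use
    (education, business capitalization, or a home down payment).

    match_rate=3 reflects the three-to-one match described in the text;
    actual match rates and caps differ from program to program.
    """
    match = participant_savings * match_rate
    return {
        "saved": participant_savings,
        "match": match,
        "total": participant_savings + match,
    }

# A participant who saves $500 toward a down payment would, under a
# three-to-one match, have $2,000 available in total.
result = ida_match(500.0)
print(result)  # {'saved': 500.0, 'match': 1500.0, 'total': 2000.0}
```

The subsidy is thus heavily leveraged: each dollar of participant savings unlocks three dollars of grant funds, which is why even small-scale IDA programs can make meaningful claims on a funder’s budget.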

IDAs were first conceived in the late 1980s and early 1990s as an alternative to public welfare programs that increase short-term income but have no impact on long-term wealth accumulation. Since wealth (not income) appears to be a significant factor in explaining variation in life-chances, IDAs were thought to be an effective means of encouraging savings and subsequent wealth creation by subsidizing the savings of the poor. The concept of IDAs is closely aligned with several aspects of the American belief system that emphasize savings, education, homeownership, and entrepreneurship. IDAs possess an appealing moral dimension in that they are perceived to encourage participants to engage in behavior that is in keeping with cherished ideals of economic self-sufficiency and personal uplift via hard work. Advocates of IDAs emphasize the importance of this policy tool as a device that can make the “American dream” a reality for all citizens, including the poorest of the poor. Hence, the normative pressures on policy-makers, including important philanthropic and nonprofit sector actors, to support IDAs are powerful. In essence, support for this policy tool is equivalent to support of the American dream that everyone has a chance to be a success.

Initial adoption of IDAs began piecemeal after small-scale IDA programs operated by nonprofit social service providers were able to convince large philanthropic organizations, such as the Ford Foundation, to begin funding, with part of the grant funds used to match participants’ savings. As the initial IDA programs were publicized, other philanthropic organizations encouraged grant applicants to propose similar initiatives. Within a short period of time, many of the nation’s 3,500 community development corporations and countless other social service agencies were responding to requests for proposals and new grant initiatives that sought to replicate existing IDA programs.
The widespread adoption of IDAs occurred without much support or pressure on nonprofits from the public sector. While a few small government programs were implemented that borrowed on the IDA concept, they were the exception, not the rule, and did not represent anything near full-scale IDA implementation.

In more recent years, the federal government has given states more flexibility to utilize federal welfare dollars to implement IDA programs. In particular, the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 authorizes states to create IDA programs with Temporary Assistance for Needy Families (TANF) block grant funds and to disregard all money saved in IDAs in determining eligibility for all means-tested government assistance. All deposits into the IDAs are limited to earned income. Twenty-four states currently provide IDAs for TANF recipients in their state plans; however, states are neither required to use TANF funds to create IDA programs nor penalized for failing to do so. In short, the coercive pressures to adopt IDAs are relatively low.

The decision by these states (as well as other non-governmental organizations) to embrace IDAs is not the result of independent, rigorous social science evaluations that have shown this policy tool to be effective. After all, the evaluation literature on IDAs is almost nonexistent. A few small-scale assessments of specific IDA initiatives have been done, but none have utilized rigorous evaluation methods, such as random assignment or comparison groups. The absence of rigorous evaluation research on IDAs is, in part, by design. Foundations that fund IDA evaluations have at times been protective of these programs and held back data. In this independent evaluation vacuum, IDAs have enjoyed a period of accelerated replication and growth.


YouthBuild

YouthBuild is a youth and community development program that offers job training, education, counseling, and leadership development to hard-to-employ youth and high school dropouts between 16 and 24 years old, through the construction and rehabilitation of affordable housing in their own communities. Participants are trained in the construction trades on a 12-month cycle, and the affordable housing developed through their efforts is typically owned and managed by community-based organizations, including community development corporations.

YouthBuild began in 1978 as a part of the Youth Action Program—a nonprofit, community-based organization supported by municipal and philanthropic grants in East Harlem. Initially, this initiative was designed to improve the lives of hard-to-employ youth by training them with basic life skills and job skills through the construction of affordable housing in poor Harlem neighborhoods.

While YouthBuild was able to generate local interest and support for its initial sites, it was unable to distinguish itself from the large number of youth and community development programs operating across the nation. For many government and philanthropic funding agencies, YouthBuild was perceived as one program among countless others with a similar mission. Funding agencies interested in supporting youth and community development programs could express their support for such efforts in a variety of ways—whether or not they supported YouthBuild. In order to improve the visibility and position of YouthBuild among the many youth and community programs across the nation, a coalition of local nonprofit organizations was formed in 1988 to pursue a strategy of national replication. This coalition sought to expand the program beyond New York City and to address its slow and episodic replication in other sites. These efforts eventually succeeded, aided by the power and persuasion that government funding carries.

YouthBuild experienced dramatic growth in the mid-1990s after Congress passed the Housing and Community Development Act of 1992, which set aside funds for programs like YouthBuild. Between 1993 and 1997, the U.S. Department of Housing and Urban Development awarded $158 million to fund YouthBuild programs, and the Corporation for National and Community Service selected YouthBuild USA as a national-direct grantee to develop YouthBuild AmeriCorps programs in six communities, including funds for education awards (scholarships) to YouthBuild graduates. YouthBuild was able to raise substantial financial support from the Ford Foundation, the Lilly Endowment, the Charles Stewart Mott Foundation, and the DeWitt Wallace–Reader’s Digest Fund. This fundraising success helped create 100 YouthBuild sites in 90 cities across 35 states, involving 3,500 participants.

The federal government’s endorsement of this program had a ripple effect. It spurred philanthropic investments while also prompting state and local governments interested in accessing these resources to import the YouthBuild program to their cities and communities. For communities with large numbers of hard-to-employ youth, governments were quick to embrace YouthBuild, given the general scarcity of public funds for programs to help this population.

YouthBuild providers are connected through the YouthBuild USA Affiliated Network—an association of YouthBuild providers that creates and monitors program design and outcome standards for YouthBuild programs. This network resides within YouthBuild USA, a nonprofit organization that provides technical assistance to YouthBuild sites and assists with the continued development and replication of the program.

Expansion of YouthBuild occurred without any rigorous independent research assessing its impact. In brief, it was scaled up to the national level with public funds even though there was no published evidence demonstrating the program worked, or that it represented an improvement over existing youth workforce development programs, such as the Job Corps. A subsequent evaluation followed 177 YouthBuild participants, measuring labor market and educational attainment outcomes, as well as behavior measures related to time management, leadership proclivity, and substance use. Seventeen percent dropped out of the program, while 38 percent of participants in the study went on to full- or part-time employment, school, or training. Of those who were employed after completing the YouthBuild program, 66 percent went into construction-related jobs at an average wage of $7.60 per hour. Of those who were employed in non-construction related jobs, the average wage was $6.80 per hour. Unfortunately, this evaluation did not use random assignment or a comparison group, making it impossible to determine whether these outcomes are the result of YouthBuild or whether they represent labor market outcomes that would have occurred without the YouthBuild intervention. Moreover, it is unclear whether these outcomes represent an improvement over other youth workforce development programs.

In spite of the limited evaluation research on YouthBuild, the federal government’s endorsement of this program has forced local officials to adopt the program if they want to expand or enhance their youth workforce development programs. As a result, YouthBuild remains an unproven program where local support is largely the result of federal government pressure to adopt this initiative.

Rethinking the Context of Public Sector Contracting

These three cases provide a glimpse of the variation in the type and extent of institutional pressure brought to bear on public managers. As summarized in the following table, each example varies by the extent of institutional pressure that was present at the time of adoption and subsequent expansion.

To carry out this delicate task, we draw on a branch of organization theory that has come to be known as neo-institutionalism (DiMaggio and Powell, 1991; Meyer and Rowan, 1991; Scott, 1991, 1998; Zucker, 1987)—an approach to organizations that has yet to penetrate very deeply into the literature of public administration. By applying this part of institutional theory and bringing it into contact with public administration theory, we hope both to expand the repertoire of explanations of public sector decision-making and to shed some light on the difficult question of why research and action can be decoupled. To be sure, the literature of public administration has advanced several explanations for ineffective administration. Bureaucratic inertia, budgetary politics, and other traditional administrative challenges may well impinge on a public manager’s ability to execute effectively. What we suggest here, however, is that it may be possible to step outside this literature into organizational sociology to locate a different and potentially useful framework for thinking about the issue of rationally decoupled contracting efforts.

Isomorphism—the tendency for organizations to display fewer unique features and increasingly adopt uniform structures—is the central concept organizational theorists use to describe these forces. Sociologists observing the behavior of organizations, including those in the nonprofit sector, identify three types of isomorphism at work:

Mimetic (or imitative) isomorphism, in which entities within the same industry model themselves on other organizations;

Normative isomorphism, in which the specialized knowledge and skills of professionals are backed by universities and spread by industry or professional associations and publications (such as this one!) that span many organizations; and

Coercive isomorphism, in which external bodies (government, funding sources, etc.) impose formal (or informal) conditions for support or approval.

Together, these three types of institutional pressure—normative, mimetic, and coercive—capture the range of forces that may influence the adoption and expansion of public programs that are managed and implemented by nonprofit organizations.

DARE represents an example in which all three dimensions of institutional pressure are high. Among local school administrators and law enforcement officials, a strong normative commitment to preventing drug use and subsequent crime was an important aspect of this program’s genesis and adoption in our nation’s school system. Other like-minded professionals from across the nation, with a shared concern for these issues, were quick to embrace DARE as a logical policy response. Moreover, schools and local law enforcement agencies were quick to adopt DARE so as to signal a shared belief with the public that drugs and drug use are bad. Reluctance to send such a signal might label a school or local law enforcement agency as a supporter of drug legalization.

These strong normative pressures were accompanied by high levels of coercive and mimetic institutional forces. The primary coercive pressure involved the passage of federal legislation in 1986 requiring schools to implement a comprehensive drug prevention program. Since schools were compelled by government to deliver these initiatives, many schools were faced with the uncertainty of not knowing whether substance abuse prevention programs worked or whether the type of program offered would produce different results. This level of uncertainty generated substantial mimetic pressure among local school officials to import an existing program. Many school officials simply turned to the most visible initiative at the time—DARE. Together, these high levels of institutional pressure surrounding the DARE program forced its widespread adoption even though there was no systematic evidence to suggest it was effective. Moreover, the persistence of these institutional pressures has made it almost impossible for this program to be dismantled even though very rigorous evaluations suggest it is ineffective.

Much like DARE, IDAs also evolved out of high normative and mimetic pressures; however, the nature of these pressures was slightly different. The push to adopt IDAs grew out of the desire among social service professionals to create a vehicle for assisting poor families to generate wealth. Administering in-cash income support programs was perceived as a necessary (but not sufficient) response to poverty, since such programs did not provide a basis for helping poor families become upwardly mobile. They prevented material hardship without offering a means to achieve self-sufficiency. In order to address this perceived deficiency, IDAs were slowly embraced among social service professionals.

The mimetic pressure surrounding IDAs was rooted in the common position of many social service agencies and other local community-based organizations with limited resources. These groups were looking for ways to at least symbolically send signals to clients that they had a policy or program that could help clients achieve upward mobility. Adopting these programs would also send signals to potential funders that these organizations were in the business of changing peoples’ lives rather than maintaining the status quo. Since IDAs could be implemented on a very small scale, often affecting only a few hand-picked participants in each location, many social service agencies and community-based organizations adopted this policy.

One substantial difference between DARE and IDAs is the level of coercive institutional pressure. At the moment, there is no federal law that requires states or localities to operate IDA programs. Since 1996, states have been allowed to use welfare block-grant money to pay for IDA programs, but there is no requirement to do so. Even this modest level of institutional pressure has resulted in fairly widespread replication of IDA programs, despite the absence of clear evaluation research demonstrating their success. This process of replication has occurred from pressures within social service organizations and several philanthropic funders but, unlike DARE, has not been forced on nonprofit managers and local government administrators by federal mandate.

In contrast, YouthBuild has expanded on an incremental basis, fueled largely by coercive pressures but without strong normative or mimetic pressures. At bottom, there was little that differentiated YouthBuild from the thousands of other similar workforce development programs for teenagers scattered across the nation. While normative pressures were present among nonprofit organizations and others involved in workforce development issues to fund programs that would help disadvantaged youth enter the labor market, there were no unique forces at play within these organizations to explain why YouthBuild would have gained an advantage. Similarly, mimetic pressures were also low. However, the coercive pressures imposed by the U.S. Department of Housing and Urban Development’s endorsement of this initiative (without any evidence that it worked) caused the program to grow incrementally across the nation.

In the end, institutional pressures played very different roles in shaping the implementation of the three programs examined here. In each instance, these pressures significantly shaped the pattern of implementation that was followed and set the context within which evaluation research was or was not used to inform contracting strategies. Of course, the idea of making a link between evaluation research and decision-making is appealing. It holds out the possibility that public management can actually improve over time as knowledge is translated into action. What we have argued here is that institutional pressures can and do shape the meaning and implications of evaluation research and make it difficult for public managers to establish a strong link between information and action. In some cases, like DARE, this has led to vast amounts of resources being expended on a program that almost all research indicates simply does not work. In other cases, like IDAs, program replication has proceeded largely ahead of the analysis of evaluation research.

While it is tempting to focus only on what public agencies do well and the programs they implement that work best, we believe that greater attention, both in the form of further empirical and theoretical work, needs to be directed at cases of failures, where decision-making breaks down and where programmatic results are disappointing. Only by shifting the focus from “what works” to “what does not work” is it likely that researchers will be able to develop useful diagnostic tools for managers that can help them improve in their work. A key starting point, we believe, is the greater application of institutional thinking to public administration. Too long separated from the best ideas in organization theory (Bozeman, 1987; Rainey, 1997), public administration theory can be enriched by further pursuing points of intersection with the broader literature on organizations. One insight that emerges at this point of contact is the need to focus on the overlooked challenge of managing the institutional pressures around public agencies—pressures that incline these agencies toward institutional isomorphism.

Beyond opening itself up to new theoretical inputs, public administration in general and contracting theory in particular need to move beyond the single-minded focus on the importance of information to good policy-making. Sound contracting involves not just having good evaluation research to guide action, but also an environment in which institutional pressures are minimized so that information can be processed and used meaningfully. Without focusing on ways of controlling the tendencies of public agencies toward isomorphism, even the best evaluation research will fail to inform decision-making.

Putting public managers into an environment with good evaluation data and amidst low institutional pressures requires a fair amount of work and vigilance, both in collecting data and shielding the decision-making process from professional peer pressure, the inclination toward mimesis, and the power of coercion from funders at higher levels of government. A critical first step in fostering this organizational space for public managers is creating and preserving high levels of independence and autonomy. Only when public managers are shielded to some extent from the pressures in the environment will they be in a position to make good use of evaluation research and make the link between data and decision-making. In this sense, the institutional approach to public administration requires a radical rethinking of the meaning and form that accountability and oversight should take in the public sector. It may just be that the only way to improve the link between information and action is to shield public managers from at least some of the external forces that pull and push them in their decision-making. Loosening networks and reducing control from higher levels of government will not be easy, but the payoff in terms of more grounded decision-making and wiser stewardship of public resources could be significant. If institutional theory were to force public administration to explore these issues more fully and systematically, its contribution to practice would be lasting and important indeed.