Carol Upshur accurately describes the five major types of evaluation–responsive/clarifying; accountable/reporting; goal/process-oriented; program outcome/impact; and experimental. I suggest that each of these activities has a strong interrelationship with the others and that the glue that binds them is organizational mission. This may seem obvious to some nonprofit leaders, but in the evaluation literature, including publications tailored specifically for nonprofits, the central question of whether organizations are accomplishing their missions often remains unasked.
How can this be? As we know, mission is the central organizing purpose for a nonprofit’s existence–it is the bottom line toward which everything else should be focused. As the nonprofit sector is in the early or formative stages of instituting evaluative practices, I suggest that we focus firmly and relentlessly on reframing evaluation as an ongoing activity that documents the cumulative effects of agency efforts toward mission accomplishment. In this article, then, I will build on the theme of mission-based evaluation.
Mission-based evaluation is predicated on the premise that a nonprofit organization must ultimately be held accountable for the effective accomplishment of its mission. An organization is granted nonprofit status based upon the purpose for which it exists; it is this public purpose that gives the nonprofit organization legitimacy. Yet because the "market" in which nonprofits exist often means they are funded by one set of entities while being organized for the benefit of others, the issue of who the organization must answer to, and for what it must answer, can intrude on the organization's focus in pernicious ways. Many human service agencies, for instance, have multiple contracts with public funding sources that want programs to be run on their behalf by community-based nonprofits. In these instances, evaluation focuses on the standards of program provision under the individual contracts. Public funders seem to care little about whether the organization's mission is being forwarded by the provision of the program, effective or not. Somewhat similarly, many private foundations will fund individual programs, often providing evaluation money only when they are interested in testing or promoting a program model. When the organization is the program model, this works out fine; but organizations made up of multiple programs suffer under this paradigm.
I contend that funding organizations that depend upon healthy nonprofits to implement their own missions should act as responsible partners and invest in the public mission of the organization, thus encouraging nonprofit accountability to the mission. Determination of accountability should be based on whether the organization's mission is related to its public purpose, whether its programming addresses this purpose, and whether its programming has an impact on the problem(s) identified in the mission. Evaluation should demonstrate:
- The public benefit of the organization’s purpose.
- The appropriateness of the direction the organization is taking toward the fulfillment of its mission.
- The significance of the actions' impact on accomplishing the mission.
The first step toward mission-based evaluation is to clarify and recommit to the mission. The mission should always clearly state the end results the organization intends to accomplish. A well-constructed mission statement contains four elements:
- The purpose defines the problem(s) that justify the organization’s existence and the target group that the problem(s) affects.
- The service domain defines the activities the organization engages in to address the problem. It may also define geographic or service span.
- The results define the ultimate goal of the organization’s activities–what will need to occur for its mission to be accomplished. Once this goal is attained, the organization no longer has a reason to exist.
- The values base guides the practice. It provides a mechanism for determining how the work will be accomplished and is often closely related to the organizational goal.
To illustrate, consider the mission of the Women’s Institute for Housing and Economic Development: “To eliminate housing and economic deprivation for low-income women and their children by 1) providing technical assistance on housing and business development to community and women’s groups; 2) developing innovative affordable housing; and 3) helping women create jobs for themselves so that they can provide housing and economic security for themselves and their children.”
From this statement one can identify the purpose (to address the problem of housing and economic deprivation among low-income women and children); the service domain (providing technical assistance on housing, developing affordable housing, and helping women to create jobs); and the ultimate goal or results (low-income women will become able to provide economic security for themselves and their children). Also contained within this last statement is a shorthand statement of values, which is to promote self-empowerment and self-help. A mission-based evaluation determines the degree to which the organization is moving toward the accomplishment of this goal.
As nonprofit leaders know, the operational challenge is to convert mission into action, which is often accomplished by constructing a strategic plan. This type of yearly or multi-year planning process should incorporate rigorous evaluation, including the questioning of assumptions on which programs are based and a reevaluation of the agency's position vis-à-vis its goal, its primary beneficiaries, and its environment. Often, such plans lay out strategic priorities or goals, and each strategic goal should be evaluated. The type of evaluative information you're trying to gather–responsive, accountable, process, outcome–should be carefully thought through. These evaluative indicators can be tracked and assessed at various intervals to ensure movement toward goal completion in the current year, and will also feed useful information into the next year's planning process.
In the Women’s Institute’s example, a strategic goal is to help low-income women engage in entrepreneurial activities leading to self-employment. The agency provides a business development program to achieve this goal. Now examine the steps required to accomplish this goal–identify participants, provide education and training, identify capital, and develop business plans. Each task can be monitored against the work plan, and problems and solutions, as well as the outputs per quarter, can be documented.
These evaluation activities generally fall in the accountable or process category. This level of evaluation will clarify how the organization is accomplishing its tasks. Questions might include: Is the program reaching and attracting the target population? Are program participants remaining in the program and, if so, why? Which services are being well used, and which seem to be off target? Are there any missing links evident in program delivery? Are the relationships with program collaborators progressing as anticipated?
When the organization is ready to move up a level in its evaluation practices, it will also ask outcome questions: What were the results of the program? What is different as a result of the program’s existence? Did the outcomes the program achieved meet the promises made?
Once an organization becomes proficient in outcome measures (and this is no easy task) it should take on impact questions. Impact is the outcome that can be directly attributed to the program: Has anything changed for the people who participated in this program compared to people of similar circumstance who did not participate?
This set of questions can produce disturbing initial results. For example, a number of long-term studies of programs that made "perfect sense" to funders and to those who implemented them, and that attracted and retained participants, have shown negative correlations with long-term measures of success. Unfortunately, these types of studies are expensive and rarely funded in the nonprofit world unless the program is a nationally implemented model. They tend to require a large and diverse study sample that must be followed for an extensive period, and they are too often highly academic and divorced from practitioner and participant thinking. And in some cases, the social circumstances surrounding a particular issue are so complex that a negative correlation with originally devised success measures may actually indicate progress.
Such has been the case with programs dealing with violence against women. Heightened awareness has produced all types of outcomes, including increased reporting and a backlash in attitudes. If we were to base our determination of the success of rape crisis and battered women's programs on immediately reducing the reported incidence of these crimes, these highly successful programs would have been deemed failures.
This gets to the issue of horizontal integration of mission among organizations with the same mission intentions, programs, and/or constituents. Had rape crisis and battered women's programs been isolated rather than part of a national movement dedicated to discovering over time, in constant dialogue, all the subtleties of what works and what doesn't, and had they not been in intimate conversation with battered women and rape victims on those same questions, they may not have driven public policy and the practices of our major institutions as effectively as they have. Cross-discussion of program models and outcomes is invaluable, but it requires each program to be absolutely dedicated to real success, which in turn requires dialogue with the constituents who have first-hand experience of the results of the intervention.
The answers to these layers of questions inform us about how well an organization is progressing toward accomplishing its mission. It may need to redefine its strategies and tactics multiple times before it discovers what methods really work. This requires the constant adjustment and tailoring of activities due to changes in social issues, demographics, and the attainment of intermediate goals, but constant feedback