Editor’s Note: This article was originally published in the Winter 2010 edition of Leader to Leader. It has been republished here with permission.
The strategic plan is done. The objectives are clear. The time frame is set. The Board has done its work…until someone utters the word metrics. “How are we going to measure the outcomes?” comes the call. “Don’t we have to evaluate our Executive and our organization somehow?” And suddenly the “work of the Board” seems to blossom anew.
Strategic plans are most vulnerable not in their development, but in their implementation. And implementation often hinges on some measurable indication of progress. Without those metrics, the plan is a group of intentions always on the verge of greatness. Without hard data on which to anchor organizational outcomes, the organization can wobble off course without a clear warning signal.
But measurement is a daunting field. Decades of work in the sciences, engineering, theory building, and psychological testing have generated rules and models that require statistical sophistication and research to implement. Except for large national nonprofit groups, most nonprofit budgets just cannot afford such luxuries when scarce resources are needed to deliver services. Yet governmental agencies, accrediting bodies, foundations, and individual donors want some (even imperfect) attempts at assessments of outcomes. It is better to try to assess outcomes than to approach these organizations empty-handed.
Not being able to afford the time and money to develop excellent metrics, nonprofits often have to glean whatever value they can from using imperfect metrics. To be more precise about the term imperfect, we mean metrics that are anecdotal, subjective, interpretive, or qualitative. Or perhaps the metric relies on a small sample, uncontrolled situational factors, or cannot be precisely replicated. For most nonprofits, it is nevertheless a great leap forward from doing nothing to using even seriously flawed but reasonably relevant measures for their critical goals. Aside from technical requirements, the most critical requirement is that both the board evaluator and the operating manager agree that the process is reasonable and that the outcomes from it constitute fair and trustworthy information. With that goal in mind, we can explore how to use an imperfect metric well.
What Should Be Measured?
We see metrics at a fragile point conceptually. They are partly defined by the strategic objectives of the nonprofit organization; that is how you decide what to measure. It would be easier to measure organizational activities, but the nonprofit board’s proper focus is on outcomes, not organizational efforts. The frequent temptation, however, is to look into the operational level of the organization, where potential metrics abound. This is the tension between “what we should measure” and “what we can measure.”
But all the really important things seem almost impossible to measure. An organization can create reasonable indicators of finances, membership, clients served, attendance, and other operational measures. But how does it measure actual results in the world outside, such as enhanced quality of life, elevated artistic sensitivity, community commitment, successful advocacy, or any of the other honorable but inherently vague goals that not-for-profits frequently adopt?
Metrics are equally constrained by the technical requirements of good measurement. There are standards, we are told, for a “good measure.” So there is a substantial pressure to develop more precise metrics, regardless of whether they are strategic or operational in focus.
If the nonprofit pushes for technically correct metrics, it often spends months of tedious board debate and volunteer time, and, in our observation, frequently ends up with good measures of peripheral events, such as changes in attendance at the annual dinner. Decades of blindly implementing “Management by Objectives” have made such practices routine. As a result, nonprofits are left with precious little that tracks the relevant outcomes defined in the strategic plan. They focus on what they can measure instead of what they should measure.
Nonprofits need not choose between having “no measures” or the high cost of developing perfect measures. The better answer is to learn how to use imperfect but relevant metrics well.
But how could an imperfect metric be useful? Wouldn’t it contaminate the whole process? An example may help to clarify the benefits.
As part of its accreditation process, the prestigious American Assembly of Collegiate Schools of Business allows accredited schools to utilize local business executives to conduct mock interviews for assessing graduating students’ communications and presentation skills. The purpose is to obtain the executives’ estimates of the skill levels the students have acquired during their undergraduate years. The insights garnered from these sessions can be interpretive, subjective, and anecdotal, shaped by the experiences of the evaluator. Consequently, the comments reflect the viewpoint of each interviewer as much as the actual achievement of the interviewees. In short, it is a very imperfect measure.
Nonetheless, the process can allow the schools to:
- Obtain outside perspectives of the communications learning that students have acquired and better understand minimum business expectations.
- Improve communications between the faculty and the business community.
- Indicate to students that academic content has practical values beyond helping to pass tests.
- Provide insights for curriculum change and for faculty research.
In short, imperfect metrics used well can deliver real benefits! The three cases below provide more examples of how metrics that fail the rigorous standards of scientific measurement might still serve the needs of a nonprofit organization.
How Imperfect Metrics Can Provide Positive Outcomes
Families Primary is a nonprofit counseling service offering a range of services to improve mental health in the metropolitan community in which it is located. Services range from individual counseling to being legal conservators for elderly clients. The mission of the organization is to reduce mental health problems in the community. Local county health officials have noted a significant increase in inner-city mental health problems. However, the use of the agency’s services by these residents was very modest. The costs of conducting a reasonably comprehensive client attitude and mental health needs assessment study would be too high. Yet not measuring these key outcomes leaves the agency vulnerable to any number of criticisms.
The president/CEO was evaluated by a board assessment committee. Committee members took primary responsibility for establishing board-approved goals and evaluating specific outcomes (for example, finance, personnel, and fund development) of the operation. The person assigned to client development was asked to create outcome measures for improving the perception of Families Primary among inner-city residents. He and the CEO concluded that a board member needed to interview the executive directors of five inner-city community centers to obtain a macro-assessment of the agency’s image. Cost constraints prohibited developing more precise outcome metrics.
All five executive directors reported that local residents were “uncomfortable” with the agency’s staff.
Based on the interviews’ outcomes, the board member and the CEO agreed that in 12 months, the board member would revisit the five executive directors to assess changes in perceptions, based on corrective actions to be instituted by the CEO. It was the responsibility of the CEO to devise the corrective actions that were needed to drive change. Subsequently, a quantitative goal would be set for recruitment of inner-city clients.
The next year, the executive directors reported some improvement in perceptions. However, it took a second year of corrective actions before the board member and the CEO agreed it was time to establish quantitative outcomes to evaluate performance. Incremental achievement had taken place, driven by an imperfect outcome measure.
The metrics were admittedly qualitative, subjective, and vulnerable to unpredictable distortions; in short, they were imperfect. But they provided a focus on relevant issues. They linked players into productive conversations with each other about where to spend resources and when to revisit their progress. There was a process robust enough to benefit from poor measures.