Social science has proven especially inept at offering solutions for the great problems of our time: hunger, violence, poverty, hatred. There is a pressing need to make headway on these challenges and to push the boundaries of social innovation. The very idea of making a major difference in the world ought to incorporate a commitment not only to bring about significant social change, but also to think deeply about, evaluate, and learn from social innovation as the idea and the process develop. However, because evaluation typically carries connotations of narrowly measuring predetermined outcomes achieved through a linear cause-effect intervention, we want to operationalize evaluative thinking in support of social innovation through an approach we call developmental evaluation. Developmental evaluation is designed to be congruent with, and to nurture, developmental, emergent, innovative, and transformative processes.

Helping people learn to think evaluatively can have a more enduring impact than the specific findings generated in any single evaluation. Findings have a very short ‘half-life’, to borrow a metaphor from the physical sciences: they deteriorate quickly as the world changes. In contrast, learning to think and act evaluatively can have an ongoing impact. For those actually involved, then, the experience of participating in an evaluation can have a lasting effect on how they think, on their openness to reality-testing, on how they view the things they do, and on their capacity to engage in innovative processes.

Not all forms of evaluation are helpful. Indeed, many forms of evaluation are the enemy of social innovation. This distinction is especially important at a time when funders are demanding accountability and touting the virtues of “evidence-based” or “science-based” practice. The proper purpose of evaluation is to help social innovators, who are often by definition ahead of the evidence and in front of the science, use tools like developmental evaluation to sustain their impact and disseminate what they are learning. A few specific contrasts between traditional and more developmental forms of evaluation are worth reviewing (see table on page 30).

Developmental Evaluation

Developmental evaluation refers to long-term, partnering relationships between evaluators and those engaged in innovative initiatives and development. Developmental evaluation processes include asking evaluative questions and gathering information to provide feedback and support developmental decision making and course corrections along the emergent path. The evaluator is part of a team whose members collaborate to conceptualize, design, and test new approaches in a long-term, ongoing process of continuous improvement, adaptation, and intentional change. The evaluator’s primary function on the team is to elucidate team discussions with evaluative questions, data, and logic, and to facilitate data-based assessments and decision making in the unfolding and developmental processes of innovation.

Adding a complexity perspective to developmental evaluation helps those involved in or leading innovative efforts incorporate rigorous evaluation into their dialogic and decision-making processes as a way of being mindful of, and monitoring, what is emerging. Such social innovators and change agents are committed to grounding their actions in the cold light of reality-testing.

Complexity-based developmental evaluation is decidedly not blame-oriented. Removing blame and judgment from evaluation frees sense and reason to be aimed at the light, the riddled light, for emergent realities are not clear, concrete, and certain. The research findings of Sutcliffe and Weber help explain why. In a Harvard Business Review article entitled “The High Cost of Accurate Knowledge” (2003), they examined the predominant belief in business that managers need accurate and abundant information to carry out their role, along with the contrary perspective that, since today’s complex information often isn’t precise anyway, it isn’t worth spending too much on data gathering and evaluation. By comparing different approaches to using data against variations in performance, they concluded that it is not the accuracy and abundance of information that matters most to executive effectiveness, but how that information is interpreted. After all, the role of senior managers isn’t just to make decisions; it is to set direction and motivate others in the face of ambiguities and conflicting demands. In the end, top executives must manage meaning as much as they manage information.

As a complexity-based developmental evaluation unfolds, social innovators observe where they are at a given moment and make adjustments based on dialogue about what’s possible and what’s desirable, though the criteria for what’s “desirable” may be quite situational and always subject to change.

Summative judgment about a stable and fixed program intervention is traditionally the ulti