Social science has proven especially inept at offering solutions for the great problems of our time: hunger, violence, poverty, hatred. There is a pressing need to make headway on these large challenges and to push the boundaries of social innovation. The very possibility articulated in the idea of making a major difference in the world ought to incorporate a commitment not only to bring about significant social change, but also to think deeply about, evaluate, and learn from social innovation as the idea and process develop. However, because evaluation typically carries connotations of narrowly measuring predetermined outcomes achieved through a linear cause-and-effect intervention, we want to operationalize evaluative thinking in support of social innovation through an approach we call developmental evaluation. Developmental evaluation is designed to be congruent with and to nurture developmental, emergent, innovative, and transformative processes.
Helping people learn to think evaluatively can have a more enduring impact than the specific findings generated in any single evaluation. Findings have a very short "half-life," to use a physical science metaphor; they deteriorate quickly as the world changes. In contrast, learning to think and act evaluatively can have an ongoing impact. The experience of being involved in an evaluation, then, can have a lasting effect on how participants think, on their openness to reality-testing, on how they view the things they do, and on their capacity to engage in innovative processes.
Not all forms of evaluation are helpful. Indeed, many forms of evaluation are the enemy of social innovation. This distinction is especially important at a time when funders are demanding accountability and touting the virtues of "evidence-based" or "science-based" practice. The right purpose and goal of evaluation is to help social innovators, who are often by definition ahead of the evidence and in front of the science, use tools like developmental evaluation to have ongoing impact and to disseminate what they are learning. A few specific contrasts between traditional and more developmental forms of evaluation are worth reviewing.
Developmental Evaluation
Developmental evaluation refers to long-term, partnering relationships between evaluators and those engaged in innovative initiatives and development. Developmental evaluation processes include asking evaluative questions and gathering information to provide feedback and to support developmental decision making and course corrections along the emergent path. The evaluator is part of a team whose members collaborate to conceptualize, design, and test new approaches in a long-term, ongoing process of continuous improvement, adaptation, and intentional change. The evaluator's primary function on the team is to elucidate team discussions with evaluative questions, data, and logic, and to facilitate data-based assessments and decision making in the unfolding and developmental processes of innovation.
Adding a complexity perspective to developmental evaluation helps those involved in or leading innovative efforts incorporate rigorous evaluation into their dialogic and decision-making processes as a way of being mindful about and monitoring what is emerging. Such social innovators and change agents are committed to grounding their actions in the cold light of reality-testing.
Complexity-based, developmental evaluation is decidedly not blame-oriented. Removing blame and judgment from evaluation frees sense and reason to be aimed at the light—the riddled light—for emergent realities are not clear, concrete, and certain. The research findings of Sutcliffe and Weber help explain why. In a Harvard Business Review article entitled "The High Cost of Accurate Knowledge" (2003), they examined the predominant belief in business that managers need accurate and abundant information to carry out their role. They also examined the contrary perspective that, since today's complex information often isn't precise anyway, it's not worth spending too much on data gathering and evaluation. Comparing different approaches to using data against variations in performance, they concluded that what matters most to executive effectiveness is not the accuracy and abundance of information but how that information is interpreted. After all, they concluded, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. In the end, top executives must manage meaning as much as they must manage information.
As a complexity-based, developmental evaluation unfolds, social innovators observe where they are at a moment in time and make adjustments based on dialogue about what’s possible and what’s desirable, though the criteria for what’s “desirable” may be quite situational and always subject to change.
Traditionally, the ultimate purpose of evaluation has been summative judgment about a stable and fixed program intervention. Summative evaluation makes a judgment of merit or worth based on efficient goal attainment, replicability, clarity of causal specificity, and generalizability. None of these traditional criteria are appropriate or even meaningful for highly volatile environments, systems-change-oriented interventions, and emergent social innovation. Developmentally oriented leaders in organizations and programs don't expect (or even want) to reach the state of "stabilization" required for summative evaluation. Staff in such efforts don't aim for a steady state of programming because they're constantly tinkering as participants, conditions, learnings, and context change. They don't aspire to arrive at a fixed model that can be generalized and disseminated. At most, they may discover and articulate principles of intervention and development, but not a replicable model that says "do X and you'll get Y." Rather, they aspire to continuous progress, ongoing adaptation, and rapid responsiveness. No sooner do they articulate and clarify some aspect of the process than that very awareness becomes an intervention and acts to change what they do. They don't value traditional characteristics of summative excellence such as standardization of inputs, consistency of treatment, uniformity of outcomes, and clarity of causal linkages. They assume a world of multiple causes, diversity of outcomes, inconsistency of interventions, interactive effects at every level—and they find such a world exciting and desirable. They never expect to conduct a summative evaluation because they don't expect the change initiative—or world—to hold still long enough for summative review. They expect to be forever developing and changing—and they want an evaluation approach that supports development and change.
Moreover, they don’t conceive of development and change as necessarily improvements. In addition to the connotation that formative evaluation (improvement-oriented evaluation) is ultimately meant to lead to summative evaluation (Scriven, 1991), formative evaluation carries a bias about making something better rather than just making it different. From a complexity-sensitive developmental perspective, you do something different because something has changed—your understanding, the characteristics of participants, technology, or the world. Those changes are dictated by your latest understandings and perceptions, but the commitment to change doesn’t carry a judgment that what was done before was inadequate or less effective. Change is not necessarily progress. Change is adaptation. Assessing the cold reality of change, social innovators can be heard to say:
“We did the best we knew how with what we knew and the resources we had. Now we’re at a different place in our development—doing and thinking different things. That’s development. That’s change. But it’s not necessarily improvement.”
Jean Gornick, executive director, Damiano, Duluth, MN
As an approach to operationalizing an evaluative-thinking mindset, developmental evaluation integrates hope and reality-testing, simultaneously and perhaps paradoxically embracing both getting-to-maybe optimism and reality-testing skepticism. The next section illustrates one effort to integrate hope and reality-testing.
Hope and Reality-Testing
In 1977 three Roman Catholic nuns started St. Joseph’s House in the inner city of Minneapolis. They were inspired by Dorothy Day’s philosophy of “comforting the afflicted and afflicting the comfortable.” The sisters took their passion public and convinced individuals and churches all over the metropolitan area to support them. Over the years thousands of women and kids found compassionate shelter, dozens of volunteers came to the inner city, women and children who were and had been homeless built a community around St. Joe’s hospitality, and the sisters became leaders in fighting against violence and injustice. But by the early 1990s their environment had changed. The block surrounding St. Joe’s had become the center of a crack cocaine epidemic; drug dealers claimed the streets; and landlords had abandoned many buildings. St. Joe’s guests and families living on the block hid their children inside, police regularly ran through the block with guns drawn, drug dealers and prostitutes (desperate themselves) broke into abandoned buildings. At the north end of the block where two major streets intersected, once-thriving small businesses were abandoned.
This is what Deanna Foster and Mary Keefe faced when they took over the leadership of St. Joe’s (now Hope Community, Inc.). They decided to attempt a housing revitalization project and began by trying to talk with local residents, but people were afraid to talk, afraid of the drug dealers and perpetrators of violence. Residents on the block wouldn’t even come out to talk. They just said, “We tried for many years and failed . . . we’re burned out. We’re not going to try again.”
The Hope Community began confronting this reality in light of their vision of a vital, engaged community.
Based on their early success in ridding the community of one major drug house and their long-term commitment to that area, the leaders and community came together to shape a new vision and found support for that vision when a door suddenly opened. They garnered unexpected support from a major philanthropic donor in response to a request they had made.
We didn’t fully understand at the time, but it really was a unique vote of confidence in Hope. One day, the mail comes, and we open it up, and there’s a hand-written check for $500,000! We put it in the bank and for the next three months I don’t think I slept more than two hours a night. I worried, ‘How are we going to be good stewards? How will we not waste it?’ This serious investment totally called our bluff. We had this big plan, and suddenly someone believed in it and backed up that belief in a big way. We had to refine our own understanding of what our future was going to be, and how we were going to shape it. It’s one thing to have an idea about something, it’s another thing to be responsible for actually nurturing that idea and bringing it forward in a responsible way.
The door opening brought both terror and delight, sleepless nights and energetic days.
They had created a vision for a major community revitalization effort centered around a Children’s Village. Hope and vision brought out the skeptics.
We never said we were going to build the whole thing. Children’s Village was a vision. But it shocked people. It really shocked people. Some were pleasantly shocked and then said, ‘Well, that was fun,’ and went on their way. Other people were critical, saying “It’s totally unrealistic and ridiculous for a small organization like Hope to even contemplate. It will never happen.” Everyone picked different parts of it to criticize. Suddenly we were out there in the public eye, and we didn’t know how Children’s Village was going to happen. We only knew it would.
They faced the criticism. They faced the critics. But they did so emergently, finding the flow in the community, facing the daunting reality of what might lie ahead, and working day to day: acting, monitoring, getting feedback, learning, and acting again, in a cycle of emergence. In their own words:
We almost had to do it, not backwards, but in alternate order. Normally, when an organization gets half a million dollars they have spent a lot of time in a more linear process thinking through what they are going to do. What is the goal? What is the work plan? What will it cost? Who is the staff? You get the community input, all that stuff, and then have this whopping proposal, right? But it didn’t happen that way at all. It was ‘Here’s the vision, here’s the money, now, make it happen.’
And the very absence of a traditional linear planning process became a new source of criticism and complaint.
One of the criticisms we get is that we don’t have a linear, goal-directed approach. We don’t assume where we are going. We ask: Who’s here? What are people experiencing? What are they believing and hoping? What is their understanding of community? And what is our understanding of all the things we’ve done?
But it’s more complex than that because, at the same time, there’s a whole set of strategic thinking that’s going on. We also have to ask: Where is the land out there? Where’s the money? What are the opportunities? Where are the potential partners? What are the potential pitfalls? How could all this fit together? What would happen if we did this?
While these questions are evaluative in nature, they differ radically from the kinds of "linear, goal-directed" questions that are key to most traditional evaluations. Evaluators speak of "summative" evaluations, which focus on finding out whether the program worked, whether the goals were realized, and whether the program should be continued, and which set up data-gathering methods early in the process to answer those questions. Instead, Hope's leadership took an open-ended approach to data gathering, in which the questions and concerns were emergent and trial and error was carefully mined for learning.
Often we may try things that don’t necessarily succeed on their own, but end up teaching us something and creating other opportunities. We bought a house and sold it a short time later, but we recouped our money, learned about the block the house was on and from that house came one of our best tenant leaders. Another lesson came when we were smaller. We tried having our own construction company, learning quickly about the limits of that strategy and acting accordingly.
A lot of it has to do with intuition, but intuition is not just a thought that comes to you randomly. This intuition grows out of very strategic integrated thinking. We’re constantly operating in this huge matrix of reality. We’re not just focusing on our relationships with people in the neighborhood and ignoring, for instance, all the real estate developers. People are out there buying and selling real estate, and if you look closely, often ripping people off. But we immersed ourselves in that community because we had to—it was a major part of what was going to impact our neighborhood. We have to deal with the city and the planning department and a multitude of other public agencies. You are constantly immersed in that total picture and informed by it, and then strategically respond to opportunities.
This approach to reality-testing took a form different from most evaluations. It defined reality as messy, not orderly, and emergent, not controlled, and it treated social innovation as an iterative process of experimentation, learning, and adaptation. The Hope Community leadership lived out a complexity perspective, seeing and engaging the connections between the micro and the macro. They monitored the big picture: national housing, community development, and real estate patterns; interest rates and international finance; government policies, philanthropic funding trends and priorities; and research on community revitalization. They had a keen sense of the history of the community. At the same time, they were fully enmeshed in the day-to-day reality of work in the community, including engaging local government inspectors, city planners, social service agencies working in the community, local businesses, and local funders.
Complexity-based developmental evaluation shifts the locus and focus of accountability. Traditionally accountability has focused on and been directed to external authorities and funders. But for value-driven social innovators the highest form of accountability is internal. Are we walking the talk? Are we being true to our vision? Are we dealing with reality? Are we connecting the dots between here-and-now reality and our vision? And how would we know? What are we observing that’s different, that’s emerging? These become internalized questions, asked ferociously, continuously, because they want to know.
That doesn’t mean that asking such questions and engaging the answers, as uncertain as they may be, is easy. It takes courage to face the possibility that one is deluding oneself. Here the individual’s sense of internal and personal accountability connects with a group’s sense of collective responsibility and ultimately connects back to the macro, to engage the question of institutional and societal accountability.
References
Foster, Deanna, and Mary Keefe. 2004. "Hope Community: The Power of People and Place." In End of One Way. Minneapolis, MN: The McKnight Foundation.
Patton, Michael Q. 1994. "Developmental Evaluation." Evaluation Practice 15 (3): 311-320.
Patton, Michael Q. 1997. Utilization-Focused Evaluation. 3rd ed. Thousand Oaks, CA: Sage Publications.
Patton, Michael Q. 2002. Qualitative Research and Evaluation Methods. 3rd ed. Thousand Oaks, CA: Sage Publications.
Scriven, Michael. 1991. Evaluation Thesaurus. 4th ed. Newbury Park, CA: Sage Publications.
Sutcliffe, Kathleen, and Klaus Weber. 2003. "The High Cost of Accurate Knowledge." Harvard Business Review, May.
Westley, Frances, Brenda Zimmerman, and Michael Quinn Patton. Forthcoming 2006. Getting to Maybe. Toronto: Random House Canada.
Michael Quinn Patton is an independent consultant, author of five books on evaluation, and former president of the American Evaluation Association. This article is part of a chapter from Getting to Maybe, a forthcoming book written with Frances Westley and Brenda Zimmerman.