In this era of enormous deficits and deep cuts in government programs, it is increasingly clear that if there is to be progress in low income communities it will depend heavily on the success of grassroots groups in taking the initiative and organizing and improving their neighborhoods. No other organizations, public or private, are prepared to take on this extraordinarily important role.
From an evaluation point of view, this situation greatly increases the importance of developing ways to help the grassroots organizations which are tackling these tough issues to assess their work, examine what’s working and what isn’t, and learn lessons which they can apply to strengthen their organizations and increase their impact.
It is therefore essential that evaluations of grassroots efforts be designed to help organizations learn and build capacity. For foundations which are funding community organizing or other grassroots efforts, this emphasis on internal learning and capacity-building is crucial, for without strong, increasingly knowledgeable, and competent organizations to take the lead, their grants simply cannot achieve their goals and have the impact they want. Likewise, organizations need to develop the capacity for ongoing learning that will enable them to understand, in real time, what they are getting for their efforts.
This situation poses a major challenge to conventional thinking about evaluation. While funding organizations and nonprofits obviously must continue to be concerned about tracking and assessing performance, they must shift their thinking to become at least equally concerned about designing evaluation systems that build nonprofit effectiveness and learning capacity. For many of us in the third sector this will require a radical rethinking of traditional approaches to evaluation, including who conducts assessments and how they relate to the organizations being assessed.
Funders, who are increasingly posing the question of effectiveness and evaluation, must first understand the internal systems which a grassroots group may have already developed to track and reflect on its performance. Without understanding how an organization currently learns, an evaluation could actually undermine the learning systems which the organization has found useful. In this context, forcing the organization to set up an entirely separate evaluation system to satisfy grant requirements could actually weaken an organization and jeopardize the grant’s success.
Many funders and professional evaluators fail to recognize that some community organizations are very disciplined and thorough in their internal reporting and assessment systems. Most community organizers, for example, are required to write weekly reports quantifying accomplishments such as how many new people they met, how many they recruited as members, how many people spoke up in a meeting for the first time, how many assumed new leadership roles, and similar facts concerning their organizing and leadership development work. Many organizations also require periodic written reflections from their organizers. These reflections make self-assessment routine and provide the basis for discussion, critique, and suggestions by the organizer’s peers and supervisors. These are valuable systems, and funders and external evaluators should make sure that their evaluations reinforce and build upon them, and that any supplementary assessment techniques are as compatible as possible and impose the least possible burden.
Therefore, if a nonprofit is serious about learning, funders should carefully consider building any additional evaluation and learning upon the systems which are already in place. That is much less disruptive for the grantee, it can fortify the systems which are already being helpful, and it may well be the most effective way of getting the facts and insights the funder wants.
Second, as organizations decide whom to involve in the assessment, they should begin by asking what they want to learn from the evaluation. With that clarified, they can decide what kinds of people could bring the needed skills and perspectives, and whether an internal or external person would be best suited to take on the work. Groups may be surprised to realize that they do not need or want a “professional evaluator.”
External professional evaluators bring the advantages of distance, experience evaluating other situations, and extensive knowledge of methodologies which may be helpful. However, they may also face substantial barriers, especially if they lack previous experience working in similar settings, with groups facing similar challenges. Without that background they may find it impossible to create the working relationships and trust they need to get full cooperation and access to data. Furthermore, their approach to evaluation may not work well in these settings. If their methodology is highly quantitative or otherwise geared to assess massive public programs or large institutions, it may not fit in the constantly changing world which is standard for smaller organizations working in neighborhoods or on broader policy issues. In those arenas there is a tremendous need for innovation, trial and error, and rapid changes in strategy to seize new opportunities or avoid unexpected roadblocks. The most effective organizations are nimble and flexible. The main indices of real progress relate more to issues of capacity, power, and influence than simply whether a project had the specific quantifiable results which were originally predicted.
Professional evaluators who understand these dynamics and gear evaluations to their reality can be great assets to donors and grantees. While this combination is rare, some evaluators at universities, nonprofits or consulting firms have these skills and experience with participatory evaluations. Some also are experienced working with community groups on participatory research projects which involve community leaders in analyzing community and public policy issues.
A variation on a professional external evaluator is the Participant Observer, who can bring remarkable insight into the inner workings of community groups and programs. Books such as John Fish’s classic Black Power, White Control, Paul Osterman’s insightful study Growing Power, and the revealing comparative study Faith in Action by Richard L. Wood provide examples of how an outsider who works closely with an organization over an extended period can develop in-depth knowledge and perspective, and then draw a vivid portrait of organizational life and the lessons which can be learned from an organization’s experience. For some organizations this approach is an excellent way to bring an evaluative eye to the issues they most want to explore. Overall, however, grassroots organizations should be aware that there is a great shortage of people with experience and skills as participant observers, which is part of the overall scarcity of experts in the area of participatory evaluation. This field narrows even more if you want someone who has evaluated similar projects and organizations—ones addressing some aspect of neighborhood change or community development.
Regardless of whether an organization uses an internal or external evaluator, anyone engaged with nonprofit evaluation should understand equally well the roles which organizing networks, technical assistance groups, organizational development consultants, and other learning partners may play in helping certain kinds of organizations with assessment and learning. Although these groups are not likely to think of their work as “evaluation,” they are in fact learning partners for the grassroots organizations. In different ways, they constantly assess the groups so they can help them strengthen their organizations and their work on issues and projects. Staff working for groups that belong to organizing networks like the Industrial Areas Foundation or ACORN, for example, are supervised by the network, whose staff evaluate, train, and advise them on both issue campaigns and organizational development questions.
Groups receiving organizational development help from consultants or technical assistance organizations receive similar regular feedback and advice on their operations and impact.
Support organizations and coaches, however, face one major difficulty as evaluators. If they are already committed to an organization or project, they must avoid being caught in the middle between that commitment and a funder’s desire to get an objective, perhaps tough assessment of the group or project. This conflict in roles can present a serious impediment if the evaluation’s main purpose is accountability and judgment. There is, however, no inherent role conflict if the funder’s main purpose in commissioning the evaluation is to foster learning or build the nonprofit’s capacity.
Some grassroots organizations turn to peers for help in assessing their work and exploring what improvements they might introduce. They see great advantages in having people whom they trust and who have “been in their shoes” take a serious look at their operations and give them honest feedback on what they think could be strengthened, what problems are emerging and need attention, and what activities should be expanded or rethought. Like support organizations and consultants that work extensively in similar communities, peers can bring great practical insights and knowledge to the task of assessment. These learning partners offer another advantage as well. They can bring “added value” to their assessment by drawing from their own experience and knowledge of how other grassroots groups have addressed the community issues and organizational dilemmas the group faces. However, peers should not be placed in the middle between a funder and a grantee. As with support organizations and coaches, there must be a clear understanding involving all parties concerning the uses of the peer review, the peer reviewers’ role, the issues they are to address, what they will keep confidential and what they will share with funders.
Organizations using peer learning and evaluation strategies meet regularly with peers, either informally or as a formalized peer learning group or learning circle, to learn from and support each other. This cross-fertilization of ideas exposes each group to ways other groups have tackled an issue they are grappling with, thus stimulating learning and creativity. Such peer learning also fosters self-assessment by the participants as they evaluate other groups’ ideas and strategies against their own. It is very common for these peer learning strategies to persuade an organization to change in significant ways.
A final alternative is self-assessment. This approach pushes the nonprofit to reflect continually on its work and the lessons which flow from it. The reflection centers on such issues as: What are the most important goals we are trying to accomplish? How can we best measure our progress? What indicators are most useful, and what are the best sources for that information? How can we judge what is going well and what isn’t? How can we best learn how to increase our level of success? How can we learn about other approaches which might work better? How should we prepare ourselves organizationally to carry out this continuing reflection, learning, and planning process? What help do we need in doing this? How have we changed our programs or operations based on what we are learning? How do we plan to communicate these results?
All these approaches—self-assessments, assessments by peers and partners, assessments by evaluators who use participatory methods, and the use of technical assistance networks—offer great advantages for the grassroots organizations that are tackling many of the nation’s most difficult challenges. They are designed to help groups learn, adapt, and strengthen themselves organizationally. They fit naturally with the organizations’ own priorities and learning processes, and thus avoid or at least limit the tensions, lack of candor, and perceived lack of relevance and value which often afflict external evaluations that are designed without attention to the organization’s needs and processes.
These and other participatory approaches to evaluation are usually overlooked in the U.S. They are more widely accepted as important evaluation strategies among international nonprofit organizations, where years of pioneering work have led to growing sophistication in using participatory monitoring and assessment techniques and in linking evaluation with organizational development. Properly structured, these approaches can produce assessments grounded in relationships of greater candor and increased access to the experience and insights of the people most involved in the work being evaluated. Furthermore, unlike traditional evaluations, these learning partnerships also usually result in stronger organizations, more effective programs and issue work, and greater impact—the ultimate goals shared by all funders and grantees.