Many nonprofits used to be able to operate out of the spotlight, within the small purview of a particular city or region. That is, they were not necessarily judged against peer programs in other cities, nor against any generalized assumption about what constituted good practice. Local observation, reputation, and all-important relationships were first and foremost in getting and keeping funding.

These things are still important, but there is an increasing commitment to applying metrics to virtually every nonprofit receiving government or private institutional funding . . . and then there are the social entrepreneurs experimenting with dashboards and other such tools. Because much of this pressure for evaluation has been external, the results can often feel like an unwelcome imposition. Nonprofits have tried to make the best of this situation—and many have done a fabulous job—but there is a healthy skepticism among many nonprofits about the value and effectiveness of imposed evaluation.

And then there is the realm of academic research on the social issues or policy matters nonprofits address. Much of this research can feel irrelevant, useless, even misleading and harmful to nonprofits.

These criticisms do not completely hold water with NPQ. We believe that integrating research and evaluation into your organization's way of working is critical to its effectiveness, and if you don't want it imposed upon you in weird and inadequate ways, you had better be serious about owning those processes yourselves.

This is not an impossible task. As the articles in this issue show, there are time-tested approaches that allow even small nonprofits to own their evaluative processes. This must start with recognizing research and evaluation as core to the organization's development into the best possible solution to the problem it has chosen to address.

These approaches are not about bean counting but about fully deploying the intelligence and energy of the participants in the organization's work. They fully recognize the complexity and dynamism of the environments in which nonprofits work, and help nonprofits adapt quickly to changes in the landscape and capitalize on new opportunities. They see a particular nonprofit's work in the context of a world with many actors, making it difficult to gauge the impact of each, especially if they sometimes work in concert with one another. While some organizations apply the rigors of methodology and distance to some of their evaluation and research work, they also integrate the gathering of stories and ideas, and the definition of problems, from their constituencies. The best organizations work with their constituencies to pose questions, set goals, and gather the information needed for an evaluation or research project.

We now see too little research proposed and implemented by nonprofits, yet such research can be very powerful simply because it takes up problems as they are understood by our own communities.

When you examine a well-run nonprofit and how it operates, you often find that the organization pays attention to and actively engages in research. Community organizing groups, for example, frequently involve their volunteer leaders as well as staff in conducting sophisticated research on the issues they are tackling, seeing this as part of leadership development as well as good planning. Some nonprofits are pioneering “citizen monitoring”—involving their leaders in researching how well a particular policy or agency is working. They monitor the extent to which there is compliance with the law, an open and participatory decision-making process, and adherence to the community’s priorities. Such efforts often include a “power analysis,” in which people from the nonprofit assess how decisions are made, which decision-makers share their views and which oppose them, how powerful they are, and what opportunities there are for the group to develop influential allies and intervene to influence public policy or an institution’s decisions.

Thus an increasing number of nonprofits are serious about evaluation and research, though they frequently use different terminology (like “learning” or “planning”) to describe it. There are financial considerations to having this type of capacity, and there are two approaches to creating it. One approach is to figure program costs to include research and evaluation. Another is to create “organizational slack”: flexible money that, in a nonprofit, is unrestricted by the source and not needed to cover the cost of fulfilling basic commitments. Such slack can be used for research and development and for the systems upgrades that allow data to be collected and tracked more effectively, among other things. Organizations that are serious about evaluation and research mix and match these two financing methods. Grants or contributions specific to one learning moment are less sustainable but can also be mixed in.

Nonprofits can benefit greatly from partnerships with people with deep expertise in research, evaluation, and organizational learning and development, particularly if those people approach their work in a way that builds ongoing internal capacity. This means that evaluators must draw out the strength and passion for excellence embedded in nonprofits, linking the excited curiosity that flows from commitment to mission to doable learning systems. It means working in partnership with nonprofits to build their systematic and creative approaches to learning. This is not arm’s-length work. It is deeply developmental and therefore challenging for the organization and the people who work in it—and for the evaluation consultant.

But, as you will note in the Youth Villages case (page 16), from time to time greater depth, distance, or other methodologies in research or evaluation are needed to spark improvement. Researchers and practitioners alike must do a better job of working together to make sure that research is helpful to practice in the social sphere.

As usual, we reached far and wide in researching this edition. We would particularly like to thank Andrew Mott, director of the Community Learning Project, who contributed to this edition as guest editor.

Finally, we want to call your attention to the first of NPQ’s commissioned investigative articles. “Fides: Faith and Money in the Bush Administration,” by Rick Cohen, looks at the political maneuverings in and around the federal faith-based initiative. The article was supported in part by a grant from the Fund for Investigative Journalism, and it is the first in what we anticipate will be a regular series. Please send us your ideas for investigative pieces. You are our eyes and ears!