Editor’s Note: This piece is part of our ongoing Equity, Diversity, and Inclusion (EDI) Project created to spotlight millennials’ voices and thoughts on diversity and justice. We urge you to read how this project came together in collaboration between NPQ and the Young Nonprofit Professionals Network and about the ideology behind this series. We intend to publish another 14 pieces in the coming months. Readers will be able to subscribe to an RSS feed to follow articles as they are published. NPQ and YNPN will be using the hashtag #EDISeries, so post about the series along with us.
Research and evaluation have long influenced activities in the nonprofit sector. Nonprofit organizations are increasingly establishing or being asked to establish metrics or to conduct evaluations for a variety of reasons. Indeed, research, evaluation, and data regularly inform philanthropic and policy decisions, and vice versa.1 We live in a data-driven society that has furthermore become obsessed with big data, a term that describes the ability to collect and analyze data on every participant or even every transaction. Thus, instead of periodically surveying or interviewing a sample of a nonprofit’s members or employees, organizations can now create and use databases that measure every contact, activity, donation, or half hour of an employee’s time.
Organizations that use big data have an unprecedented ability to understand whole populations, track and assess particular individuals, and develop strategies that influence behavior.2 The broad use of research and evaluation and the surge of interest in data require us to think not only about the velocity, complexity, variability, and variety of data but also about what we do with the data, because big data does not always translate to better data.3 There are also, of course, ethical considerations vis-à-vis how we gather data—“big” or otherwise—and what use we put them to.
Referring to the now routine requests by funders, boards, and public agencies for outcome measurements and for organizations to become more evidence driven, the Social Innovation Fund’s former director, Michael D. Smith, explained in a post that in fact, for many federal agencies, “Evidence = Funding.”4 As much as certain buzzwords can make us want to poke our own eyes out, it’s what we’re all here for, right? To make an “impact”? As the Innovation Network’s codirectors wrote, “In the face of ever growing need, funders and nonprofits need to use every tool at their disposal to maximize impact. Doing good isn’t enough. We need to do ever better. […] Evaluation is an often undervalued, overlooked tool for improving outcomes and maximizing impact.”5
Although many of us researchers are self-described “data geeks” who could not be more excited about the data revolution, the increase in both supply of and demand for data and the meaning we seek in data heighten the necessity for research and evaluation to be a locus of critical thinking, especially with respect to the largely unacknowledged cultural biases undergirding how data are gathered, analyzed, and interpreted. Indeed, our experience suggests that there is important information about equity, diversity, and inclusion embedded in research and evaluation processes.
We come to the nonprofit sector as researchers dedicated to leveraging data (defined broadly) for creating more equitable, diverse, and inclusive communities. We seek to use research practices that reinforce the goals and values embedded in our work. At the time this piece was written, we were part of a working group, at the Jonathan M. Tisch College of Civic Life at Tufts University, dedicated to equitable and inclusive research in two research groups: the Center for Information and Research on Civic Learning and Engagement (CIRCLE) and the Institute for Democracy and Higher Education (IDHE). We recognize that the topic of equity, diversity, and inclusion in research and evaluation is not, of course, a new conversation, but it is a timely one that needs more attention. In this article we raise questions around equity, diversity, and inclusion that we have found valuable when making decisions around research and evaluation, and we hope with these questions to build on and perhaps broaden the conversations already in play on this topic.
Considerations and Questions to Keep in Mind During the Research and Evaluation Process
The research process involves multiple nonexclusive components, ranging from constructing a research question and securing funding to collecting data, analyzing data, and disseminating the information. The steps in the process must fit together to contribute to a greater goal or purpose. Below are select questions that researchers should keep in mind during each step of the research process.
Research Design
How do we deal with the “data–resource paradox”? In a perfect world with unlimited resources, the first step in the research process is often the construction of a question that aligns with an organization’s mission, vision, or theory of change. However, organizations far too often face resource constraints—they may have limited or restricted funds for research and evaluation, or specific requests around what is done; they may have an obligation to a donor, or a potential grant or funding source may influence the direction of the research. Some of this is what Vu Le calls the data–resource paradox—the idea that, as he writes, “If an organization does not have resources to collect data, then it does not have the data to collect resources.”6 Thus, for many organizations, resource constraints can not only influence the direction of research or evaluation—i.e., drive the design of the research question itself—but also limit an organization’s capacity to ask and answer critical questions about its work in the first place.
Who is involved in the research process? Integral to the discipline of research and evaluation are the accuracy and reliability of information. As a result, we as researchers must ask ourselves: Which research design will contribute to authentic representation? Are questions being asked in aid of authentically gathering information and highlighting the operative dynamics in play? Who may be needed to help with a design, and will that person have the capacity and be willing to contribute? In some qualitative research—and in all participatory research—study participants are involved as co-collaborators and experts on a project. Nonprofit sector research and evaluation influence policy, funding, and programmatic decision making, and as such, decisions need to be informed by representative voices of the appropriate stakeholders regarding what is happening in a particular context.
What are the implications, vis-à-vis inclusiveness, of theoretical frameworks? Researchers approach topics differently based on the lenses through which they look at data. These lenses can be influenced by educational, professional, and other lived experiences, and can shape all aspects of the research design. The theoretical and conceptual frameworks through which one approaches research or evaluation can also have implications for the data collected and the meaning drawn from them. According to the American Evaluation Association:
Evaluations cannot be culture free. Those who engage in evaluation do so from perspectives that reflect their values, their ways of viewing the world, and their culture. Culture shapes the ways in which evaluation questions are conceptualized, which in turn influence what data are collected, how the data will be collected and analyzed, and how data are interpreted.7
For instance, one way to approach evaluation is for the evaluator to enter with a predetermined framework, metrics, or evaluation design that does not necessarily match an organization’s theory of change or consider the strengths of the organization. This kind of evaluation does not embrace inclusion because it lacks recognition of organizational priorities and different perspectives in localized contexts. In contrast, another way to approach evaluation is through a culturally responsive evaluation framework, which both recognizes that culturally defined values and beliefs lie at the heart of any evaluation effort and challenges evaluators to reflect on power dynamics and sharpen their attention to social justice during each step of the evaluation process.8 This approach embraces the understanding that questions of power must be addressed at each stage of the research process. Moreover, by thinking of evaluation as an “exercise in social justice,” some may begin to see evaluation not as a burden on an organization but instead as a tool to assess the distribution of wealth, opportunities, and privileges in the nonprofit sector and beyond.9 Evaluators have to be conscious of unsaid and de facto power dynamics involved in the evaluation process and what is being evaluated.
Data Collection and Analysis
How do we ensure that when we look at data we are not making incorrect judgments? Data are political: research and data are influenced by those who help to construct and analyze them. And, given the rise in and availability of data, we can take just about anything and correlate it.10 So, the person or group responsible for making judgments about which piece of data is accurate or right for a given group holds a lot of power. Jeffrey Alan Johnson argues that data assign not only material values but also moral values, and furthermore, he argues, “It is difficult, often, to see the political structure of data, because data maintains a veneer of scientistic objectivity that protects it from challenge.” All of this, of course, can be used to exert political control.11 Of equal importance is Stuart Hall’s notion of data as a “cultural product”—the suggestion that data encoding and decoding processes are never neutral but rather are built around a “dominant social order” designed to “impose [a society’s] classifications of the social and cultural and political world” through the creation of “preferred meanings.”12
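To make the point about correlating “just about anything” concrete, here is a minimal sketch of how strong correlations surface by chance alone when enough unrelated variables are screened against an outcome. The example is ours, written in Python with NumPy; the variable names and numbers are hypothetical and illustrate only the statistical dynamic, not any dataset discussed in this article.

```python
# Illustrative sketch: with enough unrelated series, strong correlations
# appear by chance. Every value below is random noise.
import numpy as np

rng = np.random.default_rng(0)

n_years = 10          # a short time series, like many annual indicators
n_candidates = 500    # number of unrelated "indicators" screened against it

outcome = rng.normal(size=n_years)                      # stand-in outcome metric
candidates = rng.normal(size=(n_candidates, n_years))   # unrelated noise series

# Pearson correlation of each random candidate series with the outcome
corrs = np.array([np.corrcoef(outcome, series)[0, 1] for series in candidates])

strongest = np.abs(corrs).max()
print(f"Strongest correlation among {n_candidates} random series: r = {strongest:.2f}")
# With 500 tries over only 10 data points, values of |r| above 0.7 are
# expected purely by chance—none of them reflect a real relationship.
```

This is one reason the person or group deciding which correlations count as meaningful holds so much power: the data alone do not settle the question.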
This is particularly relevant when deciding on a unit of analysis. Let’s take youth civic engagement as an example. Aggregate national trends tell a story of both increasing and decreasing opportunities for today’s youth to be engaged—but is that enough information? In this case, we can absolutely say it is not, since trends differ by young people’s backgrounds and experiences.13 In 2013, Race Forward released a unique and powerful set of recommendations for those involved with research and analysis in relation to the Asian American, Native Hawaiian, and Pacific Islander communities.14 One of the recommendations was to disaggregate data in order to understand the underlying dynamics and differences within this large group. Unfortunately, the data–resource paradox comes into play here, because not all organizations have the resources to collect enough information to allow for authentic disaggregation. When organizations do not have the resources to disaggregate data—or the resources to oversample, the time to do the qualitative work to unpack survey data, or the people to gauge the meaning ascribed to particular data—this can have severe implications for our policy, programmatic, and funding recommendations.
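As a hedged illustration of why disaggregation matters, the short Python/pandas sketch below uses invented numbers (not CIRCLE or Race Forward data) to show how an aggregate trend can mask diverging trends within subgroups, and how a shift in who gets surveyed can drive the headline number.

```python
# Illustrative sketch with invented numbers: an aggregate trend can hide
# diverging trends within subgroups, which is why disaggregation matters.
import pandas as pd

survey = pd.DataFrame({
    "year":        [2012, 2012, 2016, 2016],
    "subgroup":    ["A", "B", "A", "B"],
    "respondents": [800, 200, 500, 500],
    "engaged":     [400, 40, 300, 100],   # respondents reporting engagement
})

# Aggregate rate by year: looks like a modest decline (44% -> 40%)
aggregate = survey.groupby("year").sum(numeric_only=True)
print(aggregate["engaged"] / aggregate["respondents"])

# Disaggregated rates: subgroup A rises (50% -> 60%), subgroup B stays at 20%,
# and the apparent overall decline is driven by a change in who was surveyed.
by_group = survey.groupby(["year", "subgroup"]).sum(numeric_only=True)
print(by_group["engaged"] / by_group["respondents"])
```

The same logic applies to oversampling decisions: without enough respondents in each subgroup, the disaggregated rates in the second printout would be too noisy to trust.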
Moreover, researchers and evaluators across a variety of fields and disciplines have different ideas of what constitutes “good” or “valid” data—especially when compared with predominant, positivist academic definitions. Take popular data on race relations in the United States as an example. In the book White Logic, White Methods: Racism and Methodology, Gianpaolo Baiocchi and Eduardo Bonilla-Silva critique popular surveys of whites on race relations for eliciting socially desirable answers, even though the decisions people make more privately in real life can differ greatly—hence the authors’ call for more helpful, in-depth research.15 These findings also suggest a need for close reflection on the ultimate goals of the research questions that we ask, how we answer them, and the recommendations we provide using the data.
Dissemination and Utilization
There seems to be something fundamentally disjointed about the large amount of resources invested in research and evaluation compared to the much smaller amount devoted to ensuring that research findings and recommendations are accessible and used. Utilization-focused evaluation models within the evaluation world—based on the principle that an evaluation should be judged on its usefulness to its intended users—provide a great framework for building a project with the end in mind.16 If researchers and evaluators rely on only one traditional communication format or channel, or make assumptions about access, then the two-way connections needed among policy, practice, and data/evidence remain weak.
How do we draw conclusions and make recommendations? A core part of dissemination and utilization is giving thought to, or creating a process for, discussing potential implications of research and evaluation. This is another place in the research and evaluation process where it matters who is at the table. Being inclusive with regard to who gets to speak about the findings and implications may very well bring more rigor to the process.
How can we be more inclusive in our dissemination and discussion of findings? Those who have financial resources can afford the time and capacity to develop, access, and reflect on research findings and recommendations regardless of location or format. There are at least two important angles for thinking about accessibility: (1) ensuring that all people, not just those privileged financially or otherwise, can locate and view the information; and (2) ensuring that the information is generally comprehensible to people outside of a strictly academic/scientific world.
It takes time, strategic thinking, and relationships to ensure that a range of potential users knows about and can access data, research, and evaluation. While there is much interest in the growing trends around infographics and data visualization because of their ability to break information down in a digestible and accessible way, some information remains behind the locked door of academic journals and publications. The public purposes of higher education are sometimes at odds with the predominant model for tenure and promotion, which rewards writing for prominent journals that are accessible only to those who already have access—i.e., the writers and their exclusive audiences. Open-access journals are a significant step but are not enough. Last year, the World Bank publicly reflected on the lack of downloads among its hundreds of reports.17 The Monkey Cage blog, hosted by the Washington Post, is one interesting example of a public forum for academic research on politics with implications for policy and practice. Equitable dissemination means investing time in providing a variety of publicly available formats, contextualizing research, and listening to feedback on the analysis and conclusions drawn.
How We Try to Honor and Recognize These Questions in Our Work
For over two years, our research team at the Tisch College of Civic Life at Tufts University has made space for and devoted time to thinking about the role of equity, diversity, and inclusion within the context of our research. This has included conversations about what we as individuals and a team bring to the research and what gaps exist relevant to doing rigorous, thoughtful research. What follows are two strategies that have been valuable to our ongoing journey toward creating more equitable, diverse, and inclusive research practices.
Welcoming Language on Our Website
Our research team regularly sees youth organizations doing powerful and important work, yet these organizations do not always have the funding to properly document and collect information about their impact (e.g., by conducting evaluations and hearing from members). In other words, we have a data–resource paradox in play here. CIRCLE serves as a resource for youth organizations regardless of their capacity to fund a formal contract, and we don’t want to limit our network and resources to just the organizations we communicate with on a regular basis. We needed to make sure that our information was readily available to those who do not have a chance to speak with us directly, and to that end we changed the language on the CIRCLE website to more clearly reflect these commitments.18 The organization is also currently working to increase and improve the accessibility of its online resources.
Developing an Equity, Diversity, and Inclusion Research Tool
Our EDI working group developed a tool to build a team culture in which equity, diversity, and inclusion are embedded in every part of our research and evaluation projects. The goal is to develop a culture and set of practices within Tisch College’s research team—using the tool as a guide or point of reflection—to ensure that all research keeps issues of diversity, equity, and inclusion in the foreground during each stage of a project. Developing the tool was a two-step process:
- First, the EDI working group designed an activity called “What is your research identity?” Colleagues were asked to reflect on their past experiences (educational, professional, and lived) and consider how these experiences shaped the approaches, assumptions, and frames they brought to bear on their current role in Tisch College research. Then, we asked our colleagues to think about research holistically, asking that they reflect on the whole research process—including developing research questions, data collection, analysis, and dissemination—with questions such as the following in mind: Are there certain questions to which you are drawn or certain kinds of areas you like to explore? Thinking about the stage(s) of research in which you are involved, what kinds of tools, methods, and processes are you most comfortable with or use frequently? Do you find yourself gravitating toward one part of the research process over another? Finally, we asked that they break into small groups, and that group members share their individual reflections on the team’s work, guided by the following questions: Do any assumptions and biases come up that we need to remain on top of and regularly remind ourselves of? Do we have blind spots that we need to be aware of? What else do we need to be reminded of? The conversations from this initial activity informed the development of a draft EDI research tool by the working group.
- After providing a draft of the tool to the Tisch College research team, team members took an anonymous survey about the EDI research tool’s content and ideas for implementation. Their feedback was then added to the current draft of the proposed tool. Sample survey questions included the following: What perspectives or values are prioritized within the research design? Does that help or hinder the research? Are there biases or embedded privileges in the data collection instruments, procedures, or actual data? How do we message our findings? Are any helpful or hurtful assumptions built into our main messages?
That brings us to where we are today in this iterative process. Our internal research “seminars”—where Tisch College staff members discuss topics ranging from intersectionality to critical race theory to social networks—provide opportunities for discussion and development of areas we want to keep up with or learn more about. We are also continuing to modify the tool based on the feedback we received initially as well as feedback from continued use. We will be using regular strategy meetings to discuss use of the tool, the results, and the challenges we still face, as described below.
Moving Forward: The Challenges We Still Face
We do not have all of the answers to the questions discussed in this article about conducting and using research and data, and there are bound to be multiple answers from multiple perspectives. Some questions we still have include the following:
- How can we create a data-positive culture in our organizations that have limited resources while also keeping equity, diversity, and inclusion in mind? What is the impact of the nonprofit industrial complex on data for under-resourced organizations?
- How can we keep equity, diversity, and inclusion central to our research when a funding organization may not be on the same page? Can a contracted evaluator act as a mediator between funders and grantees?
- What else can be done to discuss and embrace diversity as a core component of rigor?
Of course, it is a given that a process like the one described here will continue to evolve, and there will always be more questions. We’ve been grateful for research- and evaluation-focused organizations pushing this conversation among members and the sector. We hope that this article provides food for thought to those who engage with all forms of data, and that it will help individuals and organizations of all kinds take action to make diversity, equity, and inclusion a core component of their work.
Notes
- We recognize that there can be distinctions between research and evaluation. We view research and evaluation as having valuable local and generalized application to the nonprofit sector.
- Gary King, “Restructuring the Social Sciences: Reflections from Harvard’s Institute for Quantitative Social Science,” American Political Science Association, January 2014.
- For an example, see Eszter Hargittai, “Is Bigger Always Better? Potential Biases of Big Data Derived from Social Network Sites,” The Annals of the American Academy of Political and Social Science 659, no. 1 (2015): 63–76.
- Michael D. Smith, “Evidence = Funding: A new blueprint for evaluation plans,” Nonprofit Technology Network, October 8, 2014.
- State of Evaluation 2012: Evaluation Practice and Capacity in the Nonprofit Sector (Innovation Network, 2012).
- Vu Le, “Weaponized data: How the obsession with data has been hurting marginalized communities,” Nonprofit with Balls, May 26, 2015.
- “American Evaluation Association Statement on Cultural Competence in Evaluation,” American Evaluation Association, April 22, 2011.
- Rodney Hopson, “Rodney Hopson on Culturally Responsive Evaluation,” AEA365 (blog).
- John Ridings, “LAWG Week: John Ridings on Social Justice and Nonprofit Evaluation,” AEA365 (blog).
- For examples of how simple it is to make correlations between completely arbitrary and unconnected events simply by mapping their random, coinciding data, see Tyler Vigen, Spurious Correlations.
- Jeffrey Alan Johnson, “How data does political things: The processes of encoding and decoding data are never neutral,” London School of Economics and Political Science (blog), October 27, 2015.
- Stuart Hall, “Encoding/Decoding,” in Meenakshi Gigi Durham and Douglas M. Kellner, eds., Media and Cultural Studies: KeyWorks (Oxford: Blackwell, 2006).
- Examples of research on this topic include: Constance Flanagan, Peter Levine, and Richard Settersten, “Civic Engagement and the Changing Transition to Adulthood,” Fact Sheet, The Center for Information and Research on Civic Learning and Engagement (CIRCLE), Jonathan M. Tisch College of Citizenship and Public Service, Tufts University (2009); “Diverse Electorate: A deeper look into the Millennial Vote,” Fact Sheet, CIRCLE, Jonathan M. Tisch College of Citizenship and Public Service, Tufts University (2012); and All Together Now: Collaboration and Innovation for Youth Civic Engagement (CIRCLE, Jonathan M. Tisch College of Citizenship and Public Service, Tufts University, 2013).
- Leading Racial Justice Organizations Release Guide, Data Repository on Asian Americans, Native Hawaiians, and Pacific Islanders (Race Forward: The Center for Racial Justice Innovation, July 2013).
- Eduardo Bonilla-Silva and Gianpaolo Baiocchi, “Anything but Racism: How Sociologists Limit the Significance of Racism,” in White Logic, White Methods: Racism and Methodology, Tukufu Zuberi and Eduardo Bonilla-Silva, eds. (New York: Rowman & Littlefield Publishers, Inc., 2008), 137–51.
- The website of Better Evaluation, “Utilization-Focused Evaluation,” accessed October 13, 2016.
- Christopher Ingraham, “The solutions to all our problems may be buried in PDFs that nobody reads,” Washington Post, May 8, 2014.
- The website of the Center for Information and Research on Civic Learning and Engagement (CIRCLE), “CIRCLE’s Mission,” accessed October 13, 2016.