
This article comes from the fall 2018 edition of the Nonprofit Quarterly.


Recently, a former graduate student of mine contacted me. She said that she was struggling to figure out how her research could go “beyond the academic bubble” and be translated into policy. I suggested that she consider a “pracademic” approach, which would allow her to blend her academic skills and knowledge with public policy practice. In the discussions that followed, I recalled being in her shoes several years earlier until I transitioned out of academia and into social justice nonprofit research.

During my ten years in academia—first as a doctoral student in social-personality psychology and then as an assistant professor in psychology at a liberal arts college—I knew that I wanted to leverage my research skills and training to create social and systems change that promotes equity and inclusion. But I was unsure how to translate research into practice.

At that point, I had known only the academy. I was also starting to suspect that the research I was carrying out and publishing might not be having a direct impact on the lives of those I most wished to support—particularly, the lesbian, gay, bisexual, transgender, and queer (LGBTQ) communities, and communities of color.

However, while teaching at Tougaloo College in Mississippi, I was approached by Dr. DeMarc Hickson, the chief operations officer at My Brother’s Keeper, a nonprofit that seeks to eliminate health disparities through research, evaluation, environment, and policy change. Since my research on HIV sexual risk behaviors in Black men who have sex with men (MSM) was similar to Dr. Hickson’s work, he suggested that I apply for a research position at his organization.

Thus began my transition out of academia and into social justice nonprofit research. For the next three years, I worked for My Brother’s Keeper as a research scientist. I then worked for the Building Movement Project, a nonprofit that seeks to advance social change through research on leadership, social services, and movement building. Currently, I work at GLSEN (formerly the Gay, Lesbian & Straight Education Network), a nonprofit that envisions a world in which all students have a safe and supportive school environment—regardless of their sexual orientation, gender identity, or gender expression—and works toward this through research, education, policy, and advocacy.


Through my academic training and my work in these three organizations, I have encountered and continue to explore pracademic ways to resolve a range of issues. Among these are: conducting research through a critical lens; being embedded in, attentive to the needs of, and responsive to the voices of the communities being studied; and conducting research that is methodologically rigorous yet still practical and accessible.


Critical Issues for the Pracademic in the Field

Conducting Research through a Critical Lens

In both academia and nonprofits, it is important for social justice researchers to conduct research through a critical lens, because the theoretical frameworks that are used have implications for data collection and interpretation of findings. Research is not culture-free: researchers bring their values, ways of viewing the world, and culture to their work. Culture influences the questions that researchers ask, which in turn influences what data are collected and how findings are interpreted. Jodi Benenson of the University of Nebraska at Omaha and Abby Kiesa of Tufts University discuss the importance of using a culturally responsive theoretical framework in social justice research, the aim of which is to help create more equitable, diverse, and inclusive communities:

Approaching research through a culturally responsive theoretical framework recognizes that culturally defined values and beliefs lie at the heart of social justice research, and challenges researchers to reflect on power dynamics and sharpen their attention to social justice during each step of the research process.1

At My Brother’s Keeper and the Building Movement Project, it was important that I bring a critical eye and a deep understanding of lived experiences within communities of color—such as facing multiple intersecting forms of oppression (e.g., racism, homophobia, classism)—whether my research involved identifying barriers to condom use for Black men or identifying the obstacles people of color face in taking on CEO and executive director positions. Maintaining a culturally grounded understanding of the challenges facing the communities being studied is what makes possible the development of nuanced data collection instruments (e.g., survey questionnaires, interview guides, focus group guides) that can provide deeper insights into community needs. These insights ultimately allow for better policy recommendations. In both research examples, a cursory look at data sets without cultural context would have resulted in an incomplete understanding of the big picture, with potentially misleading—or even disastrous—results.

A standard “objective” approach to assessing program effectiveness would have been to conduct a randomized control group study. In this type of approach, if the participant group’s average on a measurable outcome is “significantly” higher than the nonparticipant group’s average, the intervention is considered successful. A key challenge with randomized control studies is the artificiality of the testing: the process is so prescribed that it often leads to findings (or a program) that do not reflect real life. Using a randomized control group study as an HIV intervention approach for Black MSM in Jackson, Mississippi, would have led to misguided interpretations of findings. We might have found “statistically” significant differences between the Black MSM who participated in the intervention and those who did not; however, those differences might have been small and not meaningful with regard to change in sexual risk behaviors and attitudes toward condom use. In other words, findings and statistical significance need to be contextualized: a “marginal” finding may be of major importance, and a “significant” finding may turn out to have limited applicability for practice.
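
To make this concrete, here is a minimal sketch in Python (using simulated data, not results from any actual study) of how a large sample can render a trivially small group difference "statistically significant," and why an effect size such as Cohen's d is worth reading alongside the p value:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated attitude-toward-condom-use scores (0-100) for two large groups;
# the "intervention" group scores only one point higher on average.
participants = rng.normal(loc=61, scale=15, size=5000)
nonparticipants = rng.normal(loc=60, scale=15, size=5000)

t_stat, p_value = stats.ttest_ind(participants, nonparticipants)

# Cohen's d: the mean difference expressed in pooled-standard-deviation units.
pooled_sd = np.sqrt((participants.var(ddof=1) + nonparticipants.var(ddof=1)) / 2)
cohens_d = (participants.mean() - nonparticipants.mean()) / pooled_sd

print(f"p = {p_value:.4f}")   # likely well below .05 at n = 5,000 per group
print(f"d = {cohens_d:.2f}")  # around 0.07, far below the usual 0.2 "small" benchmark
```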

At My Brother’s Keeper, we started by gathering input from our community advisory board—whose members came both from the Black MSM community and from those who worked directly with Black MSM (e.g., in HIV prevention services)—on the Centers for Disease Control and Prevention (CDC) evidence-based interventions (EBIs) that we used. Having an informed cultural understanding of Jackson’s Black MSM community before we began allowed us to refine the EBIs and add new topics that met community needs. This was an iterative process, as our community advisory board continued to provide feedback throughout the intervention program.

Being Embedded In, and Attentive to, the Needs of the Communities Being Studied

The illusion of objectivity and value-free science in academia poses the risk that research will actively harm the very groups academia is trying to help. History offers many examples in which allegedly “objective” findings turned out to be unethical and deeply harmful. Indeed, as NPQ readers know well, such examples have been all too common among low-income communities and communities of color. Among the most notorious is the Tuskegee syphilis study, which condemned Black syphilis sufferers to preventable and painful deaths in the name of “science,” and still stands as a horrific example.2

Another, lesser-known example comes to us from the HIV/AIDS epidemic in the early 1980s. During this time, the National Institutes of Health (NIH) conducted research into antiretroviral drugs, ultimately testing medications with results ranging from ineffective to downright toxic. Worse yet, as Andrea Anderson describes it in “Demonstrating Discontent, May 21, 1990,” “there was a dearth of treatments for opportunistic infections, not to mention concerns over funding, opaque clinical trial protocols, and trial requirements that deterred participation and neglected women, minorities, and injection drug users”—the very people who were most at risk for contracting HIV.3


If there was anything to be gained from this debacle, one could argue that it was the birth of important social justice nonprofit movement groups such as ACT UP (AIDS Coalition to Unleash Power), which insisted that drug trials be opened up to the very communities that would be affected by trial results. With a great sense of urgency, ACT UP aggressively and effectively lobbied for desperately needed funding, ultimately winning national support. The legacy? Community advisory boards staffed with leaders from affected communities now weigh in on all aspects of drug trials, and institutional review board (IRB) reviews of human subjects research designs are now standard in the field.

The record of social justice nonprofits is not perfect, but the best ones ensure that representative voices from the communities studied are directly involved in the design of their research. For instance, My Brother’s Keeper embeds its research and practice in core values of health and social justice: participants from the communities being studied are collaborators on the studies and are involved in the entire research process—from study design to publication of reports. As Benenson and Kiesa emphasize, having authentic community representation oversee nonprofit research is critical, because “nonprofit sector research…influences policy, funding, and programmatic decision making, and as such, decisions need to be informed by representative voices of the appropriate stakeholders regarding what is happening in a particular context.”4

A growing number of researchers have committed to community-based participatory research (CBPR) to prevent exploitation of vulnerable populations. CBPR is a collaborative approach whereby affected communities codesign the research process. When researchers focus on and invest in the needs of the communities being studied, those communities can prevent exploitation before it happens by making sure that the research is truly aligned with their needs.

Conducting Research That Is Methodologically Rigorous Yet Still Practical and Accessible

In a Chronicle of Philanthropy article, Phil Buchanan of the Center for Effective Philanthropy states that while the past decade has seen nonprofit sector research expand, much of it has not been as rigorous as it needs to be. As Buchanan wrote:

This matters because nonprofit leaders are looking at what is published to inform—and change—their practices.5

Buchanan points to five challenges: (1) a lack of rigor; (2) a tendency to stretch data to reach conclusions; (3) a failure to collect sufficient original data; (4) limited review of the research literature; and (5) a failure to indicate who funded the research. Academics are trained to conduct research in ways that avoid common data problems, such as biased sampling and ungeneralizable findings, and that training matters precisely because nonprofit leaders use published findings to develop and change their practices.

Academics are more likely to be trained in rigorous research methods, which may include being better equipped to conduct advanced statistical procedures and being more knowledgeable about various sampling techniques. In a typical academic setting, one might be expected to explore sampling techniques that minimize bias. Take, for instance, a survey study on HIV sexual risk behaviors in African-American MSM. When the population of interest contains subgroups that differ on a characteristic such as socioeconomic status, one may want to employ stratified random sampling to ensure that each subgroup is sufficiently represented within the sample. First, the researcher would divide the sampling frame of Black MSM into subgroups by socioeconomic status, such as “upper class,” “upper-middle class,” “middle class,” “upper-lower class,” and “lower class.” The researcher would then randomly select a proportional number of Black MSM from each socioeconomic group. Employing stratified random sampling ensures that Black MSM participants from each socioeconomic group appear in the final sample in proportion to their share of the population. In addition, advanced statistical techniques may be employed to provide more robust and accurate findings.
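
As a concrete illustration, here is a minimal sketch of proportional stratified random sampling in Python; the sampling frame, stratum labels, and sampling fraction are all hypothetical:

```python
import random
from collections import defaultdict

def stratified_sample(frame, strata, fraction, seed=0):
    """Draw a simple random sample of `fraction` from each stratum, so each
    subgroup appears in the sample in proportion to its size in the frame."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for member, stratum in zip(frame, strata):
        by_stratum[stratum].append(member)
    sample = []
    for members in by_stratum.values():
        sample.extend(rng.sample(members, round(len(members) * fraction)))
    return sample

# Hypothetical sampling frame: participant IDs tagged by socioeconomic status.
frame = [f"id_{i}" for i in range(1000)]
strata = (["middle class"] * 500 + ["upper-middle class"] * 300
          + ["lower class"] * 200)

# A 10 percent sample keeps the frame's 50/30/20 mix:
# roughly 50 middle-, 30 upper-middle-, and 20 lower-class members.
sample = stratified_sample(frame, strata, fraction=0.10)
```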

Yet, in a nonprofit setting, the traditional academic approach can be a limitation. Nonprofits are constrained by time and resources, and a nonprofit generally needs to reach an audience not steeped in statistical training. For that reason, r coefficients for correlations—and markers of statistical significance, such as p values—are generally relegated to endnotes. In other words, nonprofits need concrete and efficient recommendations that can be easily understood and acted upon by nonprofit leaders, policymakers, and other stakeholders. Also, while some within the academy promote community-based participatory research, what I mean by “traditional” academic research is research that does not seek input from the communities of interest and that is undertaken because the academy alone deems it worthwhile.

Striking a balance between research rigor and usable research can be challenging, but it is necessary. In my own work, I find that reporting basic statistics (such as percentages) and using basic statistical procedures to test for significant differences (such as chi-squares to test for independence between two variables, regressions to test for associations between variables, and analysis of variance [ANOVA]) provide sufficient rigor while keeping the work accessible to the nonprofit audience that I aim to reach.

For example, every two years, GLSEN conducts a National School Climate Survey, which assesses the school experiences of LGBTQ youth.6 In our report, we primarily include percentages rather than group averages. For instance, we might compare levels of victimization based on sexual orientation between LGBTQ students who have access to LGBTQ-related school resources and supports (Gay-Straight Alliance clubs, LGBTQ-inclusive curricula, supportive educators and administrators, and comprehensive anti-bullying/harassment policies) and LGBTQ students who do not. We may assess mean differences for rigor, but we also provide more accessible percentages in the report—sometimes conducting an additional test, such as a chi-square test of independence, to ensure that the differences between those percentages are also statistically significant. In this way, we do additional analyses to maintain both rigor and accessibility. So, sometimes this kind of work is actually more complex in its analysis than its academic counterpart.
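
To illustrate that extra analytic step, here is a minimal sketch (with invented counts, not actual survey figures) of a chi-square test of independence standing behind a percentage comparison:

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table of LGBTQ students:
# rows = school has a Gay-Straight Alliance (yes / no),
# columns = student reports high victimization (yes / no).
observed = [
    [120, 380],  # GSA available:    120 of 500 report high victimization (24%)
    [210, 290],  # no GSA available: 210 of 500 report high victimization (42%)
]

chi2, p_value, dof, expected = chi2_contingency(observed)

# If p < .05, the report can present the accessible 24% vs. 42% comparison
# as statistically significant, with the test itself detailed in the notes.
print(f"chi2({dof}) = {chi2:.1f}, p = {p_value:.4f}")
```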

Reporting percentages when comparing groups provides clear and usable information for our constituents on the ground, such as educators, students, and other school advocates. Organizing data this way also makes them more accessible for our local GLSEN chapter members (who use our research findings to advocate for LGBTQ-supportive resources in their schools) or for our policy department (which includes our findings in congressional briefings to advocate for policies that support LGBTQ youth in schools).

In nonprofits, research findings should be employed to help advance the organization’s mission and stated goals. In other words, research questions should be guided by what a nonprofit needs to know in order to effectively achieve its goals.

Then there is the task of reporting the findings to a general audience.

In our National School Climate Survey report, we primarily use the chi-square test of independence and analysis of variance (ANOVA) or multivariate analysis of variance (MANOVA) to test, for instance, whether LGBTQ students who have access to LGBTQ-supportive resources experience significantly less victimization compared to LGBTQ students who do not have access to these resources—all of which we detail in the report’s notes. When appropriate, we also control for factors that may be related to both the existence of resources and the outcome of interest (e.g., victimization experiences), such as region and locale (urban, suburban, rural). These statistical tests address our organization’s stated goals.
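
As a rough sketch of that kind of analysis (with simulated data and hypothetical column names, not GLSEN's actual variables or results), an ANOVA on victimization by resource access, controlling for locale and region, might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 600

# Simulated survey records: resource access plus two control factors.
df = pd.DataFrame({
    "has_gsa": rng.choice(["yes", "no"], size=n),
    "locale": rng.choice(["urban", "suburban", "rural"], size=n),
    "region": rng.choice(["northeast", "south", "midwest", "west"], size=n),
})
# Build in lower average victimization where a GSA exists.
df["victimization"] = rng.normal(50, 10, n) - 5 * (df["has_gsa"] == "yes")

# Type II ANOVA: tests the GSA effect while adjusting for locale and region.
model = smf.ols("victimization ~ C(has_gsa) + C(locale) + C(region)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```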

Actionable and accessible research must also yield practical data. In an NPQ article, Elizabeth Castillo cites social-purpose consultant Debra Natenshon as saying, “The mechanisms for capturing the data will be more practical if the categories are designed by the end-users to meet their needs.”7 By “end-users,” Natenshon is referring to the community-based practitioners who should be driving and shaping the study design. For a study to be meaningful, the questions must address the needs of the communities being studied. For example, at GLSEN, our National School Climate Survey is developed in response to the needs of the organization and the LGBTQ school movement in general. The questions we ask are driven by policy advocates, community organizers, and practitioners who seek more effective responses to the challenges they face.

Conclusion: The Role of the Pracademic

Critical issues that pracademics must contend with in the field include bringing a critical lens to social justice research, staying embedded in and attentive to the communities being studied so that research does not exploit them, and conducting research that is rigorous yet also practical and accessible. To reiterate, three ways that pracademics can help bridge the best of academic and nonprofit research while avoiding pitfalls are:

  1. Conduct research through a critical lens so that findings and statistically significant results are contextualized within the community being studied.
  2. Ensure community members are present in both research design and evaluation.
  3. Balance research rigor with clear, nontechnical language to deliver meaningful, measurable, and sustainable results for community members.

Notes

  1. Rodney Hopson, “Rodney Hopson on Culturally Responsive Evaluation,” AEA365: A Tip-a-Day by and for Evaluators (blog), March 7, 2010; see also Jodi Benenson and Abby Kiesa, “Research and Evaluation in the Nonprofit Sector: Implications for Equity, Diversity, and Inclusion,” Nonprofit Quarterly, October 19, 2016.
  2. DeNeen L. Brown, “‘You’ve got bad blood’: The horror of the Tuskegee experiment,” Retropolis, Washington Post, May 16, 2017.
  3. Andrea Anderson, “Demonstrating Discontent, May 21, 1990,” The Scientist, July 17, 2017.
  4. Benenson and Kiesa, “Research and Evaluation in the Nonprofit Sector.”
  5. Phil Buchanan, “As Nonprofit ‘Research’ Proliferates, It Must Be Viewed With Healthy Skepticism,” Chronicle of Philanthropy, March 10, 2013.
  6. See, for example, Joseph G. Kosciw et al., The 2015 National School Climate Survey: The Experiences of Lesbian, Gay, Bisexual, Transgender, and Queer Youth in Our Nation’s Schools (New York: GLSEN, 2016).
  7. Elizabeth A. Castillo, “Are We There Yet? A Conversation on Performance Measures in the Third Sector,” Nonprofit Quarterly, December 8, 2015.