
Editors’ note: This piece is from Nonprofit Quarterly Magazine’s winter 2024 issue, “Health Justice in the Digital Age: Can We Harness AI for Good?”
Neuroscientists are increasingly harnessing artificial intelligence to advance their work.1 AI promises to help scientists leverage massive datasets and brain simulations to test new diagnoses and treatments at scale—without the need for risky or costly human participation.2 In this way and many others, AI could facilitate exponentially faster, and more significant, medical advances. But the shift away from manual processes in favor of automated intelligence needs greater scrutiny: AI leverages huge amounts of personal information drawn from existing human datasets.3 By law, these data must remain anonymous when used.4 In practice, that has proven difficult—a 2011 systematic review of re-identification attacks on American healthcare data revealed high rates of re-identification, raising ethical concerns.5
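To see why, consider a toy linkage attack. The following sketch (in Python, with invented names, fields, and records; none of this comes from the cited review) shows how the few demographic quasi-identifiers typically left in an anonymized dataset can be joined against a public dataset, such as a voter roll, to recover identities:

```python
# Toy linkage attack: all names, fields, and values below are invented.
import pandas as pd

# "De-identified" clinical data: names removed, but quasi-identifiers
# (ZIP code, birth year, sex) retained.
clinical = pd.DataFrame({
    "zip": ["02139", "02139", "60615"],
    "birth_year": [1984, 1991, 1984],
    "sex": ["F", "M", "F"],
    "diagnosis": ["major depressive disorder", "anxiety", "PTSD"],
})

# Public data an attacker might hold (e.g., a voter roll).
public = pd.DataFrame({
    "name": ["Jane Doe", "Maya Poe"],
    "zip": ["02139", "60615"],
    "birth_year": [1984, 1984],
    "sex": ["F", "F"],
})

# Joining on the quasi-identifiers alone links names to diagnoses:
# each unique (zip, birth_year, sex) combination re-identifies a patient.
linked = public.merge(clinical, on=["zip", "birth_year", "sex"])
print(linked[["name", "diagnosis"]])
```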
This brings forth other pressing questions, such as: How are AI datasets acquired in the first place? Do patients know, let alone understand, that their data are being used by AI? Who ultimately controls the data, and to what ends? How are existing and emerging biases in research influencing AI’s future applications?
AI is transforming neuroscience, and addressing these issues is essential for any hopes of an ethical path forward.
Neuroscience, broadly, deals with the nervous system and the brain, including mental health. If applied thoughtfully, AI could reduce existing biases in that area; without diligence and oversight, however, AI-driven innovations will worsen the racial and economic inequities that prevail. Which path wins out will depend on the datasets and designs of the emerging technologies, as well as on whether robust regulations are put in place to guide the scientists at the helm. Neuroethics, a field that explores the moral and ethical implications of neuroscience,6 must rise to this new challenge as the United States ponders the potential risks and benefits.
AI’s Potential to Bridge Gaps in Mental Health Diagnosis and Treatment
Opportunity gaps in the diagnosis and treatment of mental illness render AI’s potential impact particularly promising when it comes to neuroscience and mental health.
African Americans diagnosed with depression often face a severely debilitating form of the condition, major depressive disorder (MDD),7 a pattern conjectured to result from delayed treatment.8 And research suggests that African Americans may suffer from “clinician bias and misdiagnosis” due to differing presentations of self-reported depression symptoms.9 In addition, those who are diagnosed often experience more severe and disabling symptoms than those experienced by other races and ethnicities.10
Only 35.1 percent of Latinx Americans with mental illness receive treatment annually—the US average is 46.2 percent.11 Unique barriers to care, including stigma vis-à-vis mental health, language discrepancies, and poverty, put Latinx people in the United States at higher risk of receiving inadequate treatment than the broader population.12
Mental health issues are also vastly more severe, undiagnosed, and untreated among Native Hawaiian and Pacific Islander (NHPI) communities: a little over one in three NHPI teens (grades 7, 9, and 11) in the California public school system report feeling depressed, and 22 percent of 11th graders have considered suicide—well above California’s average of 16 percent for that age group.13 Exacerbating this mental health crisis is the stigma NHPI individuals report regarding mental illness, which impedes their ability to get help.14 A dearth of mental health providers with the cultural understanding needed to work with NHPI youth can also lead to their misdiagnosis and underdiagnosis.15
Suicide rates are higher among Native Americans than among any other racial group in America, according to the Centers for Disease Control and Prevention.16 Research has also found that Native American reservation populations are “two to three times more likely” to experience PTSD than the broader population.17
And although nearly 18 percent of the broader US population seeks mental health help, just 8.6 percent of Asian Americans (“Asian Indian, Cambodian, Chinese, Indonesian, Korean, Taiwanese, Thai, and Vietnamese”) do.18 Those who were surveyed reported that they felt tremendous pressure to be academically or professionally successful, and that to stay focused, they needed to ignore or deny mental health needs.19
The stereotypes and barriers differ from group to group, but the outcome is consistently worse for minorities suffering from mental illness: across the board, minority groups are more likely to delay, or altogether avoid, seeking mental health treatment, further deepening the disparities.20 Indeed, the divergences in diagnosis rates among ethnic groups could well be due to cultural stigmas around mental health, inhibited access to proper diagnosis, and research and practitioner biases, rather than to actual differences in symptoms experienced.
Cost is also a hindrance. AI could play a crucial role in improving the accessibility of unbiased, accurate diagnostic tools for diverse populations at a fraction of the cost—a cost that is a heavy burden for both individuals and the country. Yale University’s latest estimates put the price of the country’s failing mental health systems at $282 billion every year.21 Most important, unsupportive infrastructure is costing lives. A depression diagnosis from a licensed psychiatrist can cost between $100 and $300 without insurance,22 therapy can range from $65 to $250 per session,23 and antidepressants can be nearly $100 for a 30-day supply.24 Meanwhile, as of 2022, 17.1 percent of Black Americans lived below the poverty line (versus 7.7 percent of White Americans), rendering any of these options impossible without financial assistance.25 And so, unsurprisingly, Black Americans are far more likely to turn to emergency departments than to mental health specialists.26 This delays diagnosis and treatment past the point of early detection and prevention—just one example among many of the burden that impossible costs place on the communities that often need mental health services the most.
Problems of cost, of course, are rooted in economic injustice, racial injustice, and the commodification of profit-motivated healthcare in the United States. That said, if done well and carefully, AI could be used to help cut costs in some substantive areas. For example, in the very first stages of the diagnosis process, AI is said to lead to “tremendous cost savings.”27 And a single therapy session can often cost the same as a yearlong subscription to an AI-facilitated depression treatment app.28 AI is even helping clinicians monitor changes in patients’ mental health from afar: by tapping into smartphone data, it can track circadian rhythms, disruptions of which can indicate anxiety. Such AI-facilitated monitoring can reduce the number of in-person follow-up appointments needed for those in care.29
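What might such monitoring look like under the hood? The sketch below is a hypothetical illustration only (it assumes a log of smartphone screen-unlock timestamps and an arbitrary alert threshold, and it is not the method of any study cited here), but it captures the basic idea: compare weekly hour-of-day activity profiles and flag large shifts in routine.

```python
# Hypothetical sketch: flagging circadian disruption from smartphone
# activity timestamps. The data, metric, and threshold are illustrative.
from collections import Counter
from datetime import datetime

def hourly_profile(timestamps):
    """Fraction of activity events falling in each hour of the day."""
    counts = Counter(t.hour for t in timestamps)
    total = sum(counts.values()) or 1
    return [counts.get(h, 0) / total for h in range(24)]

def circadian_shift(week_a, week_b):
    """Total variation distance between two weekly hour-of-day profiles:
    0 means an identical routine, 1 a completely shifted routine."""
    pa, pb = hourly_profile(week_a), hourly_profile(week_b)
    return 0.5 * sum(abs(a - b) for a, b in zip(pa, pb))

# Hypothetical screen-unlock logs: a daytime routine, followed by a week
# of late-night activity of the kind associated with disrupted sleep.
baseline = [datetime(2024, 1, d, h) for d in range(1, 8) for h in (8, 12, 18, 22)]
recent = [datetime(2024, 1, d, h) for d in range(8, 15) for h in (2, 3, 4, 13)]

shift = circadian_shift(baseline, recent)
if shift > 0.4:  # arbitrary, illustrative alert threshold
    print(f"Routine shift score {shift:.2f}: flag for clinician follow-up")
```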
And in a trial of a little under 65,000 British patients suffering from anxiety, those who used AI for diagnosis and treatment recommendations were found to be 58 percent more likely to experience recovery, while those who went through the standard mental health referral process had only a 27.4 percent recovery rate.30 Apparently, ease of use, accessibility, and reduced wait times led to reduced treatment dropout rates, improved accuracy of treatment, and increased “recovery rates.”31 Such results show that AI could at least potentially help to diagnose communities unable to afford traditional avenues for diagnosis.
Similarly, AI-powered apps are proving effective for depression diagnosis. In one study using Twitter, an AI model assessed the language used in posts to determine, with 92 percent accuracy, whether the poster had depression.32 AI-powered chatbots that mirror human therapists have also been successful in reducing depressive symptoms.33
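As a rough illustration of how such language-based screening works, the sketch below trains a generic text classifier with scikit-learn; it is not the cited study’s model, and the handful of inline “posts” are invented.

```python
# Generic sketch of language-based depression screening: a text classifier
# over post language. Not the cited study's model; the tiny "corpus" here
# is invented, and 1 means the author screened positive for depression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "can't get out of bed again, nothing feels worth it",
    "everything is exhausting and pointless lately",
    "great run this morning, feeling strong",
    "excited to see friends this weekend",
]
labels = [1, 1, 0, 0]

# TF-IDF features plus logistic regression: a common, interpretable baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# The model outputs a probability, not a diagnosis.
print(model.predict_proba(["so tired of everything, what's the point"])[0][1])
```

Any real system would require a large, consented, and demographically representative corpus, along with clinical validation across groups; exactly the data this article argues are so often missing.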
Failures in Racial and Ethnic Representation
The potential AI bridges noted above, of course, can only work as well as they are designed. One of the most pressing concerns is the composition of the datasets used to develop AI models. The data fueling AI’s algorithms are more often than not grossly unrepresentative of the broader population, leading to biased outcomes that reinforce existing disparities in healthcare.
It is no accident that the US medical system has ended up where it is. As early as the 16th century, enslaved Africans were subject to medical experimentation on US plantations.34 They were repeatedly exploited for painful experiments, directed by White physicians, to support egregious claims—such as insanity rates being higher in Black communities living farther north (where slavery either wasn’t as prevalent or did not exist).35 In the 1800s, physicians (Samuel Morton and Charles Caldwell, to name two) led studies on skull sizes, concluding that enslaving Africans was acceptable because their skulls appeared to be “tamable.”36 Physician Samuel Cartwright even invented two mental illnesses, “drapetomania” and “dysaesthesia aethiopica,” alleging that they caused “enslaved people to run away, perform subpar work, and not feel pain from physical punishment.”37
Scientific racism is still rife in the United States. Just three years ago, the American Psychiatric Association admitted to a history replete with discrimination, abusive experimentation, and victimization of BIPOC communities “in the name of scientific evidence.”38 The APA said its “leaders actively supported eugenics for decades, calling for sterilization initiatives for ‘unfit and inferior races.’”39 And federally financed programs have been accused of sterilizing 25 to 42 percent of Native American women in the 1970s—and it is reported that nearly 25 percent of Native American and Alaska Native women have undergone sterilization.40
So it’s not surprising that White men are still vastly overrepresented in most clinical trials—not only because science tends to overindex on this demographic cohort but also because BIPOC communities have no reason to trust that their data will now be used to help rather than hinder, or even injure, their health. The crux of the issue is that AI models will aid these communities only if the models can access their data. And if AI diagnostic systems are built off homogenized data from White, male Americans, we obviously cannot be confident that they will work accurately for everyone else.
One way to foster confidence in this field is by improving diversity among medical practitioners. Patients from underrepresented communities benefit from having access to doctors who reflect their communities, but racial representation among doctors doesn’t come close to mirroring the demographic makeup of the United States: only 5 percent of American doctors are Black. These doctors, while rare, are an important and impactful mechanism for enrolling more Black Americans in crucial clinical trials.41 Meanwhile, a decade-long systematic review of cognitive neuroscience research revealed that just 10 percent of studies reported race, and merely 4 percent reported ethnicity.42 Without a meaningful fix to this severe underreporting, AI systems are highly likely to overlook variations that could occur across different population groups. Worst of all, these automated processes wouldn’t even know the demographic makeup of the datasets they’re tapping into.
This glaring underrepresentation means, for instance, that we can’t even identify whether different racial and ethnic groups have different neurological markers for the same conditions—though, to be clear, the degree of genetic and psychological differences among races and ethnic groups is under debate; it is certainly well understood that race is primarily a social, not biological, construct. And without meticulous ethical considerations and a commitment to responsible research practices, such claims risk devolving into harmful stereotypes and perpetuating systemic inequalities. Cultural and psychosocial elements, however, do have distinct influences on BIPOC communities when it comes to mental health.43
AI models trained on datasets that lack this level of diversity are ill-equipped to recognize such variations. Take, for example, the fact that Black Americans are diagnosed with psychotic disorders three to four times more often than White Americans.44 While explanations for this variance are highly debated, research suggests racialized associations were made between African Americans exhibiting psychosis symptoms and so-termed “disruptive or socially deviant behavior patterns,” and as a result, Black Americans are more likely to be aggressively diagnosed.45 Essentially, in research where doctors did not know a patient’s race (meaning, diagnosis was “blinded”), Black and White patients were diagnosed with depressive and manic symptoms at similar rates.46 But once doctors knew their patients’ races, Black Americans were much more likely to be diagnosed with schizophrenia—because their symptoms of psychosis were given more weight.47
A tool built off the back of datasets replete with incorrect diagnoses of psychosis in Black patients—for example—would exponentially worsen existing biases in diagnosis. This poses a huge danger to marginalized groups, as AI systems will misdiagnose, overlook, or overindex on the wrong biomarkers.
In addition, unrepresentative data pose a serious risk vis-à-vis treatment eligibility. In the United States, for instance, AI systems could soon influence insurance companies and healthcare providers making decisions regarding diagnosis and treatment eligibility. Without a concerted effort to integrate a broader range of demographic data into AI models, already marginalized populations will be further sidelined by the very technologies meant to serve them.
The importance of diverse data also becomes evident when looking at the details of specific conditions. For example, research has shown that people with familial MDD often have anatomical differences in their corpus callosum (the nerves connecting the brain’s hemispheres) compared to those without familial MDD.48 AI could leverage this research to develop a robust diagnostic tool for heritable MDD. The study makes no mention of whether the race and ethnicity of its patients were recorded, however—leaving a critical gap in understanding how these anatomical variations might differ across demographic groups.49 Without this information, the data are inherently skewed, and if AI were to leverage them, the technology would both intensify and accelerate the propagation of biased medical tools.
Finally, there’s a host of research investigating how AI might improve the identification of depression biomarkers.50
A biomarker is a tangible measurement—cortisol levels, for example—that can convey key information about a specific condition; acute MDD, for instance, has been found to be associated with elevated cortisol resulting from stress.51 First, of course, racial bias in research would need to be corrected. In any event, while much progress has been made in pinpointing biomarkers associated with a number of mental illnesses, there are no FDA-approved tests just yet.52
For now, biomarker testing is most commonly used for cancer diagnosis and treatment—although not all states have insurance coverage for biomarker testing, and the majority of oncology providers cite this as a barrier to providing appropriate testing for patients.53 But if properly used for depression diagnosis, biomarker assessment could be automated, helping to mitigate the clinician biases that result in the under- or overdiagnosis of minority groups.54 AI and neuroscience could work together to forge robust, science-based training and diagnostic programs for clinicians. Diagnosis would be based on objective neurological data rather than on the Hamilton Depression Rating Scale (the most widely used clinician-administered assessment), for which there is limited evidence of accuracy among Black Americans.55 The latest scientific literature on African American depression highlights measurements and symptoms that are not included on the HDRS.56 Long considered a “gold standard,” the diagnostic test is really just a White standard.57
Machine learning models can already identify the brain regions involved in depression.58 Depression is associated with unusual connectivity patterns in frontostriatal and limbic networks (responsible for adaptivity and emotion regulation and memory, respectively).59 And a person’s treatment success with medication or therapy, for instance, will depend on their brain’s connectivity in those regions. If learning models are trained on datasets that include ethnically diverse populations, then AI might be able to deliver this type of unbiased depression testing. But it’s all a lost cause if AI is working off purely White, male brain scans.
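A minimal sketch of that kind of pipeline, together with the demographic audit this article argues such models need, might look like the following. Everything here is hypothetical: the connectivity matrices are random placeholders, and the region indices, responder labels, and demographic groups are invented for illustration.

```python
# Hypothetical sketch: predict treatment response from functional
# connectivity features, then check accuracy separately per group.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_subjects, n_regions = 200, 10

# Placeholder connectivity matrices (region-by-region correlations) and
# invented indices standing in for frontostriatal / limbic regions.
conn = rng.uniform(-1, 1, size=(n_subjects, n_regions, n_regions))
frontostriatal, limbic = [0, 1, 2], [3, 4, 5]

def network_mean(c, idx):
    """Mean connectivity within one network of interest."""
    return c[np.ix_(idx, idx)].mean()

X = np.array([[network_mean(c, frontostriatal), network_mean(c, limbic)]
              for c in conn])
y = rng.integers(0, 2, n_subjects)          # hypothetical responder labels
group = rng.choice(["A", "B"], n_subjects)  # hypothetical demographic groups

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.5, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# The audit: does accuracy hold for every demographic group?
for g in ("A", "B"):
    mask = g_te == g
    print(g, accuracy_score(y_te[mask], clf.predict(X_te[mask])))
```

The final loop is the point: a model that reports only a single overall accuracy can hide failures for the very groups already underserved.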
Efforts to clean and curate data used in big data ingestion for AI models will be fundamental to building inclusive datasets. If these biases aren’t acknowledged and corrected, AI will only perpetuate and feed into these systemic inequities.
***
Given the historical, inherent biases in society generally and healthcare specifically, AI-driven advancements are not going to serve minority groups as a matter of course. Unless they are tailored to represent and serve all communities equally, they will exacerbate existing biases and disparities.
We need representative data. How do we get there? The perhaps trivial-seeming answer is trust.
Clinical-trial diversity is best fostered through community engagement from more researchers, outreach professionals, and doctors who represent those very same minority communities.60 A number of organizations are exploring how to ensure that medical data are accurate and representative. The efforts are valiant, but healthcare must move faster than other industries already in the thick of AI transformation: almost a third of the planet’s data (30 percent) is created by the healthcare industry alone, and that share is growing faster than in even the technology and finance sectors.61 The Surveillance Technology Oversight Project and the Center for Democracy & Technology are already addressing algorithmic inequalities through public education, research, and regulatory advocacy.62 Unfortunately, as is usually the case with entrenched, complex problems, there is no quick and easy solution. Where there is promise regarding the role AI might play in helping to improve the state of mental healthcare, there is also peril.
Depression Treatment and AI: Magnetic Stimulation
Beyond diagnosis, the latest neuroscientific discoveries for treatment-resistant depression could also harness AI for better patient outcomes. Transcranial Magnetic Stimulation (TMS) is a noninvasive treatment first approved by the FDA in 2008.63 The procedure uses an electromagnetic coil to deliver magnetic pulses to the part of the brain responsible for mood control. It’s not entirely painless, but side effects, described as “mild to moderate,” are said to lessen over time with additional sessions.64 Studies report that TMS has shown promise, alleviating severe depressive symptoms for 50 to 60 percent of eligible patients, with roughly one-third of those patients achieving full remission.65 But it’s expensive and remains out of reach for many, with a course of treatment ranging from $6,000 to $15,000.
While TMS is not itself an AI-driven technology, AI has the potential to skyrocket its effectiveness and accessibility. A new TMS study is leveraging AI to assess 60,000 brain scans for that very purpose.66 The goal is to better predict who is most likely to benefit from TMS treatment. Ideally, AI will help to improve TMS’s success rates and cut costs for patients.67
With greater accessibility come heightened risks and questions of ethical ramifications. For instance, if individuals are in dire mental distress, will they be able to appropriately consent to TMS treatment? Moreover, what influence could loved ones have on a patient’s decision to undergo treatment? And if insurance policyholders are unwillingly being tested with these kinds of systems, there are also concerns over who owns their data once AI becomes involved. Questions of data possession rights and the right to self-determination are massive ones that must be addressed by neuroethicists.
Indeed, as with other potential applications of AI in neuroscience, there are already concerns about the lack of representative data for TMS research. For the 60,000 brain scans used in the recent wide-scale TMS study, no demographic data are available to the public. Most TMS studies have failed to report the racial or ethnic data of their participants, and there is no reason to believe that the new study differs.68 This lack of transparency makes it impossible to assess how well the AI algorithms could serve minority populations, raising further questions about fairness and equity regarding access to cutting-edge treatments.
Such lack of representation is a key reason minority groups often feel disconnected from the healthcare system, leading many to avoid engaging with it altogether. This is especially true for those struggling with mental health. Only 25 percent of African Americans seek needed mental health treatment, compared to 40 percent of White Americans.69 And when they do, it’s often much too late.70
AI has the potential to change this dynamic and to be especially transformative for communities in which mental health conditions bear massive stigmas. AI-driven diagnoses, built from inclusive datasets, would validate their experiences and could help patients and their families understand depression as a legitimate medical condition rather than simply a negative outlook or bad mood. And with clear, unbiased information at the helm, AI could also increase the likelihood of seeking treatment by encouraging more proactive engagement from underserved populations.
Notes
1. Tom Macpherson et al., “Natural and Artificial Intelligence: A brief introduction to the interplay between AI and neuroscience research,” Neural Networks 144 (December 2021): 603–13.
2. Chellammal Surianarayanan et al., “Convergence of Artificial Intelligence and Neuroscience towards the Diagnosis of Neurological Disorders—A Scoping Review,” Sensors 23, no. 6 (March 2023): 3062.
3. Caleb J. Colón-Rodríguez, “Shedding Light on Healthcare Algorithmic and Artificial Intelligence Bias,” Office of Minority Health News, Office of Minority Health, U.S. Department of Health and Human Services, July 12, 2023, minorityhealth.hhs.gov/news/shedding-light-healthcare-algorithmic-and-artificial-intelligence-bias.
4. “Guidance Regarding Methods for De-identification of Protected Health Information in Accordance with the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule,” Office for Civil Rights, U.S. Department of Health and Human Services, November 26, 2012, www.hhs.gov/hipaa/for-professionals/special-topics/de-identification/index.html.
5. Khaled El Emam et al., “A Systematic Review of Re-Identification Attacks on Health Data,” PLoS One 6, no. 12 (December 2011): e28071; and Khaled El Emam et al., “Correction: A Systematic Review of Re-Identification Attacks on Health Data,” PLoS One 10, no. 4 (April 2015): e0126772.
6. “Neuroethics,” The BRAIN Initiative, National Institutes of Health, accessed December 9, 2024, nih.gov/research/neuroethics.
7. Rahn Kennedy Bailey, Josephine Mokonogho, and Alok Kumar, “Racial and ethnic differences in depression: current perspectives,” Neuropsychiatric Disease and Treatment 15 (February 2019): 603–09.
8. Thomas McGuire and Jeanne Miranda, “Racial and Ethnic Disparities in Mental Health Care: Evidence and Policy Implications,” Health Affairs (Millwood) 27, no. 2 (2008): 393–403.
9. Amanda Woodward et al., “Major Depressive Disorder among Older African Americans, Caribbean Blacks, and Non-Hispanic Whites: Secondary Analysis of the National Survey of American Life,” Depression and Anxiety 30 (2013): 589–97.
10. Bailey, Mokonogho, and Kumar, “Racial and ethnic differences in depression.”
11. “Hispanic/Latinx,” National Alliance on Mental Illness, accessed November 21, 2024, nami.org/your-journey/identity-and-cultural-dimensions/hispanic-latinx/.
12. Ibid.
13. A Child is a Child 2023 Snapshot: Hawaiian and Pacific Islander Children’s Health (Los Angeles, CA: The Children’s Partnership, 2023).
14. Andrew Subica et al., “Mental illness stigma among Pacific Islanders,” Psychiatry Research 273 (March 2019): 578–85.
15. Gayle Iwamasa, “Recommendations for the Treatment of Asian-American/Pacific Islander Populations,” American Psychological Association, 2012, www.apa.org/pi/oema/resources/ethnicity-health/asian-american/psychological-treatment.
16. “Native American Mental Health: What You Need To Know,” McLean Hospital, Mass General Brigham, accessed November 21, 2024, mcleanhospital.org/essential/native-american-mh.
17. Ibid.
18. Koko Nishi, “Mental Health Among Asian-Americans,” Students’ Corner, American Psychological Association, 2012, apa.org/pi/oema/resources/ethnicity-health/asian-american/article-mental-health.
19. Sunmin Lee et al., “Model Minority at Risk: Expressed Needs of Mental Health by Asian American Young Adults,” Journal of Community Health 34, no. 2 (April 2009): 144–52.
20. Bailey, Mokonogho, and Kumar, “Racial and ethnic differences in depression.”
21. Mike Cummings, “Novel study quantifies immense economic costs of mental illness in the U.S.,” Yale News, April 22, 2024, news.yale.edu/2024/04/22/novel-study-quantifies-immense-economic-costs-mental-illness-us.
22. BetterHelp editorial team, “What Is A Psychiatrist And How Much Does Psychiatry Cost?,” BetterHelp, last modified October 22, 2024, betterhelp.com/advice/psychiatry/how-much-does-a-psychiatrist-cost/.
23. “How Much Does Therapy Cost?,” GoodTherapy Blog, GoodTherapy, accessed November 22, 2024, goodtherapy.org/blog/faq/how-much-does-therapy-cost.
24. “How Much Antidepressants Cost Without Insurance And Other Costs of Depression,” Enhance Health, February 23, 2023, com/blog/how-much-antidepressants-cost-without-insurance-and-other-costs-of-depression/.
25. Em Shrider, “Black Individuals Had Record Low Official Poverty Rate in 2022,” United States Census Bureau, September 12, 2023, census.gov/library/stories/2023/09/black-poverty-rate.html.
26. M. J. Arnett et al., “Race, Medical Mistrust, and Segregation in Primary Care as Usual Source of Care: Findings from the Exploring Health Disparities in Integrated Communities Study,” Journal of Urban Health 93, no. 3 (May 2016): 456–67.
27. Narendra Khanna et al., “Economics of Artificial Intelligence in Healthcare: Diagnosis vs. Treatment,” Healthcare (Basel) 10, no. 12 (December 2022): 2493.
28. Ibid.; and Eugene Klishevich, “How AI Is Expanding The Mental Health Market,” Forbes, June 25, 2024, www.forbes.com/councils/forbestechcouncil/2024/06/25/how-ai-is-expanding-the-mental-health-market/.
29. Sarah Darley et al., “Understanding How the Design and Implementation of Online Consultations Affect Primary Care Quality: Systematic Review of Evidence With Recommendations for Designers, Providers, and Researchers,” Journal of Medical Internet Research 24, no. 10 (2022): e37436; and see Taylor A. Braund, “Smartphone Sensor Data for Identifying and Monitoring Symptoms of Mood Disorders: A Longitudinal Observational Study,” JMIR Mental Health 9, no. 5 (2022): e35549.
30. Max Rollwage et al., “Using Conversational AI to Facilitate Mental Health Assessments and Improve Clinical Efficiency Within Psychotherapy Services: Real-World Observational Study,” JMIR AI 2 (2023): e44358.
31. Ibid.
32. AbdelMoniem Helmy, Radwa Nassar, and Nagy Ramdan, “Depression detection for twitter users using sentiment analysis in English and Arabic tweets,” Artificial Intelligence in Medicine 147 (January 2024): 102716.
33. Hao Liu et al., “Using AI chatbots to provide self-help depression interventions for university students: A randomized trial of effectiveness,” Internet Interventions 27 (March 2022): 100495.
34. Inez Ruiz-White et al., “Racial and Ethnic Disparities in Physical and Mental Health Care and Clinical Trials,” Journal of Clinical Psychiatry 84, no. 4 (June 2023): 23ah14887.
35. Ibid.
36. Ibid.
37. Ibid.
38. “APA’s Apology to Black, Indigenous and People of Color for Its Support of Structural Racism in Psychiatry,” American Psychiatric Association, January 18, 2021, psychiatry.org/news-room/apa-apology-for-its-support-of-structural-racism.
39. Tori DeAngelis and Efua Andoh, “Confronting past wrongs and building an equitable future,” Monitor on Psychology 53, no. 2 (March 2022): 22.
40. Jane Lawrence, “The Indian Health Service and the Sterilization of Native American Women,” American Indian Quarterly 24, no. 3 (Summer 2000): 400–419; Christina J. J. Cackler, Valerie B. Shapiro, and Maureen Lahiff, “Female Sterilization and Poor Mental Health: Rates and Relatedness among American Indian and Alaska Native Women,” Women’s Health Issues 26, no. 2 (March–April 2016): 168–75; and Sandra Knispel, “Native Americans, government authorities, and reproductive politics,” News Service, University of Rochester, October 23, 2019, www.rochester.edu/newscenter/native-americans-government-authorities-and-the-reproductive-politics-403792/.
41. Ruiz-White et al., “Racial and Ethnic Disparities in Physical and Mental Health Care and Clinical Trials.”
42. Elijah Sterling et al., “Demographic reporting across a decade of neuroimaging: a systematic review,” Brain Imaging and Behavior 16 (2022): 2785–96.
43. Tori DeAngelis, “Reimagining mental health for communities of color,” American Psychological Association, last modified March 7, 2022, apa.org/monitor/2021/10/career-bipoc-communities.
44. Robert C. Schwartz and David M. Blankenship, “Racial disparities in psychotic disorder diagnosis: A review of empirical literature,” World Journal of Psychiatry 4, no. 4 (2014): 133–40.
45. Robert C. Schwartz and Kevin P. Feisthamel, “Disproportionate Diagnosis of Mental Disorders Among African American Versus European American Clients: Implications for Counseling Theory, Research, and Practice,” Journal of Counseling & Development 87, no. 3 (Summer 2009): 295–301.
46. Michael Gara et al., “Influence of Patient Race and Ethnicity on Clinical Assessment in Patients With Affective Disorders,” Archives of General Psychiatry 69, no. 6 (2012).
47. Ibid.
48. See Eleonore van Sprang et al., “Familial risk for depressive and anxiety disorders: associations with genetic, clinical, and psychosocial vulnerabilities,” Psychological Medicine 52, no. 4 (July 2020): 696–706; and Acioly L. T. Lacerda et al., “Anatomical MRI study of corpus callosum in unipolar depression,” Journal of Psychiatric Research 39, no. 4 (July 2005): 347–54.
49. Lacerda et al., “Anatomical MRI study of corpus callosum in unipolar depression.”
50. Fabeha Zafar et al., “The role of artificial intelligence in identifying depression and anxiety: A comprehensive literature review,” Cureus 16, no. 3 (March 2024): e56472.
51. Sanjay Nandam et al., “Cortisol and Major Depressive Disorder—Translating Findings From Humans to Animal Models and Back,” Frontiers in Psychiatry 10 (January 2020): 974.
52. Yunus Hacimusalar and Ertuğrul Eşel, “Suggested Biomarkers for Major Depressive Disorder,” Archives of Neuropsychiatry 55, no. 3 (May 2018): 280–90.
53. ADVI, Payer Coverage Policies of Tumor Biomarker and Pharmacogenomic Testing (Washington, DC: American Cancer Society Cancer Action Network, 2023).
54. Carolin Zierer, Corinna Behrendt, and Anja Christina Lepach-Engelhardt, “Digital biomarkers in depression: A systematic review and call for standardization and harmonization of feature engineering,” Journal of Affective Disorders 356 (July 2024): 438–49.
55. Pim Cuijpers et al., “Self-reported versus clinician-rated symptoms of depression as outcome measures in psychotherapy research on depression: A meta-analysis,” Clinical Psychology Review 30, no. 6 (August 2010): 768–78.
56. Amy Zhang and Faye Gary, “Discord of Measurements in Assessing Depression among African Americans with Cancer Diagnoses,” International Journal of Culture and Mental Health 6, no. 1 (December 2011): 58–71.
57. Adi Berko et al., “Development and evaluation of the HRSD-D, an image-based digital measure of the Hamilton rating scale for depression,” Scientific Reports 12 (2022): 14342.
58. Leonardo Tozzi et al., “Personalized brain circuit scores identify clinically distinct biotypes in depression and anxiety,” Nature Medicine 30 (2024): 2076–87.
59. Andrew Drysdale et al., “Resting-state connectivity biomarkers define neurophysiological subtypes of depression,” Nature Medicine 23, no. 1 (December 2016): 28–38.
60. Ruiz-White et al., “Racial and Ethnic Disparities in Physical and Mental Health Care and Clinical Trials.”
61. Christophe Weber, “Data and trust: the two pillars of value-based healthcare,” World Economic Forum, January 17, 2024, weforum.org/stories/2024/01/value-based-healthcare-data-trust/.
62. “About Us,” Surveillance Technology Oversight Project (S.T.O.P.), accessed November 24, 2024, www.stopspying.org/; and Robert Gorwa and Dhanaraj Thakur, Real Time Threats: Analysis of Trust and Safety Practices for Child Sexual Exploitation and Abuse (CSEA) Prevention on Livestreaming.
63. Cohen Healthcare Law Group, “The FDA Transcranial Magnetic Stimulation Approval Process,” accessed November 22, 2024, com/2023/06/the-fda-transcranial-magnetic-stimulation-approval-process/.
64. “Transcranial magnetic stimulation,” Mayo Clinic, accessed November 22, 2024, mayoclinic.org/tests-procedures/transcranial-magnetic-stimulation/about/pac-20384625.
65. Adam Stern, “Transcranial magnetic stimulation (TMS): Hope for stubborn depression,” Harvard Health Blog, Harvard Health Publishing, Harvard Medical School, October 27, 2020, www.health.harvard.edu/blog/transcranial-magnetic-stimulation-for-depression-2018022313335; see also Anish Mitra et al., “Targeted neurostimulation reverses a spatiotemporal biomarker of treatment-resistant depression,” Proceedings of the National Academy of Sciences 120, no. 21 (May 2023): e2218958120.
66. Sidney Taiko Sheehan, “New AI-driven initiative could optimize brain stimulation for treatment resistant depression,” Keck School of Medicine of USC, news release, February 14, 2024, usc.edu/news/new-ai-driven-initiative-could-optimize-brain-stimulation-for-treatment-resistant-depression/; and “Global Deep Learning Initiative to Understand Outcomes in Major Depression,” Abstract/Proposal, Research Portfolio Online Reporting Tools (RePORT), National Institutes of Health, 2023, reporter.nih.gov/search/wP0SigCRtES2p0RO3cMndQ/project-details/10735255.
67. Sheehan, “New AI-driven initiative could optimize brain stimulation for treatment resistant depression.”
68. This 10-year analysis of TMS data notes that “[t]he treatment response was tested with a logistic regression model including age, gender, marital status, educational status, and diagnosis,” with no mention of race/ethnicity: Abdullah Bolu et al., “Ten years’ data of Transcranial Magnetic Stimulation (TMS): A naturalistic, observational study outcome in clinical practice,” Psychiatry Research 301 (May 2021): 113986.
69. “Black Mental Health: What You Need to Know,” McLean Hospital, Mass General Brigham, accessed November 22, 2024, mcleanhospital.org/essential/black-mental-health.
70. Ibid.