
Editors’ note: This piece is from Nonprofit Quarterly Magazine’s winter 2024 issue, “Health Justice in the Digital Age: Can We Harness AI for Good?”
During the COVID-19 pandemic, health justice advocacy groups adapted their strategies to the digital realm, leveraging social media platforms, virtual events, and other online resources to raise awareness and organize their efforts.1 These efforts led to structured engagements with state health agencies toward improving care quality and advocating for immediate and systemic reforms.2 Many organizations with brick-and-mortar structures returned to their pre-COVID-19 operations once restrictions subsided, but a significant number of them chose to retain and enhance their digital presence—reflecting a lasting change in the way advocacy is conducted.
This transition to the digital realm facilitated a brave new world of health advocacy, but it also magnified the digital divide that plagues society, exposing deep disparities in access to technology and digital literacy.3 These developments and challenges underscore the importance of building a better understanding of digital technology within the broader context of health justice; they also emphasize the urgent need to address the inequities exacerbated by digital technology, to ensure that technological efforts to improve healthcare are equitable and rooted in promoting health justice.
Equal Access to the Healthcare We Need
Ensuring equitable access to healthcare is a cornerstone of health justice, but accessing quality healthcare can be daunting. Geographic isolation, lack of infrastructure (limited doctors and medical centers, especially in rural areas), and financial constraints (extreme costs of life-saving medications and critical medical services not covered or denied by insurance) often leave vulnerable populations without necessary care. The rise of digital platforms and AI is breaking down these barriers, helping to bring medical services to people who otherwise might not receive them.4
[W]hile digital platforms and AI hold promise for democratizing healthcare and policy advocacy, these technologies do not always work, and they have an especially poor track record for people of color and other marginalized populations.
Telemedicine, for example, powered by digital platforms, is a transformative force in improving access. It brings healthcare directly to patients, regardless of location, and is a game changer for many. By leveraging video conferencing, online chat, and mobile health apps, telemedicine connects patients with healthcare providers in real time. And in areas where healthcare resources are scarce, telemedicine becomes a lifeline. For instance, in regions with a shortage of medical specialists, patients might otherwise have to wait months for an appointment. Telemedicine allows them to access care more quickly, potentially catching and treating conditions before they become severe.5
The Veterans Health Administration has been a pioneer in telemedicine since 2003. Expansion of its tech infrastructure and services over the years proved especially valuable during the pandemic, and it now uses telehealth for 40 percent of its patients, thus demonstrating tech’s potential for improving healthcare access.6 And over a decade ago, California began addressing patients’ needs for telemedicine services with AB 415, the Telehealth Advancement Act of 2011. The act allowed Medi-Cal patients (those qualifying for state medical assistance) to consent to telehealth care and health providers to provide that care.7 More than a decade later, in 2022, California AB 32, introduced by Assemblymember Cecilia Aguiar-Curry, was enacted to make permanent many of the telemedicine flexibilities introduced during the COVID-19 pandemic.8 The bill expanded telemedicine services by allowing for the continued use of various modalities, including audio-only telemedicine, which is crucial for individuals in areas with limited broadband access.9 This flexibility addresses health disparities by ensuring that healthcare services are available to vulnerable populations, such as those living in rural areas or in low-income communities, or people without reliable internet.
Moreover, telemedicine can be particularly transformative for those with chronic illnesses who require regular monitoring. Instead of making frequent trips to a clinic, patients can use telemedicine to check in with their healthcare provider, receive advice on managing their condition, and adjust treatment plans as needed—all from the comfort of their home. This continuity of care is critical in managing chronic diseases and improving long-term health outcomes.10
Health justice advocates must emphasize transparency in data use, accountability in AI decision-making, and inclusivity in access to digital health innovations. By doing so, we can ensure that these tools benefit all communities, particularly those historically marginalized.
AI-driven diagnostic tools further amplify the power of telemedicine. These tools are designed to assist healthcare providers in making accurate diagnoses, especially in settings where access to sophisticated medical equipment is limited. AI-powered apps can analyze images, such as X-rays or photos of skin lesions, and provide preliminary assessments that help doctors make quicker and more accurate diagnoses.11 A healthcare worker in a remote clinic might use a mobile app to capture an image of a patient’s skin condition for an AI-assisted assessment, and AI can also analyze data from wearable devices that monitor vital signs like heart rate, blood pressure, and oxygen levels.12 Such diagnostic tools reduce the need for expensive, bulky medical equipment and make healthcare more accessible to those in underserved areas. Providing accurate, data-driven assessments empowers healthcare providers to deliver high-quality care, even in resource-constrained settings.
But while digital platforms and AI hold promise for democratizing healthcare and policy advocacy, these technologies do not always work, and they have an especially poor track record for people of color and other marginalized populations. In a CNN report/interview on AI, the reporter, Zachary B. Wolf, quotes Reid Blackman, the author of Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI: “The bias issue, or discriminatory AI, is a separate issue….Remember: AI is just software that learns by example. So if you give it examples that contain or reflect certain kinds of biases or discriminatory attitudes…you’re going to get outputs that resemble that.”13
Indeed, these technologies often fail to work as intended for people of color, due to biases in datasets and algorithmic design. Critically, AI-driven diagnostic tools have been found to perform poorly on non-White patients, resulting in inaccurate diagnoses or subpar care.14
Additionally, economic barriers limit the utilization of wearable health devices, further exacerbating inequities. While providing accurate, data-driven assessments can empower healthcare providers to deliver high-quality care in resource-constrained settings, this potential is undermined by the uneven accessibility and reliability of the very technologies that are supposed to drive equity. It is crucial that as we advocate for AI and digital health tools, we also work to address these systemic biases and economic disparities, ensuring that all communities, regardless of race or income, benefit from technological advancements in healthcare.15
Access to accurate and relevant health information is another critical component of health equity. However, barriers such as language, literacy levels, and cultural differences can prevent people from understanding or trusting the information they receive.16 AI and digital platforms are making health information more accessible and personalized. AI can help tailor health information to patients’ needs, taking into consideration language preferences, literacy levels, and cultural contexts.17 An AI-driven health app might provide information on managing diabetes in a user’s native language, for instance, using easy-to-understand language and culturally relevant examples. It might also include visual aids or interactive features that help users better understand their condition and how to manage it.18
The app mySugr, for example, designed for people living with diabetes, helps users track their blood sugar levels, log meals, and understand insulin management.19 The app uses AI to provide personalized insights and adjust to the user’s habits over time. It supports multiple languages and offers user-friendly features for managing diabetes.20 Another app, Lark Health, provides AI-powered coaching for users with chronic conditions like diabetes or hypertension. This app offers 24/7 personalized guidance on managing conditions and tracking diet, sleep, exercise, and medication. It’s designed to be conversational and accessible, offering a user-friendly experience tailored to diverse health needs.21
And digital platforms can disseminate critical health information broadly, ensuring that diverse populations are reached effectively. For instance, a public health campaign on vaccination could use social media, websites, and mobile apps, with AI tailoring messages to target different groups. In one community, the message might highlight the benefits for children’s health, while in another, it might focus on dispelling myths about vaccine safety. However, such efforts would have limitations in the current landscape: the digital divide leaves underserved populations without access, and data biases in AI would result in unequal messaging. Privacy concerns about sensitive health data and the challenge of ensuring cultural sensitivity would further complicate such campaigns. Additionally, overreliance on technology can exclude individuals who depend on more traditional forms of communication, like in-person consultations or local media, which would limit such a campaign’s reach and effectiveness. And finally, there is the constant danger of misinformation that people receive via social media and other sources, further complicated by changes in government and health and information policies and regulations.
If we harness the power of digital platforms and AI responsibly and ethically, however, we could move closer to a world where healthcare is not a privilege reserved for those in urban centers or those who can afford it but rather a fundamental right available to everyone. This right is fundamental to health justice: ensuring that all individuals, regardless of their race, socioeconomic status, or geographic location, can achieve good health.
Health Disparities: Could AI Help?
Health disparities—differences in health outcomes and access to care across different populations—are a significant barrier to achieving health justice. These disparities are often deeply rooted in social determinants of health, such as income, education, housing, and access to nutritious food.22 Addressing these inequities requires a sophisticated understanding of where and why they exist as well as tailored interventions that meet the unique needs of affected communities.23
It is crucial that as we advocate for AI and digital health tools, we also work to address [the] systemic biases and economic disparities, ensuring that all communities, regardless of race or income, benefit from technological advancements in healthcare.
AI could play a transformative role in identifying these disparities and designing personalized health interventions to combat them. One of the most powerful applications of AI in healthcare is its ability to analyze vast amounts of data quickly and accurately. This capability is particularly valuable when identifying health disparities within communities, as such disparities are often complex and multifaceted, making them difficult to detect using traditional methods. Google Health, for example, has developed an AI system to analyze retinal images for diabetic retinopathy, a common complication of diabetes. The AI can detect early signs of this condition with high accuracy, which is especially useful in underserved areas, where access to specialized eye care is limited. By identifying patients at risk earlier, healthcare providers can target interventions more effectively, reducing disparities in eye health outcomes.24
And because AI can process and analyze large datasets spanning several variables, such as geographic location, demographic information, health outcomes, and access to healthcare services, it can identify patterns and correlations by examining these variables together, offering significant advantages over manual analysis.25 It excels at recognizing intricate relationships that manual methods might miss, and it can forecast future health trends, disparities, and risks, enabling timely interventions through a subset of AI known as predictive modeling.26 Additionally, AI can handle large-scale data with scalability and accuracy, reducing human biases and minimizing errors associated with manual processing. This comprehensive, data-driven approach enhances the ability to identify and address health disparities, ultimately leading to more informed and effective healthcare decisions.27
Moreover, AI-driven data analysis extends beyond merely assessing health outcomes to encompass a broader range of factors, including social determinants of health, such as income, education, and housing, that contribute to health disparities.28 Google Health uses AI to analyze datasets that include demographic and environmental factors, identifying how elements like housing instability or educational attainment contribute to disparities in health outcomes. By addressing not just the disparities but also their underlying causes, this approach allows for more effective and comprehensive public health interventions.29
Once health disparities have been identified through data analysis, the next step is to address them through targeted interventions. AI has value in this area, particularly in its ability to personalize healthcare based on individual circumstances, by analyzing data related to the social determinants and tailoring interventions accordingly.30
In addition to tailoring interventions based on social determinants, AI can personalize health education.31 Healthie, for instance, which bills itself as an “all-in-one practice management platform,” uses AI to generate plans and resources based on users’ health goals, conditions, and preferences. It adjusts recommendations and educational content dynamically as users engage with the platform and provide updates about their health status.32
AI’s ability to analyze complex datasets and predict health outcomes could revolutionize how we address health disparities.33 By identifying hidden inequities and tailoring interventions to individual needs, AI could ensure that resources are allocated where they are most needed and that care is delivered effectively and fairly. But while AI shows promise in identifying and addressing health disparities by integrating these complex data, its real-world effectiveness varies. Challenges such as data quality, representativeness, and integration of AI insights into actionable policies need to be addressed. Given the corporate dominance in healthcare and the broader context of capitalism, patriarchy, and White supremacy, AI could well further disenfranchise marginalized communities, if not very carefully managed. Moreover, incomplete and inaccurate health data, which often reflect and amplify existing disparities, pose a major obstacle to realizing AI’s full potential. As such, while AI’s capabilities are promising, there remains significant room for growth in refining these tools and improving their application to fully realize their impact on health disparities—with AI governance and data quality remaining critical concerns.34
The Risks of AI and Digital Health
Ensuring universal accessibility remains challenging, with complex interfaces and language barriers hindering inclusion, particularly for marginalized groups.35 Reliance on digital platforms for health advocacy also exposes activists and marginalized citizens to risks such as data breaches, hacking, and privacy violations.36 And expanding digital health infrastructure will increase surveillance capabilities, raising concerns about privacy.37 Reliance on digital technologies controlled by external entities could lead to private, well-resourced interests controlling access to critical health services: healthcare has increasingly become subject to private equity investment, exposing underserved communities to manipulation and exploitation by commercial interests whose major mission is profit-making.38 And subsidized internet and technology devices, and other seemingly well-intended efforts, could wind up serving corporate interests rather than empowering vulnerable populations.
And while the digital landscape democratizes and facilitates information dissemination, it can also lead to the rapid spread of misinformation and disinformation. Health justice advocates must prioritize media literacy and critical thinking to combat false narratives and maintain the credibility of their campaigns and efforts.
Given the corporate dominance in healthcare and the broader context of capitalism, patriarchy, and White supremacy, AI could well further disenfranchise marginalized communities, if not very carefully managed.
Additionally, AI-driven data collection and analysis will introduce forces that undermine health equity efforts unless stronger countervailing forces disrupt the corporatization, and capitalism more broadly, that shape tech in health. A significant concern is the monetization of healthcare data, as private companies seek to profit from the vast amounts of personal health information collected through AI systems. This drive for profit currently overshadows the ethical considerations of equitable healthcare, as corporations prioritize proprietary algorithms and data ownership over public benefit. Without intervention, these companies will exploit AI for profit, exacerbating existing inequalities under a system shaped by corporatization, capitalism, and White supremacy.
Legal and ethical challenges surrounding data ownership, intellectual property rights, and liability are integral to this discussion, as is the risk that the most innovative tools will remain in the hands of the wealthy and powerful. To prevent these perils from becoming reality, there must be greater regulatory oversight and grassroots movements that disrupt corporate dominance in AI.39 Current initiatives—like open-source AI platforms and advocacy for data justice—are already challenging the corporatization of tech in healthcare, showing that change is possible if we prioritize collective good over profit.
Achieving health justice in an already unjust environment requires both structural reforms and ethical use of AI. To prevent AI from amplifying existing inequities, advocates need to develop policies that prioritize equitable care over profit, enforce transparency, and involve marginalized communities in the application of AI. Strong public oversight is essential to ensure that AI is used to redistribute resources to underserved populations and not to deepen disparities. By aligning AI with health justice principles, it becomes a tool to level the health playing field and address the root causes of already entrenched inequity.
***
As we navigate the brave new world of digital advocacy in pursuit of health justice, the transformative potential of digital platforms and AI technologies is significant. Telemedicine has already expanded healthcare access in rural areas where residents once had to travel hours for basic care. AI-driven diagnostic tools are closing gaps in preventive care, particularly for those in underserved regions.40 And AI’s ability to analyze complex datasets is transforming our understanding of health disparities, enabling the identification of social determinants of health that contribute to unequal outcomes. This shift toward personalized health interventions, such as AI-powered recommendations for managing chronic diseases, moves the needle closer to healthcare as a fundamental right for everyone, not just a privileged few.
A significant concern is the monetization of healthcare data, as private companies seek to profit from the vast amounts of personal health information collected through AI systems. This drive for profit currently overshadows the ethical considerations of equitable healthcare.
However, despite these promising advancements, the journey toward health justice in the digital age comes with challenges. The expansion of digital health infrastructures could exacerbate issues of privacy violations, as seen in cases where health data are sold to third parties without consent.41 Vulnerable communities could be exploited for commercial gain, deepening the divide between those who can protect their digital footprint and those who cannot.
Furthermore, the persistent digital divide—such as the lack of broadband access in low-income neighborhoods—would leave marginalized groups even further behind. In 2020, for example, millions of children in underserved communities struggled to access remote schooling, a stark reminder of how these gaps worsen existing inequities. Finally, the persistent spread of misinformation and disinformation, as seen with COVID-19 vaccine skepticism, also threatens the credibility of health advocacy on digital platforms.
The path to health justice in an era dominated by corporatization, privatization, commercialization, capitalism, and White supremacy is already fraught with obstacles. While AI holds great promise, it could further complicate these challenges. The integration of AI into healthcare risks reinforcing existing inequities and biases if not carefully monitored. However, if we approach these technologies with skepticism, caution, and a commitment to ethical principles, there is hope that they can be harnessed to build a healthcare system that is just, equitable, and accessible for all. The key is ensuring that AI and digital innovations are developed and deployed with a focus on fairness, inclusion, accountability, and the greater good. It will be crucial to maintain vigilance and critically assess the social implications of these technologies. Health justice advocates must emphasize transparency in data use, accountability in AI decision-making, and inclusivity in access to digital health innovations. By doing so, we can ensure that these tools benefit all communities, particularly those historically marginalized.
The phrase “brave new world” carries a double-edged meaning, originating from William Shakespeare’s The Tempest and popularized by Aldous Huxley’s dystopian novel.42 Miranda’s curiosity in The Tempest symbolizes exploration and hope for a better future. However, Huxley’s Brave New World is a cautionary tale of a society in which progress and innovation come at the expense of individual freedom, authenticity, and justice. In the context of the digital age and artificial intelligence, this tension highlights the need for a balanced approach, embracing the transformative potential of digital and AI technologies while remaining vigilant against their unintended consequences.
Digital Activism in Action
Dignity Alliance Massachusetts
Dignity Alliance Massachusetts (DAM) is a leading advocate for inclusive health policies that prioritize vulnerable populations, particularly older adults and those with disabilities, amid the digital transformation. Established in 2020 during the COVID-19 pandemic, DAM initially aimed to amplify the voices of individuals in institutional care, especially in nursing homes. Over time, it evolved into a dynamic organization, functioning much like a decentralized autonomous organization. During the pandemic, DAM effectively leveraged digital tools such as social media, virtual events, and online campaigns to advocate for health justice, particularly in long-term care settings. DAM used webinars, petitions, and digital platforms to inform the public and policymakers about critical issues, such as nursing home safety, vaccination compliance, and transparency in ownership changes. Organizations like DAM are essential in advocating for a healthcare system that is both equitable and inclusive, pushing for policies that bridge the digital divide and ensure that digital health innovations reach underserved populations. Their advocacy promotes inclusive policies that prioritize the fair distribution of these innovations, ensuring that everyone, regardless of socioeconomic status, can benefit from modern healthcare.
By advocating for equitable access to these digital health tools, DAM reinforces that healthcare is a fundamental right, regardless of geographical location. The work of organizations like DAM highlights the transformative potential of digital platforms and technologies that can revolutionize healthcare by improving access, reducing health disparities, and empowering marginalized communities. However, DAM’s work underscores the importance of remaining vigilant about the potential risks associated with digital tools, such as increased surveillance, privacy violations, and the exacerbation of existing inequalities; DAM advocates for transparency, accountability, and inclusivity to ensure that its work benefits all communities, particularly those historically marginalized.
AARP
AARP’s extensive digital platform is a powerful tool for running robust virtual advocacy campaigns focused on issues affecting older adults, and it is a testament to the strength of the older adult community. AARP frequently launches online petitions on healthcare issues to mobilize public support for legislative changes and address key concerns such as prescription drug costs and accessibility. It promotes these efforts through its website and social media channels, recognizing the vital role of older adults in shaping healthcare policies.
These petitions, some of which have led to significant policy changes, demonstrate public support for specific reforms and pressure legislators to act. AARP uses various social media platforms, including Facebook, X, and Instagram, to disseminate information, raise awareness, and engage followers in advocacy efforts. It also conducts campaigns around digital misinformation. Social media campaigns increase visibility and reach, engaging a wide audience and fostering a community of advocates who can share and amplify AARP’s messages.
AARP hosts virtual town halls and webinars to inform members and the general public about important policy issues, upcoming legislation, and advocacy strategies. These events often feature expert speakers, including policymakers and advocacy leaders. The virtual events provide a platform for direct interaction between AARP members and policymakers, allowing for real-time dialogue, questions, and feedback. They also help educate and mobilize supporters to participate in advocacy efforts. AARP uses email campaigns and digital newsletters to inform its members about current issues, upcoming advocacy actions, and ways to get involved. These communications often include calls to action, such as contacting legislators or participating in online forums. Email campaigns and newsletters help keep members engaged and informed, encouraging them to take action on critical issues and stay updated on advocacy progress.
AARP’s interactive platforms allow members to easily contact their representatives, track legislation, and participate in advocacy campaigns. By providing tools for direct advocacy, AARP makes it easier for individuals to engage with the legislative process and contribute to advocacy efforts.
National Disability Rights Network
The National Disability Rights Network (NDRN) advances the rights of individuals with disabilities through traditional and virtual advocacy efforts. NDRN’s online and digital work complements its traditional efforts, bolstering a movement that rallies support for disability rights legislation and reforms. Focusing on issues such as improving accessibility, funding disability services, and opposing harmful policy changes, NDRN uses digital platforms to move beyond traditional methods, build public support, and pressure policymakers to address disability-related issues and enact supportive legislation. NDRN also conducts podcasts and workshops educating the public, advocacy groups, and policymakers about disability rights, legal issues, and policy developments. As with other organizations, much of this work occurred online and remotely during COVID-19.
NDRN, like AARP, uses a variety of digital tools, including social media platforms such as X, Facebook, and Instagram, as well as podcasts and webinars, to spread awareness about disability rights, share updates on advocacy efforts, and engage with a broader audience. It also uses these platforms to communicate with constituents. Social media campaigns help NDRN reach a wide audience, build community support, and drive engagement with disability rights issues. By amplifying advocacy messages and initiatives, NDRN’s digital efforts allow more constituents to access resources, participate in advocacy actions, and stay informed about disability rights issues. The platforms often include tools for contacting legislators, submitting comments, and joining advocacy networks.
Online communication serves as a powerful tool for fostering collaboration among disability advocacy organizations and networks. By addressing systemic issues and promoting shared advocacy goals, these organizations are able to strengthen their collective efforts. Joint efforts, virtual coalition meetings, and collaborative initiatives are all part of this unified approach. Such collaborations, which combine resources, expertise, and networks, demonstrate the strength and unity of the disability advocacy community and lead to more effective, unified actions in support of disability rights.
National Patient Advocate Foundation
The National Patient Advocate Foundation (NPAF) promotes patient-centered care and health equity, much like AARP and NDRN. NPAF uses social media and its websites to drive support for policy changes. The campaigns address issues like insurance coverage, access to affordable medications, and improvements in healthcare delivery. These digital efforts have had a significant impact on influencing policymakers and stakeholders to prioritize patient-centered reforms and address critical issues in healthcare. They have also helped identify key issues and develop policy recommendations, tailoring advocacy efforts to address real-world challenges and improve patient outcomes.
NPAF actively uses social media platforms like X, Facebook, and LinkedIn to raise awareness about patient rights and share updates on its advocacy work. By engaging with its audience through social media and its website, NPAF increases its visibility, fosters a community of supporters, amplifies patients’ voices, and contributes to more effective advocacy efforts. These efforts include resources for contacting legislators and information about ongoing policy issues.
By equipping individuals with resources, digital platforms empower patients and advocates to participate actively in the legislative process and drive meaningful change. These platforms give a voice to those who may not have had one in the past, allowing them to share their experiences and advocate for change. NPAF also collaborates with other organizations and coalitions through its online presence and resources to address shared goals and amplify advocacy efforts. Such collaborations, boosted by digital resources and coordinated outreach, enhance the reach and effectiveness of advocacy by combining resources and networks, leading to a greater impact on policy and practice.
Notes:
- See, for example, Noha Alghamdi and Saeed M. Alghamdi, “The Role of Digital Technology in Curbing COVID-19,” International Journal of Environmental Research and Public Health 19, no. 14 (July 2022): 8287; and “The Rise of Digital Advocacy During COVID-19,” Voices from the Community (blog), Christopher & Dana Reeve Foundation, accessed December 13, 2024, blog.christopherreeve.org/en/the-rise-of-digital-advocacy-during-covid-19.
- From the author’s own observation and experience. And see Lisa Klein Vogel and Vee Yeo, “‘It’s Not a Cookie-Cutter Scenario Anymore’: The COVID-19 Pandemic and Transitioning to Virtual Work,” Journal of Policy Practice and Research 3 (March 2022): 132–72; Peter Lee et al., “Digital Health COVID-19 Impact Assessment: Lessons Learned and Compelling Needs,” Discussion Paper, National Academy of Medicine, January 18, 2022, nam.edu/digital-health-covid-19-impact-assessment-lessons-learned-and-compelling-needs/; Kristin McDonald, “COVID-19’s impact on advocacy: Virtual versus in-person meetings,” Bulletin, American College of Surgeons, August 4, 2021, www.facs.org/for-medical-professionals/news-publications/news-and-articles/bulletin/2021/08/covid-19s-impact-on-advocacy-virtual-versus-in-person-meetings/; and Legislative Advocacy During the COVID-19 Pandemic (Boston, MA: Community Catalyst, 2021).
- See, for example, Chukwuma Eruchalu et al., “The Expanding Digital Divide: Digital Health Access Inequities during the COVID-19 Pandemic in New York City,” Journal of Urban Health 98, no. 2 (April 2021): 183–86.
- Alghamdi and Alghamdi, “The Role of Digital Technology in Curbing COVID-19”; Junhan Chen and Yuan Wang, “Social Media Use for Health Purposes: Systematic Review,” Journal of Medical Internet Research 23, no. 5 (May 2021): e17917; and “COVID-19 Virtual Events: Resources for Research, Practice, and Teaching,” Columbia University Department of Medical Humanities and Ethics, accessed December 1, 2024, www.mhe.cuimc.columbia.edu/ethics/resources/covid-19-ethics-justice-resources/covid-19-virtual-events.
- Brian William Hasselfeld, “Benefits of Telemedicine,” Johns Hopkins Medicine, accessed December 1, 2024, hopkinsmedicine.org/health/treatment-tests-and-therapies/benefits-of-telemedicine; and “Why use telehealth?,” Department of Health and Human Services, last modified February 29, 2024, telehealth.hhs.gov/patients/why-use-telehealth.
- Leonie Heyworth, Nilesh Shah, and Kevin Galpin, “20 Years of Telehealth in the Veterans Health Administration: Taking Stock of Our Past and Charting Our Future,” Supplement 1, Journal of General Internal Medicine 39 (February 2024): 5–8.
- “Telehealth,” California State Council on Developmental Disabilities, accessed December 5, 2024, scdd.ca.gov/wp-content/uploads/sites/33/2016/10/Telehealth-2-18-16.pdf.
- Cecilia Aguiar-Curry, Assembly Majority Leader, District 4, “Assemblymember Cecilia Aguiar-Curry Historic Telehealth Access Bill Passes Assembly, 78-0 Bipartisan Vote,” news release, June 2, 2021, a04.asmdc.org/press-releases/20210602-assemblymember-cecilia-aguiar-curry-historic-telehealth-access-bill-passes; and “AB-32 (2021–2022),” California Legislative Information, September 26, 2022, leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202120220AB32.
- Ibid.
- Carolina Wannheden et al., “Digital Health Technologies Enabling Partnerships in Chronic Care Management: Scoping Review,” Journal of Medical Internet Research 24, no. 8 (August 2022): e38980; and Victor C. Ezeamii et al., “Revolutionizing Healthcare: How Telemedicine Is Improving Patient Outcomes and Expanding Access to Care,” Cureus 16, no. 7 (July 2024): e63881.
- Zhouxiao Li et al., “Artificial Intelligence in Dermatology Image Analysis: Current Developments and Future Trends,” Journal of Clinical Medicine 11, no. 22 (November 2022): 6826; and Mohamed Khalifa and Mona Albadawy, “AI in diagnostic imaging: Revolutionising accuracy and efficiency,” Computer Methods and Programs in Biomedicine Update 5 (2024): 100146.
- Anna Smak Gregoor et al., “Artificial intelligence in mobile health for skin cancer diagnostics at home (AIM HIGH): a pilot feasibility study,” eClinicalMedicine 60 (June 2023): 102019; and Shaghayegh Shajari, “The Emergence of AI-Based Wearable Sensors for Digital Health Technology: A Review,” Sensors (Basel) 23, no. 23 (November 2023): 9498.
- Reid Blackman, “AI can be racist, sexist and creepy. What should we do about it?,” interview by Zachary B. Wolf, CNN, March 18, 2023, www.cnn.com/2023/03/18/politics/ai-chatgpt-racist-what-matters/index.html. And see Reid Blackman, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI (Brighton, MA: Harvard Business Review Press, 2022).
- “Racial Bias in Health Care Artificial Intelligence,” National Institute for Healthcare Management Foundation, September 30, 2021, nihcm.org/publications/artificial-intelligences-racial-bias-in-health-care.
- Arianna Johnson, “Racism And AI: Here’s How It’s Been Criticized For Amplifying Bias,” Forbes, May 25, 2023, forbes.com/sites/ariannajohnson/2023/05/25/racism-and-ai-heres-how-its-been-criticized-for-amplifying-bias/; and Stefano Canali, Viola Schiaffonati, and Andrea Aliverti, “Challenges and recommendations for wearable devices in digital health: Data quality, interoperability, health equity, fairness,” PLOS Digital Health 1, no. 10 (October 2022): e0000104.
- Christina Taylan and Lutz Weber, “‘Don’t let me be misunderstood’: communication with patients from a different cultural background,” Pediatric Nephrology 38, no. 3 (August 2022): 643–49.
- Nivisha Parag, Rowen Govender, and Saadiya Bibi Ally, “Promoting Cultural Inclusivity in Healthcare Artificial Intelligence: A Framework for Ensuring Diversity,” Health Management, Policy and Innovation 8, no. 3 (2023).
- Karen Feldscher, “Need help managing diabetes? These students made an app for that,” Harvard T.H. Chan School of Public Health, May 13, 2022, hsph.harvard.edu/news/features/need-help-managing-diabetes-these-students-made-an-app-for-that/; and Medtronic, “Artificial Intelligence-Powered Sugar.IQ(TM) Diabetes Management App Developed by Medtronic and IBM Watson Health Now Commercially Available,” news release, June 21, 2018, news.medtronic.com/2018-06-22-Artificial-Intelligence-Powered-Sugar-IQ-TM-Diabetes-Management-App-Developed-by-Medtronic-and-IBM-Watson-Health-Now-Commercially-Available.
- “Your diabetes data, simply there,” mySugr, accessed November 29, 2024, mysugr.com/en-us.
- Mike Hoskins, “mySugr App Review: Taming Your Diabetes Monster,” Healthline, September 6, 2021, healthline.com/diabetesmine/mysugr-app-review-taming-diabetes-monster; and “Lose weight and prevent diabetes from anywhere,” Lark Health, accessed November 29, 2024, www.lark.com/signup/for-individuals.
- Natalie Stein, “Artificial Intelligence—AI | Lark Health,” Lark Health, October 30, 2018, lark.com/resources/lark-health-ai-artificial-intelligence.
- Christina Harrington et al., “Working at the Intersection of Race, Disability, and Accessibility,” Faculty Conference Papers and Presentations 82 (2023): digitalcommons.bucknell.edu/fac_conf/82/; and Amelia Whitman et al., Addressing Social Determinants of Health: Examples of Successful Evidence-Based Strategies and Current Federal Efforts (Washington, DC: Assistant Secretary for Planning and Evaluation, Office of Health Policy, U.S. Department of Health and Human Services, 2022).
- James Weinstein et al., eds., Communities in Action: Pathways to Health Equity (Washington, DC: National Academies Press, 2017).
- “Using AI to prevent blindness,” Google Health, accessed November 29, 2024, health.google/caregivers/arda/.
- Shuroug Alowais et al., “Revolutionizing healthcare: the role of artificial intelligence in clinical practice,” BMC Medical Education 23, no. 1 (September 2023): 689.
- Wullianallur Raghupathi and Viju Raghupathi, “Big data analytics in healthcare: promise and potential,” Health Information Science and Systems 2, no. 3 (February 2014): 1–10; and Seema Yelne et al., “Harnessing the Power of AI: A Comprehensive Review of Its Impact and Challenges in Nursing Science and Healthcare,” Cureus 15, no. 11 (November 2023): e49252.
- Yelne et al., “Harnessing the Power of AI.”
- Whitman et al., Addressing Social Determinants of Health.
- “Helping billions of people be healthier,” Google Health, accessed November 29, 2024, health.google.
- Sebastian Garcia-Saiso et al., “Artificial Intelligence as a Potential Catalyst to a More Equitable Cancer Care,” JMIR Cancer 10 (2024): e57276.
- Elizabeth Gehrman, “How Generative AI Is Transforming Medical Education,” Harvard Medicine, October 2024, hms.harvard.edu/articles/how-generative-ai-transforming-medical-education; and Mohammad Muzaffar Mir et al., “Application of Artificial Intelligence in Medical Education: Current Scenario and Future Perspectives,” Journal of Advances in Medical Education & Professionalism 11, no. 3 (July 2023): 133–40.
- “Everything you need to deliver ,” Healthie, accessed November 29, 2024, www.gethealthie.com/.
- Kelly DuBois, “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again,” review of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, by Eric Topol, Perspectives on Science and Christian Faith 71, no. 3 (September 2019).
- Emma Gurevich, Basheer El Hassan, and Christo El Morr, “Equity within AI systems: What can health leaders expect?,” Healthcare Management Forum 36, no. 2 (October 2022): 119–24; and Jessica Morley et al., “Governing Data and Artificial Intelligence for Health Care: Developing an International Understanding,” JMIR Formative Research 6, no. 1 (January 2022): e31623.
- Priya Bathija and Sarah Swank, “Digital Health Equity: Narrowing the Digital Divide by Ensuring a Fair, Equitable, and Just Opportunity to Access Digital Health,” Journal of Health and Life Sciences Law 16, no. 1 (May 2022).
- Adil Hussain Seh et al., “Healthcare Data Breaches: Insights and Implications,” Healthcare (Basel) 8, no. 2 (May 2020): 133.
- Lorie Donelle et al., “Use of digital technologies for public health surveillance during the COVID-19 pandemic: A scoping review,” Digital Health 9 (May 2023): 1–22.
- Jane Zhu and Zirui Song, “The Growth of Private Equity in US Health Care: Impact and Outlook,” Expert Voices, National Institute for Healthcare Management Foundation, accessed December 3, 2024, nihcm.org/publications/the-growth-of-private-equity-in-us-health-care-impact-and-outlook.
- Inderpreet Sawhney, Delia Ferreira Rubio, and Houssam Al Wazzan, “Why corporate integrity is key to shaping the use of AI,” World Economic Forum, October 14, 2024, weforum.org/stories/2024/10/corporate-integrity-future-ai-regulation/.
- See, for example, Sebastian Garcia-Saiso et al., “Artificial Intelligence as a Potential Catalyst to a More Equitable Cancer Care,” JMIR Cancer 10 (2024): 1–8.
- Jennifer Lubell, “Third-Party Data Tracking on Hospital Websites Raises Patient Privacy Concerns,” Journal of AHIMA, American Health Information Management Association, August 21, 2023, journal.ahima.org/page/third-party-data-tracking-on-hospital-websites-raises-patient-privacy-concerns.
- William Shakespeare, The Tempest, Folger Shakespeare Library, 1. 217, accessed November 29, 2024, https://www.folger.edu/explore/shakespeares-works/the-tempest/read/5/1/; and Aldous Huxley, Brave New World (London: Chatto & Windus, 1932).