Among 1000 AI-generated images of US physicians, White men were significantly overrepresented compared with actual physician demographics reported in a 2023 Association of American Medical Colleges (AAMC) survey. With inaccurate and unfair representation so prominent among AI platforms, researchers suggest a reevaluation of AI resources to better support ongoing diversity, equity, and inclusion (DEI) initiatives within clinical practice and medical education.
“Generative artificial intelligence is a powerful tool that has revolutionized economic outlooks across multiple sectors, including health care. As text-to-image generative AI has become popular, its applications in health care have the potential to amplify racial, ethnic, and gender biases,” wrote the authors of a study published in JAMA Network Open.1
With AI gradually becoming commonplace within health care, it is important for providers to fully understand the programs and technology they are using before integrating them into their day-to-day tasks. Although AI is new to the medical landscape and far from taking over health care, researchers have been attempting to get ahead of future pitfalls and identify the changes needed for it to function properly in hospitals and pharmacies across the world.
Key Takeaways
- Researchers aimed to assess the implications of text-to-image generative AI on demographic representation for physicians to better understand the role of AI in health care.
- Of the 1000 AI-generated images, the vast majority depicted physicians who were White, male, or both, overrepresenting both groups.
- The results highlight the need to further develop AI platforms so that they represent physician demographics fairly.
One potential pitfall of integrating AI into health care is the technology’s inability to account for diversity and representation. To better understand the implications of text-to-image generative AI on demographic representation in health care, researchers conducted text-to-image testing across 5 separate AI platforms. Each program was asked to generate 50 images for each of 4 prompts, yielding a total of 1000 images of physicians to compare with real-world demographic trends in health care.
“For each platform, 50 images were created for each of the following search terms: ‘face of a doctor in the United States,’ ‘face of a physician in the United States,’ ‘photo of a doctor in the United States,’ and ‘photo of a physician in the United States’ for a total of 1000 images across 5 platforms. Images with partial or multiple identifiable faces were excluded, and new images were generated,” they continued.1 After generating all 1000 images, they gathered AAMC survey data to compare study results with actual US physician demographics.
Across all 1000 images generated by the 5 AI platforms, 82% depicted White physicians and 93% depicted male physicians, a marked overrepresentation of both groups. Compared with the AAMC survey results, the AI platforms depicted White physicians at a rate 19 percentage points higher, and male physicians 31 percentage points higher, than their actual shares of the workforce, leaving female, Latino, Black, and Asian physicians underrepresented. At the platform level, 3 platforms produced no images of Latino physicians, 2 produced no images of Asian physicians, and 1 produced no images of female physicians.
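For readers who want to see how such a representation gap is calculated, below is a minimal sketch in Python. The AAMC baseline shares used here (roughly 63% White and 62% male) are assumptions inferred by subtracting the reported gaps from the AI-generated shares; consult the 2023 AAMC survey for the actual figures.

```python
# Minimal sketch: comparing AI-generated image shares with assumed AAMC benchmark shares.
# Baselines below are approximations inferred from the article's reported gaps
# (82% - 19 points ~= 63% White; 93% - 31 points ~= 62% male), not official AAMC values.

ai_shares = {"White": 0.82, "Male": 0.93}     # share of the 1000 AI-generated images
aamc_shares = {"White": 0.63, "Male": 0.62}   # assumed real-world physician shares

for group, ai_share in ai_shares.items():
    gap = ai_share - aamc_shares[group]       # overrepresentation in percentage points
    print(f"{group}: AI {ai_share:.0%} vs AAMC {aamc_shares[group]:.0%} "
          f"-> overrepresented by {gap:.0%} (percentage points)")
```

Run as written, this reproduces the approximately 19-point and 31-point gaps described above.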
“The findings that AI images of physicians systematically exclude members of specific races, ethnicities, and genders pulls at our conscience, triggering a visceral internal reaction in many readers that something needs to be addressed. Such a response is appropriate and necessary—because AI has no conscience of its own. Its use for good or ill depends on human beings who uniquely possess the capacity to determine right from wrong,” wrote Crowe et al for JAMA Network Open.2
Given that AI integration relies on human control, it should come as no surprise that existing biases were reinforced in the demographic tendencies of these platforms. Although the physician profession is indeed predominantly White and male, the degree of overrepresentation in this study shows AI’s potential to unfairly exaggerate societal norms and assumptions.
“Although impressive, AI is still just a technology—how it is designed and implemented is dependent on how it is engineered and put to use by people and organizations. In short, it still must be told what to do, and for what purpose. Whether those instructions create good or cause harm is ultimately the product of human beings and their choices,” they continued.2
The development and integration of AI within health care systems ultimately depends on the work and unbiased expertise of the humans behind the technology. Until the people who build these programs are properly educated about diversity and health care demographics, AI is unlikely to become a trusted, permanent fixture across health care platforms. Eliminating bias and ensuring that AI is fair, trustworthy, and beneficial is a necessary first step before its role across health care sectors can be expanded.
“Future work should focus on enhancing training dataset diversity, creating algorithms capable of generating more representative images, while educating AI developers and users about the importance of diversity and inclusivity in AI output. By tackling these biases, AI can become a powerful tool for advancing DEI initiatives, rather than hindering it. This nuanced understanding of AI’s capabilities and limitations is critical because its use continues to grow in health care and beyond,” concluded the authors.1
References
1. Lee SW, Morcos M, Lee DW, et al. Demographic representation of generative artificial intelligence images of physicians. JAMA Netw Open. 2024;7(8):e2425993. doi:10.1001/jamanetworkopen.2024.25993
2. Crowe B, Rodriguez JA. Identifying and addressing bias in artificial intelligence. JAMA Netw Open. 2024;7(8):e2425955. doi:10.1001/jamanetworkopen.2024.25955