Author
Listed:
- Sribala Vidyadhari Chinta
- Zichong Wang
- Avash Palikhe
- Xingyu Zhang
- Ayesha Kashif
- Monique Antoinette Smith
- Jun Liu
- Wenbin Zhang
Abstract
Artificial intelligence (AI) is rapidly advancing in healthcare, enhancing the efficiency and effectiveness of services across specialties including cardiology, ophthalmology, dermatology, and emergency medicine. AI applications have significantly improved diagnostic accuracy, treatment personalization, and patient outcome prediction by leveraging technologies such as machine learning, neural networks, and natural language processing. However, these advances also introduce substantial ethical and fairness challenges, particularly those arising from biases in data and algorithms. Such biases can create disparities in healthcare delivery, affecting diagnostic accuracy and treatment outcomes across demographic groups. This review examines the integration of AI in healthcare, highlights the critical challenges posed by bias, and explores strategies for mitigation. We emphasize the necessity of diverse datasets, fairness-aware algorithms, and regulatory frameworks for ensuring equitable healthcare delivery. The paper concludes with recommendations for future research, advocating interdisciplinary approaches, transparency in AI decision-making, and the development of innovative and inclusive AI applications.

Author summary: In this paper, we investigate the rapid advancement of artificial intelligence (AI) in healthcare, focusing on its role in improving efficiency and effectiveness across specialties such as cardiology, ophthalmology, and dermatology. We note that AI technologies enhance diagnostic accuracy, treatment personalization, and patient outcome prediction. However, these developments also pose significant ethical challenges, particularly concerning biases in data and algorithms that can create disparities in healthcare delivery. We examine the integration of AI in healthcare, highlight the critical challenges related to bias, and explore strategies for mitigation, emphasizing the need for diverse datasets, fairness-aware algorithms, and regulatory frameworks to ensure equitable healthcare delivery. We conclude with recommendations for future research, advocating interdisciplinary approaches, transparency in AI decision-making, and the development of innovative and inclusive AI applications.
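As a concrete illustration of the group fairness checks that fairness-in-healthcare surveys typically discuss (this is not code from the paper; the function names and toy data are hypothetical), here is a minimal sketch of two common metrics, the demographic parity difference and the true-positive-rate gap, applied to a binary classifier's predictions split by a demographic attribute such as sex or race:

```python
# Hypothetical illustration: group fairness checks for a binary clinical
# classifier (1 = flagged for follow-up), comparing two demographic groups.

def rate(values):
    """Mean of a list of 0/1 values; 0.0 if the list is empty."""
    return sum(values) / len(values) if values else 0.0

def demographic_parity_diff(y_pred, group):
    """|P(pred=1 | group=a) - P(pred=1 | group=b)| for the two groups present."""
    a, b = sorted(set(group))
    rate_a = rate([p for p, g in zip(y_pred, group) if g == a])
    rate_b = rate([p for p, g in zip(y_pred, group) if g == b])
    return abs(rate_a - rate_b)

def tpr_gap(y_true, y_pred, group):
    """|TPR_a - TPR_b|: difference in recall among truly positive patients."""
    a, b = sorted(set(group))
    tpr = {}
    for g in (a, b):
        positives = [p for t, p, gg in zip(y_true, y_pred, group) if gg == g and t == 1]
        tpr[g] = rate(positives)
    return abs(tpr[a] - tpr[b])

if __name__ == "__main__":
    # Toy data for two demographic groups "A" and "B".
    y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]
    group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    print("Demographic parity difference:", demographic_parity_diff(y_pred, group))
    print("True-positive-rate gap:", tpr_gap(y_true, y_pred, group))
```

A fairness-aware pipeline of the kind the abstract calls for would report such per-group gaps alongside overall accuracy, so that disparities in diagnostic performance across demographic groups are visible rather than averaged away.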
Suggested Citation
Sribala Vidyadhari Chinta & Zichong Wang & Avash Palikhe & Xingyu Zhang & Ayesha Kashif & Monique Antoinette Smith & Jun Liu & Wenbin Zhang, 2025.
"AI-driven healthcare: Fairness in AI healthcare: A survey,"
PLOS Digital Health, Public Library of Science, vol. 4(5), pages 1-27, May.
Handle:
RePEc:plo:pdig00:0000864
DOI: 10.1371/journal.pdig.0000864