
Hierarchy and hope: Exploring AI’s role in medicine through a thematic analysis of online discourse

Author

Listed:
  • Johan Pushani
  • Sherwin Rajkumar
  • Alishya Burrell
  • Erin Peebles
  • Amrit Kirpalani

Abstract

The healthcare community remains divided on the benefits of artificial intelligence (AI) in medicine. In this qualitative study, we sought to better understand the perceived opportunities and threats of AI among premedical students, medical students, and physicians. We conducted a thematic analysis on Reddit, a social platform where candid opinions are often shared. Posts from the r/premed, r/medicalschool, and r/medicine subreddits were searched using the terms “AI”, “chatGPT”, “openAI”, and “artificial intelligence”. We analyzed 2,403 comments across 47 threads from December 2022 to August 2023. A coding scheme was developed manually following Braun and Clarke’s (2006) framework, and common themes were extracted. The main themes identified centered on AI enhancement versus replacement. Careers perceived to be lower in the medical social hierarchy were considered most at risk of replacement: AI was thought to first replace non-medical jobs, followed by mid-levels, then primary care and diagnostic specialties, with specialists and surgeons affected last. Some contributors emphasized that AI could never replace a physician’s compassion and nuanced clinical judgment, while others viewed AI as a tool to enhance efficiency, particularly in tasks such as studying, note writing, screening, and triage. Although verifying the credentials of commenters on online forums poses a challenge, platforms like Reddit offer a valuable opportunity to understand nuanced attitudes and perceptions regarding AI in medicine. While AI was generally well received, we identified a key finding: a socially hierarchical, biased form of thinking among healthcare professionals. The perpetuation of this mindset may contribute to role devaluation, mistrust, and collaboration challenges within healthcare teams, ultimately impacting patient care. To fully leverage AI’s potential in medicine, it is critical to acknowledge and address potentially biased perceptions within the healthcare community.

Author summary

Artificial intelligence (AI) tools, like ChatGPT, are rapidly becoming part of healthcare, yet there is still uncertainty about whether AI will primarily support clinicians or replace them. In this study, we analyzed public online discussions among premedical students, medical students, and physicians to better understand how these groups talk about AI in medicine. Using thematic analysis, we reviewed 2,403 comments across 47 Reddit threads from December 2022 to August 2023. We found two dominant themes. First, many users viewed AI as an enhancement tool supporting studying, writing, clinical documentation, and early screening or triage, while emphasizing that AI can be inaccurate and requires human oversight. Second, others emphasized AI as a potential replacement force, often predicting a “hierarchy” of job risk in which roles perceived as lower in the medical social structure were viewed as more vulnerable to automation than specialist physicians. These findings suggest that opinions about AI are shaped not only by technology, but also by professional identity and hierarchy, factors that may influence collaboration and the equitable implementation of AI in healthcare.

Suggested Citation

  • Johan Pushani & Sherwin Rajkumar & Alishya Burrell & Erin Peebles & Amrit Kirpalani, 2026. "Hierarchy and hope: Exploring AI’s role in medicine through a thematic analysis of online discourse," PLOS Digital Health, Public Library of Science, vol. 5(1), pages 1-12, January.
  • Handle: RePEc:plo:pdig00:0001212
    DOI: 10.1371/journal.pdig.0001212

    Download full text from publisher

    File URL: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0001212
    Download Restriction: no

    File URL: https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0001212&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pdig.0001212?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where your library subscription can be used to access this item


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pdig00:0001212. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: digitalhealth (email available below). General contact details of provider: https://journals.plos.org/digitalhealth .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.