
Identifying bias in models that detect vocal fold paralysis from audio recordings using explainable machine learning and clinician ratings

Author

Listed:
  • Daniel M Low
  • Vishwanatha Rao
  • Gregory Randolph
  • Phillip C Song
  • Satrajit S Ghosh

Abstract

Detecting voice disorders from voice recordings could allow for frequent, remote, and low-cost screening before costly clinical visits and a more invasive laryngoscopy examination. Our goals were to detect unilateral vocal fold paralysis (UVFP) from voice recordings using machine learning, to identify which acoustic variables were important for prediction in order to increase trust, and to determine model performance relative to clinician performance. Patients with UVFP confirmed through endoscopic examination (N = 77) and controls with normal voices matched for age and sex (N = 77) were included. Voice samples were elicited by reading the Rainbow Passage and sustaining phonation of the vowel "a". Four machine learning models of differing complexity were used, and SHapley Additive exPlanations (SHAP) was used to identify important features. The highest median bootstrapped ROC AUC score was 0.87, which exceeded clinicians' performance (range: 0.74–0.81) on the same recordings. Recording durations differed between UVFP recordings and controls because of how the data were originally processed and stored, and we show that duration alone can classify the two groups. Counterintuitively, many UVFP recordings also had higher intensity than control recordings, even though UVFP patients tend to have weaker voices, revealing a dataset-specific bias that we mitigate in an additional analysis. We demonstrate that recording biases in audio duration and intensity created dataset-specific differences between patients and controls, which the models exploited to improve classification. Clinicians' ratings provide further evidence that patients were over-projecting their voices and being recorded at a higher signal amplitude than controls. Notably, after matching audio durations and removing intensity-related variables to mitigate these biases, the models still achieved similarly high performance.
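The bias check described above can be sketched as follows. This is an illustrative example on synthetic data, not the authors' code: it asks whether a single nuisance variable, recording duration, can by itself classify patients versus controls, scored with a bootstrapped ROC AUC as in the study. All durations and parameters here are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic durations (seconds): suppose patient recordings were stored
# slightly longer than control recordings, a dataset-specific artifact.
dur_patients = rng.normal(6.0, 1.0, 77)
dur_controls = rng.normal(4.5, 1.0, 77)
X = np.concatenate([dur_patients, dur_controls]).reshape(-1, 1)
y = np.concatenate([np.ones(77), np.zeros(77)])

def bootstrapped_auc(X, y, n_boot=200):
    """Median ROC AUC over bootstrap resamples of a duration-only model."""
    aucs = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.choice(n, n, replace=True)
        if len(np.unique(y[idx])) < 2:  # need both classes to fit
            continue
        clf = LogisticRegression().fit(X[idx], y[idx])
        aucs.append(roc_auc_score(y, clf.predict_proba(X)[:, 1]))
    return float(np.median(aucs))

auc = bootstrapped_auc(X, y)
# An AUC well above 0.5 flags duration as a confound the models could exploit.
```

If a variable that should carry no clinical information scores well above chance, any model given access to it (or to features correlated with it) can appear to "detect" the disorder without using the voice at all.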
We provide a set of recommendations to avoid bias when building and evaluating machine learning models for screening in laryngology.

Author summary: The diagnosis of certain voice disorders can involve costly and time-consuming methods such as video laryngoscopy. An alternative is to screen using machine learning models that predict risk from just a short audio recording made on a mobile device. However, these models can be biased if they pick up recording idiosyncrasies of a given dataset that would not generalize to new samples collected under a different recording protocol, making the model unusable. These types of biases are not always evaluated in clinical machine learning studies. We found that a model we trained to distinguish unilateral vocal fold paralysis from healthy voices using brief audio recordings was biased: patients with softer voices may have been induced to over-project their voices to obtain clearer recordings, or the microphone gain may have been increased only for these participants, creating a bias that is unlikely to generalize. We demonstrate how to detect such biases using explainable machine learning and clinician ratings, and how to potentially mitigate their effect. We also provide general recommendations for identifying and mitigating bias in machine learning models that use audio recordings for screening in laryngology.
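The two mitigation steps described in the abstract can be sketched as below. This is a minimal illustration under assumed interfaces, not the authors' pipeline: (1) truncate every waveform to a common length so duration cannot separate the groups, and (2) drop acoustic features strongly correlated with intensity. The feature names, correlation threshold, and toy data are all assumptions.

```python
import numpy as np

def match_duration(waveforms):
    """Truncate all recordings to the shortest one so duration is uniform."""
    n_min = min(len(w) for w in waveforms)
    return [w[:n_min] for w in waveforms]

def drop_intensity_features(features, names, intensity, threshold=0.5):
    """Remove feature columns whose |correlation| with intensity exceeds
    the threshold (the 0.5 cutoff is an assumption for illustration)."""
    keep = []
    for j in range(features.shape[1]):
        r = np.corrcoef(features[:, j], intensity)[0, 1]
        if abs(r) <= threshold:
            keep.append(j)
    return features[:, keep], [names[j] for j in keep]

# Toy example: two features, one tracking intensity, one independent of it.
rng = np.random.default_rng(1)
intensity = rng.normal(60, 5, 100)  # dB-like intensity values
feats = np.column_stack([
    intensity + rng.normal(0, 1, 100),  # proxy for intensity (confounded)
    rng.normal(0, 1, 100),              # unrelated acoustic feature
])
kept, kept_names = drop_intensity_features(
    feats, ["loudness_proxy", "jitter"], intensity
)
# Only the intensity-independent feature survives the filter.
```

The key design point is that both steps operate purely on recording metadata and feature statistics, so they can be applied before model training to any dataset suspected of carrying the same biases.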

Suggested Citation

  • Daniel M Low & Vishwanatha Rao & Gregory Randolph & Phillip C Song & Satrajit S Ghosh, 2024. "Identifying bias in models that detect vocal fold paralysis from audio recordings using explainable machine learning and clinician ratings," PLOS Digital Health, Public Library of Science, vol. 3(5), pages 1-27, May.
  • Handle: RePEc:plo:pdig00:0000516
    DOI: 10.1371/journal.pdig.0000516

    Download full text from publisher

    File URL: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000516
    Download Restriction: no

    File URL: https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0000516&type=printable
    Download Restriction: no



