Authors:
- Junaid Muhammad
- Mitra Ghergherehchi
- Shiraz Ali
- Ho Seung Song
- Nasir Rahim
Abstract
Parkinson’s disease (PD) is a neurodegenerative disorder characterized by motor and non-motor symptoms, including tremor, rigidity, and postural instability. Machine learning (ML) models have shown promise for PD diagnosis; however, many existing approaches do not explicitly address fairness and robustness, so these models can produce biased outcomes across demographic groups and remain vulnerable to adversarial attacks. In this study, we used the Parkinson’s Progression Markers Initiative (PPMI) cohort, which includes clinical and demographic information from 1,084 participants spanning diverse age, sex, and racial groups. Our study addresses the key challenge of developing robust and equitable ML models for PD diagnosis and progression prediction. We evaluated the performance of two fairness-optimized classifiers, Random Forest (RF) and Decision Tree (DT). To evaluate model vulnerability, we applied adversarial techniques, specifically label leakage and data poisoning attacks, which simulate intentional or erroneous data alterations that can amplify biases and degrade accuracy. These adversarial manipulations substantially degraded model performance: DT accuracy declined by more than 10% across sensitive groups, and RF accuracy decreased by 20%. Moreover, under attack, fairness metrics also deteriorated, including Statistical Parity Difference (SPD), which measures the gap in positive-prediction rates across demographic groups, and Equal Opportunity Difference (EOD), which measures the gap in true positive rates between groups. This pattern suggests that adversarial perturbations increased bias and widened performance disparities across demographic groups. Our results demonstrate that adversarial attacks increased the incidence of false positives and false negatives, thereby lowering both the accuracy and the fairness of PD diagnostic predictions. These findings underscore the urgent need for robust, fairness-aware defenses in medical AI to mitigate racial, age, and gender disparities and to ensure reliable clinical decision-making.
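For intuition, the minimal sketch below computes the two fairness metrics named in the abstract (SPD and EOD) for a binary sensitive attribute, and simulates a simple label-flipping form of data poisoning against a Random Forest. This is an illustrative assumption, not the paper's actual pipeline: the synthetic data, the `group` attribute, the `flip_labels` helper, and the 20% flip fraction are all hypothetical, and the paper's analysis would instead use the PPMI features with age, sex, and race as the sensitive attributes.

```python
# Minimal sketch (not the paper's pipeline): Statistical Parity Difference (SPD)
# and Equal Opportunity Difference (EOD) for a binary sensitive attribute,
# plus a simple label-flipping poisoning attack. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def statistical_parity_difference(y_pred, group):
    # SPD = P(y_hat = 1 | group = 1) - P(y_hat = 1 | group = 0)
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    # EOD = TPR(group = 1) - TPR(group = 0), where TPR = P(y_hat = 1 | y = 1)
    def tpr(g):
        return y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(1) - tpr(0)

def flip_labels(y, fraction, rng):
    # Label-flipping poisoning: invert a random fraction of training labels.
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = 1 - y[idx]
    return y

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 5))
group = rng.integers(0, 2, size=n)  # hypothetical binary sensitive attribute
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

for fraction in (0.0, 0.2):  # clean training labels vs. 20% poisoned
    clf = RandomForestClassifier(random_state=0).fit(
        X_tr, flip_labels(y_tr, fraction, rng))
    pred = clf.predict(X_te)
    print(f"flip={fraction:.1f}  acc={(pred == y_te).mean():.3f}  "
          f"SPD={statistical_parity_difference(pred, g_te):+.3f}  "
          f"EOD={equal_opportunity_difference(y_te, pred, g_te):+.3f}")
```

Under this setup, comparing the clean and poisoned runs shows the pattern the abstract describes: accuracy drops and the SPD/EOD gaps typically widen, since corrupted labels distort group-conditional error rates unevenly.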
Suggested Citation
Junaid Muhammad & Mitra Ghergherehchi & Shiraz Ali & Ho Seung Song & Nasir Rahim, 2026.
"Trustworthy AI for medical decisions: Adversarially robust and fair machine learning prediction for Parkinson’s disease,"
PLOS ONE, Public Library of Science, vol. 21(2), pages 1-31, February.
Handle: RePEc:plo:pone00:0342062
DOI: 10.1371/journal.pone.0342062