Authors
Listed:
- Miroslav Nikolić
(Open Institute of Technology, University of Malta, XBX 1425 Ta’ Xbiex, Malta)
- Danilo Nikolić
(Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia)
- Miroslav Stefanović
(Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia)
- Sara Koprivica
(Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia)
- Darko Stefanović
(Faculty of Technical Sciences, University of Novi Sad, 21000 Novi Sad, Serbia)
Abstract
Probability calibration is commonly used to enhance the reliability and interpretability of probabilistic classifiers, yet its potential for reducing algorithmic bias remains under-explored. In this study, the role of probability calibration techniques in mitigating bias associated with a sensitive attribute, specifically country of origin, is investigated within binary classification models. Using a real-world lead-generation dataset of 2853 observations and 8 features, characterized by substantial class imbalance (the positive class represents 1.4% of observations), several binary classification models were evaluated and the best-performing model was selected as the baseline for further analysis. The evaluated models included Binary Logistic Regression with polynomial degrees of 1, 2, 3, and 4, Random Forest, and XGBoost. Three widely used calibration methods, Platt scaling, isotonic regression, and temperature scaling, were then applied to assess their impact on both the probabilistic accuracy and the fairness metrics of the best-performing model. The findings suggest that post hoc calibration can effectively reduce the influence of sensitive features on predictions, improving fairness without compromising overall classification performance. This study demonstrates the practical value of incorporating calibration as a straightforward and effective fairness intervention within machine learning workflows.
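The post hoc calibration setup the abstract describes can be sketched with scikit-learn. This is an illustrative reconstruction only, not the authors' exact pipeline: the synthetic data, the Random Forest hyperparameters, and the evaluation via Brier score are assumptions chosen to mirror the stated dataset shape (2853 × 8, ~1.4% positives) and two of the three calibration methods (Platt scaling and isotonic regression; temperature scaling has no built-in sklearn implementation and is omitted here).

```python
# Hedged sketch: post hoc calibration of an imbalanced binary classifier.
# All names, parameters, and the synthetic data are illustrative assumptions,
# not the study's actual configuration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the lead-generation data: 2853 rows, 8 features,
# roughly 1.4% positive class.
X, y = make_classification(
    n_samples=2853, n_features=8, weights=[0.986], random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

# Uncalibrated baseline model.
base = RandomForestClassifier(n_estimators=200, random_state=42)
base.fit(X_train, y_train)

# Platt scaling ("sigmoid") and isotonic regression as post hoc calibrators,
# each fitted with internal cross-validation on the training split.
platt = CalibratedClassifierCV(
    RandomForestClassifier(n_estimators=200, random_state=42),
    method="sigmoid", cv=3,
).fit(X_train, y_train)
iso = CalibratedClassifierCV(
    RandomForestClassifier(n_estimators=200, random_state=42),
    method="isotonic", cv=3,
).fit(X_train, y_train)

# Compare probabilistic accuracy; lower Brier score means better calibration.
for name, model in [("uncalibrated", base), ("platt", platt), ("isotonic", iso)]:
    p = model.predict_proba(X_test)[:, 1]
    print(f"{name}: Brier score = {brier_score_loss(y_test, p):.4f}")
```

In a fairness analysis like the one described, the same comparison would additionally be broken down per group of the sensitive attribute (e.g. country of origin) to check whether calibration narrows the gap between groups.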
Suggested Citation
Miroslav Nikolić & Danilo Nikolić & Miroslav Stefanović & Sara Koprivica & Darko Stefanović, 2025.
"Mitigating Algorithmic Bias Through Probability Calibration: A Case Study on Lead Generation Data,"
Mathematics, MDPI, vol. 13(13), pages 1-23, July.
Handle:
RePEc:gam:jmathe:v:13:y:2025:i:13:p:2183-:d:1694444