
Towards fair AI: a review of bias and fairness in machine intelligence

Author

Listed:
  • Venkatesha Kurumayya

Abstract

As artificial intelligence (AI) and machine learning (ML) have grown in popularity over the past few decades, they have been applied to a multitude of fields. When these systems make decisions, bias and fairness become central concerns for researchers and engineers, so the potential harms of the data and algorithms chosen for an AI application must be weighed carefully. Against this background, this paper reviews the definition of bias, illustrated with real-life examples in ML; the different types of bias; methods for mitigating bias; fairness in ML; tools for detecting bias in ML algorithms; metrics for measuring fairness; and datasets commonly used by researchers to study the fairness of ML algorithms. The main contribution of this paper is to present these concepts of bias and fairness, together with the tools, datasets, and metrics, to both technical and non-technical audiences such as philosophers, policymakers, lawyers, and social scientists, supporting further study and research on ML algorithms in their own domains. Some future research directions in the area of fairness in ML algorithms are also highlighted.
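To make the abstract's mention of "metrics for fairness measurement" concrete, here is a minimal illustrative sketch (not taken from the paper) of two widely used group-fairness metrics for a binary classifier: demographic parity difference and equal opportunity difference. The variable names and toy data are assumptions for illustration only.

```python
# Illustrative sketch of two common group-fairness metrics for a binary
# classifier, computed over a binary protected attribute (group 0 vs. group 1).

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between group 1 and group 0."""
    def rate(g):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(preds) / len(preds)
    return rate(1) - rate(0)

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in true-positive rates between group 1 and group 0."""
    def tpr(g):
        # Predictions for members of group g whose true label is positive.
        hits = [p for t, p, grp in zip(y_true, y_pred, group) if grp == g and t == 1]
        return sum(hits) / len(hits)
    return tpr(1) - tpr(0)

# Toy data: true labels, model predictions, and a binary protected attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_difference(y_pred, group))           # → 0.25
print(equal_opportunity_difference(y_true, y_pred, group))
```

A value of 0 for either metric indicates parity between the two groups; here, group 1 receives positive predictions 25 percentage points more often than group 0. Bias-auditing toolkits of the kind surveyed in the paper expose metrics like these as library functions.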

Suggested Citation

  • Venkatesha Kurumayya, 2025. "Towards fair AI: a review of bias and fairness in machine intelligence," Journal of Computational Social Science, Springer, vol. 8(3), pages 1-26, August.
  • Handle: RePEc:spr:jcsosc:v:8:y:2025:i:3:d:10.1007_s42001-025-00386-8
    DOI: 10.1007/s42001-025-00386-8

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s42001-025-00386-8
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s42001-025-00386-8?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As the access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Yan Gao & Yan Cui, 2020. "Deep transfer learning for reducing health care disparities arising from biomedical data inequality," Nature Communications, Nature, vol. 11(1), pages 1-8, December.
    2. Yan Gao & Yan Cui, 2020. "Author Correction: Deep transfer learning for reducing health care disparities arising from biomedical data inequality," Nature Communications, Nature, vol. 11(1), pages 1-1, December.
    3. Nathan Kallus & Xiaojie Mao & Angela Zhou, 2022. "Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination," Management Science, INFORMS, vol. 68(3), pages 1959-1981, March.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Siqiong Yao & Fang Dai & Peng Sun & Weituo Zhang & Biyun Qian & Hui Lu, 2024. "Enhancing the fairness of AI prediction models by Quasi-Pareto improvement among heterogeneous thyroid nodule population," Nature Communications, Nature, vol. 15(1), pages 1-13, December.
    2. Yan Gao & Yan Cui, 2022. "Clinical time-to-event prediction enhanced by incorporating compatible related outcomes," PLOS Digital Health, Public Library of Science, vol. 1(5), pages 1-10, May.
    3. Hu, Chenxi & Zhang, Jun & Yuan, Hongxia & Gao, Tianlu & Jiang, Huaiguang & Yan, Jing & Wenzhong Gao, David & Wang, Fei-Yue, 2022. "Black swan event small-sample transfer learning (BEST-L) and its case study on electrical power prediction in COVID-19," Applied Energy, Elsevier, vol. 309(C).
    4. Tsegahun Manyazewal & Gail Davey & Charlotte Hanlon & Melanie J. Newport & Michael Hopkins & Jenni Wilburn & Sahar Bakhiet & Leon Mutesa & Agumasie Semahegn & Esubalew Assefa & Abebaw Fekadu, 2024. "Innovative technologies to address neglected tropical diseases in African settings with persistent sociopolitical instability," Nature Communications, Nature, vol. 15(1), pages 1-17, December.
    5. Yanqin Fan & Brendan Pass & Xuetao Shi, 2025. "Partial Identification in Moment Models with Incomplete Data via Optimal Transport," Papers 2503.16098, arXiv.org.
    6. Yechan Park & Yuya Sasaki, 2024. "The Informativeness of Combined Experimental and Observational Data under Dynamic Selection," Papers 2403.16177, arXiv.org.
    7. Nathan Kallus, 2023. "Treatment Effect Risk: Bounds and Inference," Management Science, INFORMS, vol. 69(8), pages 4579-4590, August.
    8. Cohle, Zachary & Ortega, Alberto, 2023. "The effect of the opioid crisis on patenting," Journal of Economic Behavior & Organization, Elsevier, vol. 214(C), pages 493-521.
    9. Andrew Bennett & Nathan Kallus & Xiaojie Mao & Whitney Newey & Vasilis Syrgkanis & Masatoshi Uehara, 2022. "Inference on Strongly Identified Functionals of Weakly Identified Functions," Papers 2208.08291, arXiv.org, revised Jun 2023.
    10. Benjamin Lu & Jia Wan & Derek Ouyang & Jacob Goldin & Daniel E. Ho, 2024. "Quantifying the Uncertainty of Imputed Demographic Disparity Estimates: The Dual Bootstrap," NBER Chapters, in: Race, Ethnicity, and Economic Statistics for the 21st Century, National Bureau of Economic Research, Inc.
    11. Nur Sunar & Jayashankar M. Swaminathan, 2022. "Socially relevant and inclusive operations management," Production and Operations Management, Production and Operations Management Society, vol. 31(12), pages 4379-4392, December.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:jcsosc:v:8:y:2025:i:3:d:10.1007_s42001-025-00386-8. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.