
Fairness as an afterthought: An American perspective on fairness in model developer-clinician user collaborations

Authors

Listed:
  • John Banja
  • Judy Wawira Gichoya
  • Nicole Martinez-Martin
  • Lance A Waller
  • Gari D Clifford

Abstract

Numerous ethics guidelines on the ethical application of machine learning models have been issued over the last few years. Virtually every one of them mentions the importance of “fairness” in the development and use of these models. Unfortunately, though, these documents fail to provide a consensus definition or characterization of fairness. As one group of authors observed, they treat fairness as an “afterthought” whose importance is undeniable but whose essence seems strikingly elusive. In this essay, which offers a distinctly American treatment of “fairness,” we comment on a number of fairness formulations and on the qualitative and statistical methods that have been recommended for achieving fairness. We argue that none of them, at least from an American moral perspective, provides a one-size-fits-all definition of, or methodology for, securing fairness that could inform or standardize fairness across the universe of use cases in which machine learning is applied. Instead, we argue that because understandings and applications of fairness reflect a vast range of use contexts, model developers and clinician users will need to engage in thoughtful collaborations that examine how fairness should be conceived and operationalized in the use case at issue. Part II of this paper illustrates key moments in these collaborations, especially when disagreement arises within and between model developer and clinician user groups over whether a model is fair or unfair. We conclude by noting that these collaborations will likely continue over the lifetime of a model if its claim to fairness is to advance beyond “afterthought” status.

Author summary

This essay has two parts. The first part explains why a universal, all-inclusive definition of fairness that could ethically inform, justify, and standardize the ways machine learning models operationalize fairness has not emerged, at least in the United States. This explains to some degree why prominent healthcare groups that have offered ethical guidelines or recommendations for machine learning development treat fairness as vitally important yet gloss over attempts to define it. The second part traces the implications of the failure to adopt a one-size-fits-all definition and how that failure affects the moral contours of the model developer-clinician user relationship. The importance of this conversation is heightened by four facts: machine learning models are virtually unregulated in the United States outside of general safety considerations; no methodological framework exists for identifying fairness-related issues and incorporating mitigation techniques in machine learning design; model developers might not be particularly sensitive to how fairness plays out in their models; and “honest” disagreement can exist between model developers and clinician users over whether a given model is fair or unfair.
We conclude by noting that if achieving algorithmic “fairness” is as challenging as we believe it to be, then 1) the content of fairness conceptualizations will depend heavily on the specific use case under scrutiny, 2) model developers and clinician users will need to be keenly sensitive to how fairness affects the patient populations in those cases, and 3) model developers, clinician users, and the populations affected by a model will need to engage in collaborative efforts throughout the model's life that aim at operationalizing and realizing justifiable understandings and applications of fairness practices.
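
The paper's thesis that no single formulation of fairness fits all use cases is mirrored in the well-known result that common statistical fairness criteria can conflict with one another. As a purely illustrative aid (not drawn from the paper), the Python sketch below computes two widely used group-fairness metrics, demographic parity difference and equalized odds difference, for a hypothetical binary classifier and a binary sensitive attribute; the toy data are chosen so the classifier looks fair by one criterion and unfair by the other.

    # Minimal illustrative sketch (not from the paper): two common
    # group-fairness metrics for a hypothetical binary classifier.
    import numpy as np

    def demographic_parity_diff(y_pred, group):
        """Absolute gap in positive-prediction rates between the two groups."""
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    def equalized_odds_diff(y_true, y_pred, group):
        """Largest gap between groups in TPR (y_true == 1) or FPR (y_true == 0)."""
        gaps = []
        for label in (1, 0):
            rate_a = y_pred[(group == 0) & (y_true == label)].mean()
            rate_b = y_pred[(group == 1) & (y_true == label)].mean()
            gaps.append(abs(rate_a - rate_b))
        return max(gaps)

    # Toy data: positive-prediction rates are identical across groups,
    # yet false-positive rates differ, so the model satisfies demographic
    # parity while violating equalized odds.
    y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
    y_pred = np.array([1, 1, 0, 0, 1, 1, 0, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    print(demographic_parity_diff(y_pred, group))      # 0.0
    print(equalized_odds_diff(y_true, y_pred, group))  # ~0.33

Because the two metrics disagree about the same predictions, choosing between them is itself a value judgment, which is exactly the kind of use-case-specific decision the authors argue developers and clinicians must negotiate together.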

Suggested Citation

  • John Banja & Judy Wawira Gichoya & Nicole Martinez-Martin & Lance A Waller & Gari D Clifford, 2023. "Fairness as an afterthought: An American perspective on fairness in model developer-clinician user collaborations," PLOS Digital Health, Public Library of Science, vol. 2(11), pages 1-15, November.
  • Handle: RePEc:plo:pdig00:0000386
    DOI: 10.1371/journal.pdig.0000386

    Download full text from publisher

    File URL: https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000386
    Download Restriction: no

    File URL: https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0000386&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pdig.0000386?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. María Agustina Ricci Lara & Rodrigo Echeveste & Enzo Ferrante, 2022. "Addressing fairness in artificial intelligence for medical imaging," Nature Communications, Nature, vol. 13(1), pages 1-6, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Emilio Ferrara, 2024. "GenAI against humanity: nefarious applications of generative artificial intelligence and large language models," Journal of Computational Social Science, Springer, vol. 7(1), pages 549-569, April.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pdig00:0000386. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: digitalhealth (email available below). General contact details of provider: https://journals.plos.org/digitalhealth .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.