Printed from https://ideas.repec.org/a/inm/ormsom/v24y2022i6p3039-3059.html

Antidiscrimination Laws, Artificial Intelligence, and Gender Bias: A Case Study in Nonmortgage Fintech Lending

Authors

  • Stephanie Kelley

    (Smith School of Business, Queen’s University, Kingston, Ontario K7L 3N6, Canada)

  • Anton Ovchinnikov

    (Smith School of Business, Queen’s University, Kingston, Ontario K7L 3N6, Canada; INSEAD, 77300 Fontainebleau, France)

  • David R. Hardoon

    (Artificial Intelligence and Innovation Center of Excellence, Union Bank of the Philippines, and Aboitiz Data Innovation, Pasig City 1605, Philippines)

  • Adrienne Heinrich

    (Artificial Intelligence and Innovation Center of Excellence, Union Bank of the Philippines, and Aboitiz Data Innovation, Pasig City 1605, Philippines)

Abstract

Problem definition: We use a realistically large, publicly available data set from a global fintech lender to simulate the impact of different antidiscrimination laws, and their corresponding data-management and model-building regimes, on gender-based discrimination in the nonmortgage fintech lending setting.

Academic/practical relevance: Our paper extends the conceptual understanding of model-based discrimination from computer science to a realistic context that simulates the situations fintech lenders face in practice, where advanced machine learning (ML) techniques are applied to high-dimensional, feature-rich, highly multicollinear data. We provide technically and legally permissible approaches for firms to reduce discrimination across different antidiscrimination regimes while managing profitability.

Methodology: We train statistical and ML models on a large, realistically rich, publicly available data set to simulate different antidiscrimination regimes and measure their impact on model quality and firm profitability. We use ML explainability techniques to understand the drivers of ML discrimination.

Results: We find that regimes that prohibit the use of gender (such as those in the United States) substantially increase discrimination and slightly decrease firm profitability. ML models are less discriminatory, of better predictive quality, and more profitable than traditional statistical models such as logistic regression. Unlike omitted-variable bias, which drives discrimination in statistical models, ML discrimination is driven by changes in the model-training procedure, including feature engineering and feature selection, when gender is excluded. Down-sampling the training data to rebalance gender, gender-aware hyperparameter selection, and up-sampling the training data to rebalance gender all reduce discrimination, with varying trade-offs in predictive quality and firm profitability. Probabilistic gender proxy modeling (imputing applicant gender) further reduces discrimination with a negligible impact on predictive quality and a slight increase in firm profitability.

Managerial implications: A rethink of antidiscrimination laws is required, specifically with respect to the collection and use of protected attributes for ML models. Firms should be able to collect protected attributes to, at a minimum, measure discrimination and, ideally, take steps to reduce it. Increased data access should come with greater accountability for firms.
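One of the mitigation steps the abstract names, rebalancing the training data by gender via down-sampling, can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline; the function name `downsample_by_gender` and the `gender` field are assumptions for the example.

```python
import random

def downsample_by_gender(rows, gender_key="gender", seed=0):
    """Rebalance a training set so each gender group is the same size,
    by randomly down-sampling every group to the size of the smallest.

    `rows` is a list of dict records; `gender_key` names the protected
    attribute. (Illustrative names, not from the paper's code.)
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    groups = {}
    for r in rows:
        groups.setdefault(r[gender_key], []).append(r)
    n_min = min(len(g) for g in groups.values())
    balanced = []
    for g in groups.values():
        balanced.extend(rng.sample(g, n_min))  # sample without replacement
    rng.shuffle(balanced)
    return balanced
```

Up-sampling would work the same way but draw with replacement from the minority group up to the majority-group size; the trade-off, as the abstract notes, is in predictive quality and profitability rather than in implementation difficulty.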
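Probabilistic gender proxy modeling imputes a probability of an applicant's gender from observable, nonprotected information when gender itself cannot be collected. A minimal frequency-based sketch, assuming a hypothetical reference table that maps an observable feature to gender counts; this is not the paper's implementation, only an illustration of the idea.

```python
def gender_proxy_probability(applicants, reference, feature="title"):
    """Impute P(female) for each applicant from a reference table mapping
    an observable feature to gender counts (e.g., {"Ms": {"F": 95, "M": 5}}).

    The feature name and reference data are hypothetical; any observable
    attribute correlated with gender could play this role.
    """
    probs = []
    for a in applicants:
        counts = reference.get(a[feature])
        if counts is None:
            probs.append(0.5)  # uninformative prior for unseen feature values
        else:
            probs.append(counts["F"] / (counts["F"] + counts["M"]))
    return probs
```

The imputed probabilities can then be used to measure (and correct for) discrimination in regimes where collecting gender directly is prohibited, which is the use the abstract describes.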

Suggested Citation

  • Stephanie Kelley & Anton Ovchinnikov & David R. Hardoon & Adrienne Heinrich, 2022. "Antidiscrimination Laws, Artificial Intelligence, and Gender Bias: A Case Study in Nonmortgage Fintech Lending," Manufacturing & Service Operations Management, INFORMS, vol. 24(6), pages 3039-3059, November.
  • Handle: RePEc:inm:ormsom:v:24:y:2022:i:6:p:3039-3059
    DOI: 10.1287/msom.2022.1108

