
Public Opinion on Fairness and Efficiency for Algorithmic and Human Decision-Makers

Author

Listed:
  • Bansak, Kirk
  • Paulson, Elisabeth

Abstract

This study explores the public's preferences between algorithmic and human decision-makers (DMs) in high-stakes contexts, how these preferences are impacted by performance metrics, and whether the public's evaluation of performance differs when considering algorithmic versus human DMs. Leveraging a conjoint experimental design, respondents (n = 9,030) chose between pairs of DM profiles in two scenarios: pre-trial release decisions and bank loan decisions. DM profiles varied on the DM’s type (human v. algorithm) and on three metrics—defendant crime rate/loan default rate, false positive rate (FPR) among white defendants/applicants, and FPR among minority defendants/applicants—as well as an implicit fairness metric defined by the absolute difference between the two FPRs. Controlling for performance, we observe a general tendency to favor human DMs, though this is driven by a subset of respondents who expect human DMs to perform better in the real world. In addition, although a large portion of respondents claimed to prioritize fairness, we find that the impact of fairness on respondents' actual choices is limited. We also find that the relative importance of the four performance metrics remains consistent across DM type, suggesting that the public's preferences related to DM performance do not vary fundamentally between algorithmic and human DMs. Taken together, our analysis suggests that the public as a whole does not hold algorithmic DMs to a stricter fairness or efficiency standard, which has important implications as policymakers and technologists grapple with the integration of AI into pivotal societal functions.
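
For readers who want the implicit fairness metric spelled out, a minimal formulation (the notation below is ours, not the paper's) is

    fairness gap = | FPR_white − FPR_minority |

where FPR_white and FPR_minority denote the false positive rates among white and minority defendants/applicants, respectively, so a smaller gap corresponds to a more equal error burden across the two groups.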

Suggested Citation

  • Bansak, Kirk & Paulson, Elisabeth, 2023. "Public Opinion on Fairness and Efficiency for Algorithmic and Human Decision-Makers," OSF Preprints pghmx, Center for Open Science.
  • Handle: RePEc:osf:osfxxx:pghmx
    DOI: 10.31219/osf.io/pghmx

    Download full text from publisher

    File URL: https://osf.io/download/6531bd2287852d0afda59372/
    Download Restriction: no

    File URL: https://libkey.io/10.31219/osf.io/pghmx?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a source where you can use your library subscription to access this item.

    References listed on IDEAS

    1. Chiara Longoni & Andrea Bonezzi & Carey K Morewedge, 2019. "Resistance to Medical Artificial Intelligence," Journal of Consumer Research, Journal of Consumer Research Inc., vol. 46(4), pages 629-650.
    2. Berkeley J. Dietvorst & Joseph P. Simmons & Cade Massey, 2018. "Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them," Management Science, INFORMS, vol. 64(3), pages 1155-1170, March.
    3. Alvarez, R. Michael & Atkeson, Lonna Rae & Levin, Ines & Li, Yimeng, 2019. "Paying Attention to Inattentive Survey Respondents," Political Analysis, Cambridge University Press, vol. 27(2), pages 145-162, April.
    4. Bansak, Kirk, 2019. "Can nonexperts really emulate statistical learning methods? A comment on “The accuracy, fairness, and limits of predicting recidivism”," Political Analysis, Cambridge University Press, vol. 27(3), pages 370-380, July.
    5. Richard Berk & Hoda Heidari & Shahin Jabbari & Michael Kearns & Aaron Roth, 2021. "Fairness in Criminal Justice Risk Assessments: The State of the Art," Sociological Methods & Research, vol. 50(1), pages 3-44, February.
    6. Bansak, Kirk & Bechtel, Michael M. & Margalit, Yotam, 2021. "Why Austerity? The Mass Politics of a Contested Policy," American Political Science Review, Cambridge University Press, vol. 115(2), pages 486-505, May.
    7. Jussupow, Ekaterina & Benbasat, Izak & Heinzl, Armin, 2020. "Why Are We Averse Towards Algorithms? A Comprehensive Literature Review on Algorithm Aversion," Publications of Darmstadt Technical University, Institute for Business Studies (BWL) 138565, Darmstadt Technical University, Department of Business Administration, Economics and Law, Institute for Business Studies (BWL).
    8. Adam J. Berinsky & Michele F. Margolis & Michael W. Sances, 2014. "Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self‐Administered Surveys," American Journal of Political Science, John Wiley & Sons, vol. 58(3), pages 739-753, July.
    9. Jon Kleinberg & Himabindu Lakkaraju & Jure Leskovec & Jens Ludwig & Sendhil Mullainathan, 2018. "Human Decisions and Machine Predictions," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 133(1), pages 237-293.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Ekaterina Jussupow & Kai Spohrer & Armin Heinzl & Joshua Gawlitza, 2021. "Augmenting Medical Diagnosis Decisions? An Investigation into Physicians’ Decision-Making Process with Artificial Intelligence," Information Systems Research, INFORMS, vol. 32(3), pages 713-735, September.
    2. Chugunova, Marina & Sele, Daniela, 2022. "We and It: An interdisciplinary review of the experimental evidence on how humans interact with machines," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 99(C).
    3. Vecchio, Riccardo & Caso, Gerarda & Cembalo, Luigi & Borrello, Massimiliano, 2020. "Is respondents’ inattention in online surveys a major issue for research?," Economia agro-alimentare / Food Economy, Italian Society of Agri-food Economics/Società Italiana di Economia Agro-Alimentare (SIEA), vol. 22(1), March.
    4. Maude Lavanchy & Patrick Reichert & Jayanth Narayanan & Krishna Savani, 2023. "Applicants’ Fairness Perceptions of Algorithm-Driven Hiring Procedures," Journal of Business Ethics, Springer, vol. 188(1), pages 125-150, November.
    5. Zhu, Yimin & Zhang, Jiemin & Wu, Jifei & Liu, Yingyue, 2022. "AI is better when I'm sure: The influence of certainty of needs on consumers' acceptance of AI chatbots," Journal of Business Research, Elsevier, vol. 150(C), pages 642-652.
    6. Benedikt Berger & Martin Adam & Alexander Rühr & Alexander Benlian, 2021. "Watch Me Improve—Algorithm Aversion and Demonstrating the Ability to Learn," Business & Information Systems Engineering: The International Journal of WIRTSCHAFTSINFORMATIK, Springer;Gesellschaft für Informatik e.V. (GI), vol. 63(1), pages 55-68, February.
    7. Jimin Pyo & Michael G. Maxfield, 2021. "Cognitive Effects of Inattentive Responding in an MTurk Sample," Social Science Quarterly, Southwestern Social Science Association, vol. 102(4), pages 2020-2039, July.
    8. Lingli Wang & Ni Huang & Yili Hong & Luning Liu & Xunhua Guo & Guoqing Chen, 2023. "Voice‐based AI in call center customer service: A natural field experiment," Production and Operations Management, Production and Operations Management Society, vol. 32(4), pages 1002-1018, April.
    9. Scott Schanke & Gordon Burtch & Gautam Ray, 2021. "Estimating the Impact of “Humanizing” Customer Service Chatbots," Information Systems Research, INFORMS, vol. 32(3), pages 736-751, September.
    10. Gregory Weitzner, 2024. "Reputational Algorithm Aversion," Papers 2402.15418, arXiv.org.
    11. Peng, Leiqing & Luo, Mengting & Guo, Yulang, 2023. "Deposit AI as the “invisible hand” to make the resale easier: A moderated mediation model," Journal of Retailing and Consumer Services, Elsevier, vol. 75(C).
    12. Keding, Christoph & Meissner, Philip, 2021. "Managerial overreliance on AI-augmented decision-making processes: How the use of AI-based advisory systems shapes choice behavior in R&D investment decisions," Technological Forecasting and Social Change, Elsevier, vol. 171(C).
    13. Gallego, Jorge & Rivero, Gonzalo & Martínez, Juan, 2021. "Preventing rather than punishing: An early warning model of malfeasance in public procurement," International Journal of Forecasting, Elsevier, vol. 37(1), pages 360-377.
    14. Mahmud, Hasan & Islam, A.K.M. Najmul & Mitra, Ranjan Kumar, 2023. "What drives managers towards algorithm aversion and how to overcome it? Mitigating the impact of innovation resistance through technology readiness," Technological Forecasting and Social Change, Elsevier, vol. 193(C).
    15. Dargnies, Marie-Pierre & Hakimov, Rustamdjan & Kübler, Dorothea, 2022. "Aversion to hiring algorithms: Transparency, gender profiling, and self-confidence," Discussion Papers, Research Unit: Market Behavior SP II 2022-202, WZB Berlin Social Science Center.
    16. Alabed, Amani & Javornik, Ana & Gregory-Smith, Diana, 2022. "AI anthropomorphism and its effect on users' self-congruence and self–AI integration: A theoretical framework and research agenda," Technological Forecasting and Social Change, Elsevier, vol. 182(C).
    17. Talia Gillis & Bryce McLaughlin & Jann Spiess, 2021. "On the Fairness of Machine-Assisted Human Decisions," Papers 2110.15310, arXiv.org, revised Sep 2023.
    18. Yoan Hermstrüwer & Pascal Langenbach, 2022. "Fair Governance with Humans and Machines," Discussion Paper Series of the Max Planck Institute for Research on Collective Goods 2022_04, Max Planck Institute for Research on Collective Goods, revised 01 Mar 2023.
    19. Daniela Sele & Marina Chugunova, 2023. "Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy of Automated Decision-Making," Rationality and Competition Discussion Paper Series 438, CRC TRR 190 Rationality and Competition.
    20. Said Kaawach & Oskar Kowalewski & Oleksandr Talavera, 2023. "Automatic vs Manual Investing: Role of Past Performance," Discussion Papers 23-04, Department of Economics, University of Birmingham.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:osf:osfxxx:pghmx. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: OSF (email available below). General contact details of provider: https://osf.io/preprints/.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.