
Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy of Automated Decision-Making

Author

Listed:
  • Daniela Sele (ETH)
  • Marina Chugunova (Max Planck Institute for Innovation and Competition)

Abstract

Are people algorithm averse, as some previous literature indicates? If so, can the retention of human oversight increase the uptake of algorithmic recommendations, and does keeping a human in the loop improve accuracy? Answers to these questions are of utmost importance given the fast-growing availability of algorithmic recommendations and the current intense discussions about regulation of automated decision-making. In an online experiment, we find that 66% of participants prefer algorithmic to equally accurate human recommendations if the decision is delegated fully. This preference for algorithms increases by a further 7 percentage points if participants are able to monitor and adjust the recommendations before the decision is made. In line with automation bias, participants adjust recommendations that stem from an algorithm less than those from another human. Importantly, participants are less likely to intervene with the least accurate recommendations and adjust them less, raising concerns about the monitoring ability of a human in a Human-in-the-Loop system. Our results document a trade-off: while allowing people to adjust algorithmic recommendations increases their uptake, the adjustments made by the human monitors reduce the quality of final decisions.

Suggested Citation

  • Daniela Sele & Marina Chugunova, 2023. "Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy of Automated Decision-Making," Rationality and Competition Discussion Paper Series 438, CRC TRR 190 Rationality and Competition.
  • Handle: RePEc:rco:dpaper:438

    Download full text from publisher

    File URL: https://rationality-and-competition.de/wp-content/uploads/discussion_paper/438.pdf
    Download Restriction: no

    References listed on IDEAS

    1. Berkeley J. Dietvorst & Joseph P. Simmons & Cade Massey, 2018. "Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them," Management Science, INFORMS, vol. 64(3), pages 1155-1170, March.
    2. Edwards, Lilian & Veale, Michael, 2017. "Slave to the Algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for," LawArXiv 97upg, Center for Open Science.
    3. Jon Kleinberg & Himabindu Lakkaraju & Jure Leskovec & Jens Ludwig & Sendhil Mullainathan, 2018. "Human Decisions and Machine Predictions," The Quarterly Journal of Economics, President and Fellows of Harvard College, vol. 133(1), pages 237-293.
    4. Chugunova, Marina & Sele, Daniela, 2022. "We and It: An interdisciplinary review of the experimental evidence on how humans interact with machines," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 99(C).

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Gorny, Paul M. & Groos, Eva & Strobel, Christina, 2024. "Do Personalized AI Predictions Change Subsequent Decision-Outcomes? The Impact of Human Oversight," MPRA Paper 121065, University Library of Munich, Germany.
    2. Ekaterina Jussupow & Kai Spohrer & Armin Heinzl & Joshua Gawlitza, 2021. "Augmenting Medical Diagnosis Decisions? An Investigation into Physicians’ Decision-Making Process with Artificial Intelligence," Information Systems Research, INFORMS, vol. 32(3), pages 713-735, September.
    3. Kevin Bauer & Andrej Gill, 2024. "Mirror, Mirror on the Wall: Algorithmic Assessments, Transparency, and Self-Fulfilling Prophecies," Information Systems Research, INFORMS, vol. 35(1), pages 226-248, March.
    4. Talia Gillis & Bryce McLaughlin & Jann Spiess, 2021. "On the Fairness of Machine-Assisted Human Decisions," Papers 2110.15310, arXiv.org, revised Sep 2023.
    5. Said Kaawach & Oskar Kowalewski & Oleksandr Talavera, 2023. "Automatic vs Manual Investing: Role of Past Performance," Discussion Papers 23-04, Department of Economics, University of Birmingham.
    6. Vomberg, Arnd & Schauerte, Nico & Krakowski, Sebastian & Ingram Bogusz, Claire & Gijsenberg, Maarten J. & Bleier, Alexander, 2023. "The cold-start problem in nascent AI strategy: Kickstarting data network effects," Journal of Business Research, Elsevier, vol. 168(C).
    7. Maria De‐Arteaga & Stefan Feuerriegel & Maytal Saar‐Tsechansky, 2022. "Algorithmic fairness in business analytics: Directions for research and practice," Production and Operations Management, Production and Operations Management Society, vol. 31(10), pages 3749-3770, October.
    8. Marie-Pierre Dargnies & Rustamdjan Hakimov & Dorothea Kübler, 2022. "Aversion to Hiring Algorithms: Transparency, Gender Profiling, and Self-Confidence," CESifo Working Paper Series 9968, CESifo.
    9. Bauer, Kevin & von Zahn, Moritz & Hinz, Oliver, 2022. "Expl(AI)ned: The impact of explainable Artificial Intelligence on cognitive processes," SAFE Working Paper Series 315, Leibniz Institute for Financial Research SAFE, revised 2022.
    10. Scott Schanke & Gordon Burtch & Gautam Ray, 2021. "Estimating the Impact of “Humanizing” Customer Service Chatbots," Information Systems Research, INFORMS, vol. 32(3), pages 736-751, September.
    11. Keding, Christoph & Meissner, Philip, 2021. "Managerial overreliance on AI-augmented decision-making processes: How the use of AI-based advisory systems shapes choice behavior in R&D investment decisions," Technological Forecasting and Social Change, Elsevier, vol. 171(C).
    12. Chugunova, Marina & Sele, Daniela, 2022. "We and It: An interdisciplinary review of the experimental evidence on how humans interact with machines," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 99(C).
    13. Saravanan Kesavan & Tarun Kushwaha, 2020. "Field Experiment on the Profit Implications of Merchants’ Discretionary Power to Override Data-Driven Decision-Making Tools," Management Science, INFORMS, vol. 66(11), pages 5182-5190, November.
    14. Fumagalli, Elena & Rezaei, Sarah & Salomons, Anna, 2022. "OK computer: Worker perceptions of algorithmic recruitment," Research Policy, Elsevier, vol. 51(2).
    15. Bansak, Kirk & Paulson, Elisabeth, 2023. "Public Opinion on Fairness and Efficiency for Algorithmic and Human Decision-Makers," OSF Preprints pghmx, Center for Open Science.
    16. Dionissi Aliprantis & Hal Martin & Kristen Tauber, 2020. "What Determines the Success of Housing Mobility Programs?," Working Papers 20-36R, Federal Reserve Bank of Cleveland, revised 19 Oct 2022.
    17. Daníelsson, Jón & Macrae, Robert & Uthemann, Andreas, 2022. "Artificial intelligence and systemic risk," Journal of Banking & Finance, Elsevier, vol. 140(C).
    18. Yucheng Yang & Zhong Zheng & Weinan E, 2020. "Interpretable Neural Networks for Panel Data Analysis in Economics," Papers 2010.05311, arXiv.org, revised Nov 2020.
    19. Daniel Carter & Amelia Acker & Dan Sholler, 2021. "Investigative approaches to researching information technology companies," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 72(6), pages 655-666, June.
    20. Dimitris Bertsimas & Agni Orfanoudaki, 2021. "Algorithmic Insurance," Papers 2106.00839, arXiv.org, revised Dec 2022.

    More about this item

    Keywords

    automated decision-making; algorithm aversion; algorithm appreciation; automation bias;

    JEL classification:

    • O33 - Economic Development, Innovation, Technological Change, and Growth - - Innovation; Research and Development; Technological Change; Intellectual Property Rights - - - Technological Change: Choices and Consequences; Diffusion Processes
    • C90 - Mathematical and Quantitative Methods - - Design of Experiments - - - General
    • D90 - Microeconomics - - Micro-Based Behavioral Economics - - - General


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:rco:dpaper:438. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Viviana Lalli (email available below). General contact details of provider: https://rationality-and-competition.de .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.