Printed from https://ideas.repec.org/a/plo/pone00/0298037.html

Putting a human in the loop: Increasing uptake, but decreasing accuracy of automated decision-making

Author

Listed:
  • Daniela Sele
  • Marina Chugunova

Abstract

Automated decision-making is gaining traction, prompting regulatory discussions that call for human oversight. Understanding how human involvement affects both the acceptance of algorithmic recommendations and the accuracy of the resulting decisions is therefore vital. In an online experiment (N = 292), participants facing a prediction task chose a recommendation stemming either from an algorithm or from another participant. In a between-subject design, we varied whether the prediction was delegated completely or whether the recommendation could be adjusted. In 66% of cases, participants preferred to delegate the decision to an algorithm over an equally accurate human. The preference for an algorithm increased by 7 percentage points when participants could monitor and adjust the recommendations. Participants followed algorithmic recommendations more closely than human ones. Importantly, they were less likely to intervene with the least accurate recommendations. Hence, in our experiment the human-in-the-loop design increased the uptake but decreased the accuracy of the decisions.

Suggested Citation

  • Daniela Sele & Marina Chugunova, 2024. "Putting a human in the loop: Increasing uptake, but decreasing accuracy of automated decision-making," PLOS ONE, Public Library of Science, vol. 19(2), pages 1-14, February.
  • Handle: RePEc:plo:pone00:0298037
    DOI: 10.1371/journal.pone.0298037

    Download full text from publisher

    File URL: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0298037
    Download Restriction: no

    File URL: https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0298037&type=printable
    Download Restriction: no

    File URL: https://libkey.io/10.1371/journal.pone.0298037?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    References listed on IDEAS

    1. Edwards, Lilian & Veale, Michael, 2017. "Slave to the Algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for," LawRxiv 97upg, Center for Open Science.
    2. Berkeley J. Dietvorst & Joseph P. Simmons & Cade Massey, 2018. "Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them," Management Science, INFORMS, vol. 64(3), pages 1155-1170, March.
    3. Chugunova, Marina & Sele, Daniela, 2022. "We and It: An interdisciplinary review of the experimental evidence on how humans interact with machines," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 99(C).
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Daniela Sele & Marina Chugunova, 2023. "Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy of Automated Decision-Making," Rationality and Competition Discussion Paper Series 438, CRC TRR 190 Rationality and Competition.
    2. van de Kerkhof, Jacob, 2025. "Article 22 Digital Services Act: Building trust with trusted flaggers," Internet Policy Review: Journal on Internet Regulation, Alexander von Humboldt Institute for Internet and Society (HIIG), Berlin, vol. 14(1), pages 1-26.
    3. Ivanova-Stenzel, Radosveta & Tolksdorf, Michel, 2024. "Measuring preferences for algorithms — How willing are people to cede control to algorithms?," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 112(C).
    4. Gorny, Paul M. & Groos, Eva & Strobel, Christina, 2024. "Do Personalized AI Predictions Change Subsequent Decision-Outcomes? The Impact of Human Oversight," MPRA Paper 121065, University Library of Munich, Germany.
    5. Duan Bo & Aini Azeqa Marof & Zeinab Zaremohzzabieh, 2025. "The Influence of Negative Stereotypes in Science Fiction and Fantasy on Public Perceptions of Artificial Intelligence: A Systematic Review," Studies in Media and Communication, Redfame publishing, vol. 13(1), pages 180-190, March.
    6. Vomberg, Arnd & Schauerte, Nico & Krakowski, Sebastian & Ingram Bogusz, Claire & Gijsenberg, Maarten J. & Bleier, Alexander, 2023. "The cold-start problem in nascent AI strategy: Kickstarting data network effects," Journal of Business Research, Elsevier, vol. 168(C).
    7. Michael Vössing & Niklas Kühl & Matteo Lind & Gerhard Satzger, 2022. "Designing Transparency for Effective Human-AI Collaboration," Information Systems Frontiers, Springer, vol. 24(3), pages 877-895, June.
    8. Dimitris Bertsimas & Agni Orfanoudaki, 2021. "Algorithmic Insurance," Papers 2106.00839, arXiv.org, revised Dec 2022.
    9. Mahmud, Hasan & Islam, A.K.M. Najmul & Ahmed, Syed Ishtiaque & Smolander, Kari, 2022. "What influences algorithmic decision-making? A systematic literature review on algorithm aversion," Technological Forecasting and Social Change, Elsevier, vol. 175(C).
    10. Bryce McLaughlin & Jann Spiess, 2022. "Algorithmic Assistance with Recommendation-Dependent Preferences," Papers 2208.07626, arXiv.org, revised Jan 2024.
    11. Doumpos, Michalis & Zopounidis, Constantin & Gounopoulos, Dimitrios & Platanakis, Emmanouil & Zhang, Wenke, 2023. "Operational research and artificial intelligence methods in banking," European Journal of Operational Research, Elsevier, vol. 306(1), pages 1-16.
    12. Markus Jung & Mischa Seiter, 2021. "Towards a better understanding on mitigating algorithm aversion in forecasting: an experimental study," Journal of Management Control: Zeitschrift für Planung und Unternehmenssteuerung, Springer, vol. 32(4), pages 495-516, December.
    13. Tse, Tiffany Tsz Kwan & Hanaki, Nobuyuki & Mao, Bolin, 2024. "Beware the performance of an algorithm before relying on it: Evidence from a stock price forecasting experiment," Journal of Economic Psychology, Elsevier, vol. 102(C).
    14. Fabian Dvorak & Regina Stumpf & Sebastian Fehrler & Urs Fischbacher, 2024. "Generative AI Triggers Welfare-Reducing Decisions in Humans," Papers 2401.12773, arXiv.org.
    15. Jan René Judek, 2024. "Willingness to Use Algorithms Varies with Social Information on Weak vs. Strong Adoption: An Experimental Study on Algorithm Aversion," FinTech, MDPI, vol. 3(1), pages 1-11, January.
    16. König, Pascal D. & Wenzelburger, Georg, 2021. "The legitimacy gap of algorithmic decision-making in the public sector: Why it arises and how to address it," Technology in Society, Elsevier, vol. 67(C).
    17. Kohei Kawaguchi, 2021. "When Will Workers Follow an Algorithm? A Field Experiment with a Retail Business," Management Science, INFORMS, vol. 67(3), pages 1670-1695, March.
    18. Francis de Véricourt & Huseyin Gurkan, 2022. "Is your machine better than you? You may never know," ESMT Research Working Papers ESMT-22-02, ESMT European School of Management and Technology.
    19. Zhu, Yimin & Zhang, Jiemin & Wu, Jifei & Liu, Yingyue, 2022. "AI is better when I'm sure: The influence of certainty of needs on consumers' acceptance of AI chatbots," Journal of Business Research, Elsevier, vol. 150(C), pages 642-652.
    20. Vasiliki Koniakou, 2023. "From the “rush to ethics” to the “race for governance” in Artificial Intelligence," Information Systems Frontiers, Springer, vol. 25(1), pages 71-102, February.

    More about this item

    Statistics

    Access and download statistics

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:plo:pone00:0298037. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: plosone (email available below). General contact details of provider: https://journals.plos.org/plosone/ .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.