
Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities

Authors

  • Roman Lukyanenko (University of Virginia)

  • Wolfgang Maass (Saarland University and German Research Center for Artificial Intelligence (DFKI))

  • Veda C. Storey (Georgia State University)

Abstract

With the rise of artificial intelligence (AI), the issue of trust in AI emerges as a paramount societal concern. Despite increased attention from researchers, the topic remains fragmented, without a common conceptual and theoretical foundation. To facilitate systematic research on this topic, we develop a Foundational Trust Framework to provide a conceptual, theoretical, and methodological foundation for trust research in general. The framework positions trust in general, and trust in AI specifically, as a problem of interaction among systems, and applies systems thinking and general systems theory to trust and trust in AI. The Foundational Trust Framework is then used to gain a deeper understanding of the nature of trust in AI. From this analysis emerges a research agenda that proposes significant questions to facilitate further advances in empirical, theoretical, and design research on trust in AI.

Suggested Citation

  • Roman Lukyanenko & Wolfgang Maass & Veda C. Storey, 2022. "Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities," Electronic Markets, Springer;IIM University of St. Gallen, vol. 32(4), pages 1993-2020, December.
  • Handle: RePEc:spr:elmark:v:32:y:2022:i:4:d:10.1007_s12525-022-00605-4
    DOI: 10.1007/s12525-022-00605-4

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s12525-022-00605-4
    File Function: Abstract
    Download Restriction: Access to the full text of the articles in this series is restricted.

    File URL: https://libkey.io/10.1007/s12525-022-00605-4?utm_source=ideas
    LibKey link: If access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Sirkka L. Jarvenpaa & Dorothy E. Leidner, 1999. "Communication and Trust in Global Virtual Teams," Organization Science, INFORMS, vol. 10(6), pages 791-815, December.
    2. François Bodart & Arvind Patel & Marc Sim & Ron Weber, 2001. "Should Optional Properties Be Used in Conceptual Modelling? A Theory and Three Empirical Tests," Information Systems Research, INFORMS, vol. 12(4), pages 384-405, December.
    3. Schniter, E. & Shields, T.W. & Sznycer, D., 2020. "Trust in humans and robots: Economically similar but emotionally different," Journal of Economic Psychology, Elsevier, vol. 78(C).
    4. René Riedl, 2022. "Is trust in artificial intelligence systems related to user personality? Review of empirical evidence and future research directions," Electronic Markets, Springer;IIM University of St. Gallen, vol. 32(4), pages 2021-2051, December.
    5. Frey, Carl Benedikt & Osborne, Michael A., 2017. "The future of employment: How susceptible are jobs to computerisation?," Technological Forecasting and Social Change, Elsevier, vol. 114(C), pages 254-280.
    6. Frens Kroeger, 2019. "Unlocking the treasure trove: How can Luhmann’s theory of trust enrich trust research?," Journal of Trust Research, Taylor & Francis Journals, vol. 9(1), pages 110-124, January.
    7. DonHee Lee & Seong No Yoon, 2021. "Application of Artificial Intelligence-Based Technologies in the Healthcare Industry: Opportunities and Challenges," IJERPH, MDPI, vol. 18(1), pages 1-18, January.
    8. Rongbin Yang & Santoso Wibowo, 2022. "User trust in artificial intelligence: A comprehensive conceptual framework," Electronic Markets, Springer;IIM University of St. Gallen, vol. 32(4), pages 2053-2077, December.
    9. Scott Thiebes & Sebastian Lins & Ali Sunyaev, 2021. "Trustworthy artificial intelligence," Electronic Markets, Springer;IIM University of St. Gallen, vol. 31(2), pages 447-464, June.
    10. Michael Haenlein & Ming-Hui Huang & Andreas Kaplan, 2022. "Guest Editorial: Business Ethics in the Era of Artificial Intelligence," Journal of Business Ethics, Springer, vol. 178(4), pages 867-869, July.
    11. Arun Rai, 2020. "Explainable AI: from black box to glass box," Journal of the Academy of Marketing Science, Springer, vol. 48(1), pages 137-141, January.
    12. Vijay Khatri & Iris Vessey & V. Ramesh & Paul Clay & Sung-Jin Park, 2006. "Understanding Conceptual Schemas: Exploring the Role of Application and IS Domain Knowledge," Information Systems Research, INFORMS, vol. 17(1), pages 81-99, March.
    13. Jonas Wanner & Lukas-Valentin Herm & Kai Heinrich & Christian Janiesch, 2022. "The effect of transparency and trust on intelligent system acceptance: Evidence from a user-based study," Electronic Markets, Springer;IIM University of St. Gallen, vol. 32(4), pages 2079-2102, December.
    14. Jia, Kai & Kenney, Martin & Mattila, Juri & Seppälä, Timo, 2018. "The Application of Artificial Intelligence at Chinese Digital Platform Giants: Baidu, Alibaba and Tencent," ETLA Reports 81, The Research Institute of the Finnish Economy.
    15. Yair Wand & Ron Weber, 2002. "Research Commentary: Information Systems and Conceptual Modeling—A Research Agenda," Information Systems Research, INFORMS, vol. 13(4), pages 363-376, December.
    16. Kate Crawford & Ryan Calo, 2016. "There is a blind spot in AI research," Nature, Nature, vol. 538(7625), pages 311-313, October.
    17. Fabrice Lumineau & Oliver Schilke, 2018. "Trust development across levels of analysis: An embedded-agency perspective," Journal of Trust Research, Taylor & Francis Journals, vol. 8(2), pages 238-248, July.
    18. Wan, Yinglin & Gao, Yuchen & Hu, Yimei, 2022. "Blockchain application and collaborative innovation in the manufacturing industry: Based on the perspective of social trust," Technological Forecasting and Social Change, Elsevier, vol. 177(C).
    19. Davide Castelvecchi, 2016. "Can we open the black box of AI?," Nature, Nature, vol. 538(7623), pages 20-23, October.
    20. Boero, Riccardo & Bravo, Giangiacomo & Castellani, Marco & Squazzoni, Flaminio, 2009. "Reputational cues in repeated trust games," Journal of Behavioral and Experimental Economics (formerly The Journal of Socio-Economics), Elsevier, vol. 38(6), pages 871-877, December.
    Full references (including those not matched with items on IDEAS)

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.

    Cited by:

    1. Rainer Alt, 2022. "Electronic Markets on AI and standardization," Electronic Markets, Springer;IIM University of St. Gallen, vol. 32(4), pages 1795-1805, December.

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Jan Mendling & Jan Recker & Hajo A. Reijers & Henrik Leopold, 2019. "An Empirical Review of the Connection Between Model Viewer Characteristics and the Comprehension of Conceptual Process Models," Information Systems Frontiers, Springer, vol. 21(5), pages 1111-1135, October.
    2. Andrew Burton-Jones & Peter N. Meso, 2006. "Conceptualizing Systems for Understanding: An Empirical Test of Decomposition Principles in Object-Oriented Analysis," Information Systems Research, INFORMS, vol. 17(1), pages 38-60, March.
    3. Nils Köbis & Jean-François Bonnefon & Iyad Rahwan, 2021. "Bad machines corrupt good morals," Nature Human Behaviour, Nature, vol. 5(6), pages 679-685, June.
    4. Sofianos, Andis, 2022. "Self-reported & revealed trust: Experimental evidence," Journal of Economic Psychology, Elsevier, vol. 88(C).
    5. Fábio Duarte & Ricardo Álvarez, 2019. "The data politics of the urban age," Palgrave Communications, Palgrave Macmillan, vol. 5(1), pages 1-7, December.
    6. Palash Bera, 2021. "Interactions between Analysts in Developing Collaborative Conceptual Models," Information Systems Frontiers, Springer, vol. 23(3), pages 561-573, June.
    7. Hemant Jain & Balaji Padmanabhan & Paul A. Pavlou & T. S. Raghu, 2021. "Editorial for the Special Section on Humans, Algorithms, and Augmented Intelligence: The Future of Work, Organizations, and Society," Information Systems Research, INFORMS, vol. 32(3), pages 675-687, September.
    8. Eric Overby, 2008. "Process Virtualization Theory and the Impact of Information Technology," Organization Science, INFORMS, vol. 19(2), pages 277-291, April.
    9. Naudé, Wim & Dimitri, Nicola, 2021. "Public Procurement and Innovation for Human-Centered Artificial Intelligence," IZA Discussion Papers 14021, Institute of Labor Economics (IZA).
    10. Piotr Tomasz Makowski & Yuya Kajikawa, 2021. "Automation-driven innovation management? Toward Innovation-Automation-Strategy cycle," Papers 2103.02395, arXiv.org.
    11. Manav Raj & Robert Seamans, 2019. "Primer on artificial intelligence and robotics," Journal of Organization Design, Springer;Organizational Design Community, vol. 8(1), pages 1-14, December.
    12. Rongbin Yang & Santoso Wibowo, 2022. "User trust in artificial intelligence: A comprehensive conceptual framework," Electronic Markets, Springer;IIM University of St. Gallen, vol. 32(4), pages 2053-2077, December.
    13. Ming-Hui Huang & Roland T. Rust, 2021. "A strategic framework for artificial intelligence in marketing," Journal of the Academy of Marketing Science, Springer, vol. 49(1), pages 30-50, January.
    14. Makowski, Piotr Tomasz & Kajikawa, Yuya, 2021. "Automation-driven innovation management? Toward Innovation-Automation-Strategy cycle," Technological Forecasting and Social Change, Elsevier, vol. 168(C).
    15. Palash Bera & Andrew Burton-Jones & Yair Wand, 2014. "Research Note: How Semantics and Pragmatics Interact in Understanding Conceptual Models," Information Systems Research, INFORMS, vol. 25(2), pages 401-419, June.
    16. Pascal Hamm & Michael Klesel & Patricia Coberger & H. Felix Wittmann, 2023. "Explanation matters: An experimental study on explainable AI," Electronic Markets, Springer;IIM University of St. Gallen, vol. 33(1), pages 1-21, December.
    17. Lukas-Valentin Herm & Theresa Steinbach & Jonas Wanner & Christian Janiesch, 2022. "A nascent design theory for explainable intelligent systems," Electronic Markets, Springer;IIM University of St. Gallen, vol. 32(4), pages 2185-2205, December.
    18. Guan, Jian & Levitan, Alan S. & Kuhn, John R., 2013. "How AIS can progress along with ontology research in IS," International Journal of Accounting Information Systems, Elsevier, vol. 14(1), pages 21-38.
    19. Jana Gerlach & Paul Hoppe & Sarah Jagels & Luisa Licker & Michael H. Breitner, 2022. "Decision support for efficient XAI services - A morphological analysis, business model archetypes, and a decision tree," Electronic Markets, Springer;IIM University of St. Gallen, vol. 32(4), pages 2139-2158, December.
    20. Loebbing, Jonas, 2018. "An Elementary Theory of Endogenous Technical Change and Wage Inequality," VfS Annual Conference 2018 (Freiburg, Breisgau): Digital Economy 181603, Verein für Socialpolitik / German Economic Association.

    More about this item

    Keywords

    Artificial intelligence (AI); Trust; Foundational Trust Framework; Trust in AI; Explainable AI; Transparency; Systems

    JEL classification:

    • L63 - Industrial Organization - - Industry Studies: Manufacturing - - - Microelectronics; Computers; Communications Equipment
    • L64 - Industrial Organization - - Industry Studies: Manufacturing - - - Other Machinery; Business Equipment; Armaments
    • L86 - Industrial Organization - - Industry Studies: Services - - - Information and Internet Services; Computer Software
    • C80 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs - - - General
    • D11 - Microeconomics - - Household Behavior - - - Consumer Economics: Theory
    • C71 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Cooperative Games
    • C72 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Noncooperative Games
    • C73 - Mathematical and Quantitative Methods - - Game Theory and Bargaining Theory - - - Stochastic and Dynamic Games; Evolutionary Games
    • J00 - Labor and Demographic Economics - - General - - - General


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:elmark:v:32:y:2022:i:4:d:10.1007_s12525-022-00605-4. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows your profile to be linked to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.