IDEAS — https://ideas.repec.org/a/jis/ejistu/y2023i02id526.html

Awareness of Unethical Artificial Intelligence and its Mitigation Measures

Author

Listed:
  • BERNSTEINER Reinhard
  • PLODER Christian
  • SPIESS Teresa
  • DILGER Thomas
  • HÖLLER Sonja

Abstract

The infrastructure of the Internet is based on algorithms that enable the use of search engines, social networks, and much more. Algorithms vary in functionality, but many have the potential to reinforce, accentuate, and systematize age-old prejudices, biases, and implicit assumptions of society. Awareness of algorithms thus becomes an issue of agency, public life, and democracy. Nonetheless, as research has shown, people lack algorithm awareness. This paper therefore investigates the extent to which people are aware of unethical artificial intelligence and what actions they can take against it (mitigation measures). A survey addressing these factors yielded 291 valid responses. To examine the data and the relationships between the constructs in the model, partial least squares structural equation modeling (PLS-SEM) was applied using the SmartPLS 3 tool. The empirical results demonstrate that awareness of mitigation measures is influenced by the self-efficacy of the user, whereas trust in the algorithmic platform has no significant influence. In addition, the explainability of an algorithmic platform has a significant influence on the user's self-efficacy and should therefore be considered when setting up the platform. The mitigation measures most frequently mentioned by survey participants are laws and regulations, various types of algorithm audits, and education and training.
This work thus provides new empirical insights for researchers and practitioners in the field of ethical artificial intelligence.
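The structural model described in the abstract (explainability → self-efficacy → awareness of mitigation measures) can be illustrated with a minimal, hypothetical sketch of the composite-based idea behind PLS-SEM. This is not the authors' SmartPLS 3 analysis: the variable names, the synthetic data, and the structural weights are assumptions, and equal-weight composites stand in for PLS-SEM's iteratively estimated outer weights.

```python
import random
import statistics


def standardize(values):
    """Return z-scores of a list of values (sample standard deviation)."""
    mean = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]


def composite(indicator_columns):
    """Equal-weight composite score per respondent — a simplification of
    PLS-SEM's iteratively estimated outer weights."""
    standardized = [standardize(col) for col in indicator_columns]
    n = len(standardized[0])
    return [sum(col[i] for col in standardized) / len(standardized)
            for i in range(n)]


def path_coefficient(x, y):
    """Standardized simple-regression slope (Pearson r) between two
    composite scores — the single-predictor case of a PLS path estimate."""
    zx, zy = standardize(x), standardize(y)
    return sum(a * b for a, b in zip(zx, zy)) / (len(zx) - 1)


def indicators(latent, k=3, noise=0.5):
    """Generate k noisy survey items measuring one latent construct."""
    return [[v + random.gauss(0, noise) for v in latent] for _ in range(k)]


# Synthetic data mirroring the reported relationships: explainability drives
# self-efficacy, which in turn drives awareness of mitigation measures.
random.seed(42)
n = 291  # same sample size as the study
expl_latent = [random.gauss(0, 1) for _ in range(n)]
se_latent = [0.6 * e + random.gauss(0, 0.8) for e in expl_latent]
aw_latent = [0.5 * s + random.gauss(0, 0.8) for s in se_latent]

expl = composite(indicators(expl_latent))
self_eff = composite(indicators(se_latent))
awareness = composite(indicators(aw_latent))

print(f"explainability -> self-efficacy: {path_coefficient(expl, self_eff):.2f}")
print(f"self-efficacy -> awareness:      {path_coefficient(self_eff, awareness):.2f}")
```

On this synthetic sample, both path coefficients come out positive and substantial, echoing the direction (though not the exact values) of the effects the study reports; a real PLS-SEM analysis would additionally iterate the indicator weights and bootstrap significance tests.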

Suggested Citation

  • BERNSTEINER Reinhard & PLODER Christian & SPIESS Teresa & DILGER Thomas & HÖLLER Sonja, 2023. "Awareness of Unethical Artificial Intelligence and its Mitigation Measures," European Journal of Interdisciplinary Studies, Bucharest Economic Academy, issue 02, June.
  • Handle: RePEc:jis:ejistu:y:2023:i:02:id:526

    Download full text from publisher

    File URL: https://ejist.ro/files/pdf/526.pdf
    Download Restriction: no

    File URL: https://ejist.ro/abstract/526/Awareness-of-Unethical-Artificial-Intelligence-and-its-Mitigation-Measures.html
    Download Restriction: no
    ---><---

    References listed on IDEAS

    1. Qian Hu & Yaobin Lu & Zhao Pan & Yeming Gong & Zhiling Yang, 2021. "Can AI artifacts influence human cognition? : The effects of artificial autonomy in intelligent personal assistants," Post-Print hal-03188233, HAL.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Yi Sun & Shihui Li & Lingling Yu, 2022. "The dark sides of AI personal assistant: effects of service failure on user continuance intention," Electronic Markets, Springer;IIM University of St. Gallen, vol. 32(1), pages 17-39, March.
    2. Wu, Min & Wang, Nanxi & Yuen, Kum Fai, 2023. "Can autonomy level and anthropomorphic characteristics affect public acceptance and trust towards shared autonomous vehicles?," Technological Forecasting and Social Change, Elsevier, vol. 189(C).
    3. Hu, Qian & Pan, Zhao, 2023. "Can AI benefit individual resilience? The mediation roles of AI routinization and infusion," Journal of Retailing and Consumer Services, Elsevier, vol. 73(C).
    4. Guo, Wenshan & Luo, Qiangqiang, 2023. "Investigating the impact of intelligent personal assistants on the purchase intentions of Generation Z consumers: The moderating role of brand credibility," Journal of Retailing and Consumer Services, Elsevier, vol. 73(C).
    5. Chatterjee, Sheshadri & Rana, Nripendra P. & Dwivedi, Yogesh K. & Baabdullah, Abdullah M., 2021. "Understanding AI adoption in manufacturing and production firms using an integrated TAM-TOE model," Technological Forecasting and Social Change, Elsevier, vol. 170(C).
    6. Di Vaio, Assunta & Hassan, Rohail & Alavoine, Claude, 2022. "Data intelligence and analytics: A bibliometric analysis of human–Artificial intelligence in public sector decision-making effectiveness," Technological Forecasting and Social Change, Elsevier, vol. 174(C).
    7. Jain, Shilpi & Basu, Sriparna & Dwivedi, Yogesh K & Kaur, Sumeet, 2022. "Interactive voice assistants – Does brand credibility assuage privacy risks?," Journal of Business Research, Elsevier, vol. 139(C), pages 701-717.
    8. Xu, Ying & Niu, Nan & Zhao, Zixiang, 2023. "Dissecting the mixed effects of human-customer service chatbot interaction on customer satisfaction: An explanation from temporal and conversational cues," Journal of Retailing and Consumer Services, Elsevier, vol. 74(C).
    9. Kang, Weiyao & Shao, Bingjia, 2023. "The impact of voice assistants’ intelligent attributes on consumer well-being: Findings from PLS-SEM and fsQCA," Journal of Retailing and Consumer Services, Elsevier, vol. 70(C).
    10. Gao, Wei & Jiang, Ning & Guo, Qingqing, 2023. "How do virtual streamers affect purchase intention in the live streaming context? A presence perspective," Journal of Retailing and Consumer Services, Elsevier, vol. 73(C).

    More about this item

    Keywords

artificial intelligence; biased artificial intelligence; algorithmic fairness; IT-audit; ethical AI

    JEL classification:

    • C30 - Mathematical and Quantitative Methods - - Multiple or Simultaneous Equation Models; Multiple Variables - - - General
    • D83 - Microeconomics - - Information, Knowledge, and Uncertainty - - - Search; Learning; Information and Knowledge; Communication; Belief; Unawareness
    • M00 - Business Administration and Business Economics; Marketing; Accounting; Personnel Economics - - General - - - General


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:jis:ejistu:y:2023:i:02:id:526. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form .

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Alina Popescu (email available below). General contact details of provider: https://edirc.repec.org/data/frasero.html .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.