Abstract
In the past few years, there has been significant progress in the field of artificial intelligence (AI), with advances in areas such as natural language processing and machine learning. AI systems are now used in industries and applications ranging from healthcare to finance and are becoming more sophisticated and capable of handling complex tasks. The technology has the potential to assist in both private and professional decision-making. However, challenges remain, such as ensuring transparency and accountability in AI decision-making processes and addressing issues of bias and ethics, and it is not yet certain whether these newly developed AI-based services will be accepted and used. This thesis addresses a research gap in the field of AI-based services by exploring the acceptance and utilization of such services from both individual and organizational perspectives. The research examines the factors that influence the acceptance of AI-based services and investigates users' perceptions of these services. The thesis poses four research questions: identifying the differences in utilizing AI-based services compared to human-based services for decision-making, identifying characteristics of acceptance and utilization across different user groups, prioritizing methods for promoting trust in AI-based services, and exploring the impact of AI-based services on an organization's knowledge. To answer these questions, the thesis employs a range of research methods, including surveys, experiments, interviews, and simulations, across five research papers. Research paper A focused on an organization that offers a robo-advisor, specifically a financial robo-advisor, as an AI-based service. It measures advice-taking behavior in interactions with robo-advisors based on the judge-advisor system and task-technology fit frameworks.
The results show that the advice of robo-advisors is followed more than that of human advisors, and that this behavior is reflected in the task-advisor fit. Interestingly, the advisor's perceived expertise is the most influential factor in the task-advisor fit for both robo-advisors and human advisors. However, integrity is significant only for human advisors, while the user's perception of the ability to make decisions efficiently is significant only for robo-advisors. Research paper B examined the differences in advice utilization between AI-based and human advisors and explored the relationship between task, advisor, and advice utilization using the task-advisor fit, as in research paper A, but in the context of a guessing game. It also analyzed the impact of advice similarity on utilization. The results indicate that judges use advice from AI-based advisors more than advice from human advisors when the advice is similar to their own estimate; when the advice differs greatly from their estimate, the utilization rate is equal for both. Research paper C investigated the different needs of user groups in the context of health chatbots. The growing number of older individuals who require considerable medical attention could be served by health chatbots capable of identifying diseases based on symptoms, yet existing chatbot applications are used primarily by younger generations. This research paper therefore investigated the factors affecting the adoption of health chatbots by older people, drawing on the extended Unified Theory of Acceptance and Use of Technology. To investigate how to promote AI-based services such as robo-advisors, research paper D evaluated the effectiveness of eleven measures to increase trust in AI-based advisory systems and found that noncommittal testing was the most effective, while implementing human traits had negligible effects.
Additionally, the relative advantage of AI-based advising over advice from human experts was measured in the context of financial planning; the results suggest that convenience is the advantage users perceive as most important. To analyze the impact of AI-based services on an organization's knowledge state, research paper E explored how organizations can effectively coordinate human and machine learning (ML). The results showed that ML can decrease an organization's need for humans' explorative learning. The findings also demonstrated that adjustments made by humans to ML systems are often beneficial but can become harmful under certain conditions. Moreover, relying on knowledge created by ML systems can facilitate organizational learning in turbulent environments, but it requires significant initial setup and coordination with humans. These findings offer new perspectives on organizational learning with ML and can guide organizations in allocating resources for effective learning. In summary, the findings suggest that the acceptance and utilization of AI-based services are influenced by the fit between the task and the service. However, organizations must carefully consider the user market and prioritize mechanisms that increase acceptance. Additionally, the implementation of AI-based services can positively affect an organization's ability to choose learning strategies and navigate turbulent environments, but it is crucial for humans to maintain domain knowledge of the task so that they can reconfigure such services. This thesis enhances our understanding of the acceptance and utilization of AI-based services and provides valuable insights into how organizations can increase customers' acceptance and usage of their AI-based services, as well as implement and use AI-based services effectively.
Suggested Citation
Mesbah, Neda, 2023.
"Following the Robot – Investigating the Utilization and the Acceptance of AI-based Services,"
Publications of Darmstadt Technical University, Institute for Business Studies (BWL)
142023, Darmstadt Technical University, Department of Business Administration, Economics and Law, Institute for Business Studies (BWL).
Handle:
RePEc:dar:wpaper:142023
Note: for complete metadata visit http://tubiblio.ulb.tu-darmstadt.de/142023/