Abstract
The increasing integration of Information Systems (IS) based on Artificial Intelligence (AI) into diverse societal and organizational domains has made algorithmic accountability a critical concern in IS research and practice. As these systems assume greater roles in high-stakes decision-making, such as in healthcare, finance, and criminal justice, they raise pressing questions about ethics, governance, and, ultimately, algorithmic accountability. Algorithmic accountability aims to clarify who is obligated to justify the design, use, and outcomes of AI systems and who bears responsibility for their potential negative consequences. While policymakers, organizations, and the public emphasize the need for algorithmic accountability, much of the existing discourse has remained largely conceptual, raising the question of how algorithmic accountability and perceptions of it materialize in practice and what concrete effects they have. Understanding these manifestations is crucial, particularly with respect to AI developers, who directly shape AI design and whose accountability perceptions influence their development decisions. Against this backdrop, this dissertation examines how accountability triggers foster accountability perceptions among AI developers, how these perceptions manifest, and how they influence AI developers' behavior in AI systems development. The findings reveal that direct indications, such as accountability arguments embedded in IS engineering tools, effectively evoke accountability perceptions among AI developers. However, these perceptions are not uniform but rather multifaceted, varying in intensity and reference points. While they often lead AI developers to favor more cautious designs of AI systems, unclear accountability attributions can negatively impact work-related affective states.
These insights highlight the importance of designing algorithmic accountability mechanisms that trigger accountability perceptions and clarify their scope and implications, ensuring both responsible AI systems development and sustainable work environments for AI developers. This dissertation consists of four peer-reviewed articles (Articles A–D) that address the socio-technical and behavioral dimensions of algorithmic accountability in AI systems development. The first part of this dissertation explores how organizations can trigger and shape accountability perceptions. Given the limitations of established governance mechanisms such as AI principles and AI audits, Article A introduces accountability arguments as embedded accountability triggers within IS engineering tools. Using a mixed-method research approach, the article demonstrates that AI developers differentiate between accountability perceptions related to development processes (process accountability) and those concerning the outcomes of AI systems (outcome accountability). The findings reveal that process accountability is more immediately perceived, while outcome accountability requires targeted interventions to be internalized equally effectively by AI developers. These insights advance IS research by conceptualizing accountability arguments as a dynamic governance mechanism that actively shapes AI developers' accountability perceptions in AI systems development. The second part of this dissertation examines how different forms of accountability perceptions manifest among AI developers. Through an online survey, Article B highlights the consequences of incongruence in intrapersonal accountability perceptions, differentiating between self-attributed accountability and others-attributed accountability, i.e., accountability assigned by others.
The article demonstrates that misalignment between these perceptions increases role ambiguity and reduces job satisfaction, underscoring the need for clear and transparent algorithmic accountability communication within organizations. Through qualitative interviews, Article C further refines this understanding by distinguishing between two conceptualizations of algorithmic accountability: one as an intrinsic ethical virtue shaping AI developers' decision-making and the other as an external governance mechanism ensuring compliance with organizational and regulatory standards. The findings reveal that AI developers' ethical orientations influence whether they proactively integrate algorithmic accountability into their decision-making or adopt a more reactive, compliance-driven approach. This differentiation is essential for organizations seeking to cultivate a shared algorithmic accountability culture within AI systems development teams. The third part of this dissertation explores how accountability perceptions shape AI developers' behavior, particularly in relation to AI design. While prior IS research has predominantly focused on the effects of accountability perceptions on users' behavior, Article D shifts the focus to AI developers as decision-makers by employing a scenario-based survey, revealing that heightened accountability perceptions lead to more cautious and risk-averse AI design preferences. AI developers who perceive strong accountability tend to reduce AI systems' autonomy and inscrutability while prioritizing their learnability. This article advances IS research by demonstrating that algorithmic accountability is not only a governance mechanism but also a factor that actively shapes AI design. These findings call for organizations to carefully balance algorithmic accountability mandates with innovation goals, as excessive algorithmic accountability pressure may constrain exploratory design decisions.
Taken together, the articles in this dissertation contribute to IS research by providing a more holistic understanding of how accountability triggers evoke accountability perceptions among AI developers, how these perceptions take shape in diverse and multifaceted ways, and how they ultimately influence AI systems development practices and decision-making. In doing so, this dissertation conceptualizes algorithmic accountability as a multi-layered construct, examining how AI developers internalize algorithmic accountability, how inconsistencies in accountability perceptions affect work-related affective states, and how these perceptions shape AI developers’ behavior. By differentiating between process and outcome accountability within AI systems development, self- and others-attributed accountability, and algorithmic accountability as a virtue versus a mechanism, this dissertation advances a more nuanced perspective on algorithmic accountability and its broader implications. These insights lay the groundwork for future IS research on algorithmic accountability as a dynamic and evolving governance mechanism within IS development practices. From a practical perspective, this dissertation offers valuable guidance for organizations and policymakers. For organizations, the findings suggest that integrating embedded algorithmic accountability interventions into development workflows can enhance clarity and consistency in algorithmic accountability communication, helping to minimize perceptual misalignment among AI developers. Rather than merely imposing mandates, effective algorithmic accountability frameworks must actively shape how algorithmic accountability is understood, internalized, and applied in practice, ensuring that AI developers engage with it as an embedded and actionable aspect of their work. 
For policymakers, this dissertation underscores that regulatory approaches must not only mandate algorithmic accountability but also consider how AI developers perceive and internalize these requirements. Ambiguously framed algorithmic accountability mandates risk creating unintended and potentially counterproductive consequences, as unclear understandings of algorithmic accountability may undermine AI developers' ability to adhere to algorithmic accountability standards in practice. These findings call for closer collaboration between researchers, organizations, and policymakers to ensure that algorithmic accountability remains both theoretically sound and practically implementable. Future IS research should explore how accountability perceptions evolve over time, how interactions between AI stakeholders shape algorithmic accountability, and how algorithmic accountability mechanisms influence AI system adoption and long-term societal outcomes. Ultimately, this dissertation lays the groundwork for developing more effective governance strategies for AI systems, enabling organizations to proactively shape accountability perceptions and ensuring that AI systems are not only technically advanced but also aligned with ethical and societal expectations.
Suggested Citation
Schmidt, Jan-Hendrik, 2025. "Algorithmic Accountability: An Analysis of AI Developers' Perceptions and Behavioral Responses," Publications of Darmstadt Technical University, Institute for Business Studies (BWL) 155811, Darmstadt Technical University, Department of Business Administration, Economics and Law, Institute for Business Studies (BWL).
Handle: RePEc:dar:wpaper:155811
Note: for complete metadata visit http://tubiblio.ulb.tu-darmstadt.de/155811/