Author
Listed:
- Atul Gupta (Government of Canada)
- Dinesh Verma (IBM)
- Utpal Mangla (IBM)
Abstract
This study identifies five key categories of ethical considerations in AI systems: fairness and bias, trust and transparency, privacy and security, accountability, and social benefits. It proposes a framework for resolving accuracy-fairness trade-offs in AI use cases by leveraging Multi-Criteria Decision Making (MCDM) techniques: the Decision Making Trial and Evaluation Laboratory (DEMATEL) method is used to understand the complex relationships among ethical considerations, and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is used to identify ideal solutions. AI models can improve societal equality, but they also carry the risk of bias that increases inequality. The study therefore proposes a framework for analyzing the trade-off between the cost of creating an unbiased AI model and the delay in delivering societal benefits. Societal benefit is modeled as an exponentially decaying function, and inequality is modeled with a value distribution model; several inequality measures are defined, including the Gini index, the 20:20 index, and the Palma ratio. In the temporal model of societal benefits, the total value delivered by a technology at time t is represented as an exponentially decaying function, and three types of value are distinguished: the value generated without AI, with a biased AI model, and with a fair AI model. The opportunity cost of developing a fair model is represented by the integral of the value generated with a biased AI model from 0 to the time taken to develop the fair model. For modeling inequality, a value distribution model is used in which the cumulative distribution function (CDF) of value distributed across society is defined by $$f(x) = x^{g}$$, where x is the percentage of society and g is a parameter that defines the CDF. Social value is then modeled using the parameters that define the value models and the opportunity cost of obtaining a fair model, and the indices of unfairness are modeled and analyzed to determine the combinations that gain or lose on the unfairness index. The framework determines the conditions under which society might be better off using a biased AI model immediately and those under which society might be better off waiting for an unbiased AI model.
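To make the MCDM step concrete, the following is a minimal TOPSIS sketch in Python. It is illustrative only: the candidate models, the criteria (accuracy treated as a benefit criterion, bias as a cost criterion), the weights, and the scores are assumptions invented for demonstration, not values from the chapter.

```python
import numpy as np

def topsis(matrix: np.ndarray, weights: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Return the TOPSIS closeness coefficient of each alternative (row of `matrix`).

    matrix  : alternatives x criteria score matrix
    weights : criterion weights summing to 1
    benefit : True where a higher score is better, False where lower is better
    """
    # Vector-normalize each criterion column, then apply the weights.
    norm = matrix / np.linalg.norm(matrix, axis=0)
    weighted = norm * weights
    # Ideal and anti-ideal solutions, respecting each criterion's direction.
    ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
    anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
    # Euclidean distances to both, combined into a closeness coefficient in [0, 1].
    d_pos = np.linalg.norm(weighted - ideal, axis=1)
    d_neg = np.linalg.norm(weighted - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical example: three candidate AI models scored on accuracy (benefit)
# and measured bias (cost); the numbers below are made up for illustration.
scores = np.array([[0.92, 0.30],   # biased but most accurate
                   [0.85, 0.10],   # fairer, slightly less accurate
                   [0.80, 0.05]])  # fairest, least accurate
weights = np.array([0.6, 0.4])
benefit = np.array([True, False])
print(topsis(scores, weights, benefit))  # higher = closer to the ideal solution
```

The closeness coefficients rank the alternatives; how the weights are set (for example, informed by the DEMATEL relationship analysis) is a separate modeling choice.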
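The value and inequality models described in the abstract can likewise be sketched directly from their definitions. The Python below is a minimal reading of those definitions, not the authors' implementation: the parameter names (v0, lam, g, t_fair), the closed-form evaluation of the opportunity-cost integral, and the treatment of f(x) = x^g as a Lorenz-style curve for the Gini, 20:20, and Palma indices are all assumptions made for illustration.

```python
import math

def value_rate(t: float, v0: float = 1.0, lam: float = 0.1) -> float:
    """Instantaneous societal value delivered at time t, modeled as exponential decay."""
    return v0 * math.exp(-lam * t)

def opportunity_cost_of_fairness(t_fair: float, v0: float = 1.0, lam: float = 0.1) -> float:
    """Value forgone while waiting for a fair model: the integral of the
    biased-model value rate from 0 to t_fair (closed form of the integral)."""
    return (v0 / lam) * (1.0 - math.exp(-lam * t_fair))

def gini_index(g: float) -> float:
    """Gini index implied by f(x) = x**g, read as a Lorenz-style curve:
    G = 1 - 2 * integral_0^1 x**g dx = 1 - 2 / (g + 1)."""
    return 1.0 - 2.0 / (g + 1.0)

def palma_ratio(g: float) -> float:
    """Palma ratio: value share of the top 10% over that of the bottom 40%."""
    return (1.0 - 0.9 ** g) / (0.4 ** g)

def ratio_20_20(g: float) -> float:
    """20:20 index: value share of the richest 20% over that of the poorest 20%."""
    return (1.0 - 0.8 ** g) / (0.2 ** g)

if __name__ == "__main__":
    print("Opportunity cost of waiting t_fair = 5:",
          round(opportunity_cost_of_fairness(5.0), 3))
    for g in (1.0, 2.0, 5.0):
        print(f"g={g}: Gini={gini_index(g):.3f}, "
              f"Palma={palma_ratio(g):.2f}, 20:20={ratio_20_20(g):.2f}")
```

Under this reading, g = 1 corresponds to a perfectly equal distribution (Gini 0), and larger g concentrates value in a smaller share of society, which is what the trade-off analysis weighs against the opportunity cost of waiting for a fair model.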
Suggested Citation
Atul Gupta & Dinesh Verma & Utpal Mangla, 2025.
"Ethical Considerations in AI-Enabled Services,"
Progress in IS, in: Shaun West & Jürg Meierhofer & Thierry Buecheler & Giulia Wally Scurati (ed.), Smart Services Summit, pages 3-20,
Springer.
Handle:
RePEc:spr:prochp:978-3-031-86958-7_1
DOI: 10.1007/978-3-031-86958-7_1