
Artificial Intelligence, Data, Ethics: An Holistic Approach for Risks and Regulation

Author

  • Alexis Bogroff
  • Dominique Guégan

Abstract

An extensive list of risks related to big data frameworks and their use in artificial intelligence models is provided, along with measurements and implementable solutions. Bias, interpretability and ethics are studied in depth, with several interpretations from the points of view of developers, companies and regulators. Our reflections suggest that fragmented frameworks increase the risks of model misspecification, opacity and bias in the results; domain experts and statisticians need to be involved throughout the process, as the business objective must drive each decision from the data extraction step to the final actionable prediction. We propose a holistic and original approach that takes into account the risks encountered throughout the implementation of systems using artificial intelligence, from the choice of the data and the selection of the algorithm to the decision making.
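The paper treats such measurements at a conceptual level. As a purely illustrative sketch, not taken from the paper, one common bias measurement is the demographic parity difference between two groups; a minimal Python version, assuming binary predictions and a binary protected attribute, could look like this:

    # Illustrative sketch only (not from the paper): demographic parity difference,
    # i.e. the gap in positive-prediction rates between two groups.
    import numpy as np

    def demographic_parity_difference(y_pred, group):
        """Absolute gap in positive-prediction rates between groups 0 and 1."""
        y_pred = np.asarray(y_pred)
        group = np.asarray(group)
        rate_0 = y_pred[group == 0].mean()  # positive rate in group 0
        rate_1 = y_pred[group == 1].mean()  # positive rate in group 1
        return abs(rate_0 - rate_1)

    # Hypothetical data: predictions for eight individuals in two groups.
    y_pred = [1, 0, 1, 1, 0, 0, 0, 1]
    group  = [0, 0, 0, 0, 1, 1, 1, 1]
    print(demographic_parity_difference(y_pred, group))  # 0.5, a large disparity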

Suggested Citation

  • Alexis Bogroff & Dominique Guégan, 2019. "Artificial Intelligence, Data, Ethics: An Holistic Approach for Risks and Regulation," Documents de travail du Centre d'Economie de la Sorbonne 19012, Université Panthéon-Sorbonne (Paris 1), Centre d'Economie de la Sorbonne.
  • Handle: RePEc:mse:cesdoc:19012

    Download full text from publisher

    File URL: ftp://mse.univ-paris1.fr/pub/mse/CES2019/19012.pdf
    Download Restriction: no

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Dominique Guégan, 2020. "A Note on the Interpretability of Machine Learning Algorithms," Documents de travail du Centre d'Economie de la Sorbonne 20012, Université Panthéon-Sorbonne (Paris 1), Centre d'Economie de la Sorbonne.
    2. Dominique Guegan, 2020. "A Note on the Interpretability of Machine Learning Algorithms," Post-Print halshs-02900929, HAL.
    3. Dominique Guégan, 2020. "A Note on the Interpretability of Machine Learning Algorithms," Working Papers 2020:20, Department of Economics, University of Venice "Ca' Foscari".
    4. Dominique Guegan, 2020. "A Note on the Interpretability of Machine Learning Algorithms," Université Paris1 Panthéon-Sorbonne (Post-Print and Working Papers) halshs-02900929, HAL.

    More about this item

    Keywords

    Artificial Intelligence; Bias; Big Data; Ethics; Governance; Interpretability; Regulation; Risk;

    JEL classification:

    • C4 - Mathematical and Quantitative Methods - - Econometric and Statistical Methods: Special Topics
    • C5 - Mathematical and Quantitative Methods - - Econometric Modeling
    • C6 - Mathematical and Quantitative Methods - - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling
    • C8 - Mathematical and Quantitative Methods - - Data Collection and Data Estimation Methodology; Computer Programs
    • D8 - Microeconomics - - Information, Knowledge, and Uncertainty
    • G28 - Financial Economics - - Financial Institutions and Services - - - Government Policy and Regulation
    • G38 - Financial Economics - - Corporate Finance and Governance - - - Government Policy and Regulation
    • K2 - Law and Economics - - Regulation and Business Law

    NEP fields

    This paper has been announced in the following NEP Reports:

