Printed from https://ideas.repec.org/p/osf/lawarx/wm6yk.html

Some HCI Priorities for GDPR-Compliant Machine Learning

Authors
  • Veale, Michael
  • Binns, Reuben
  • Van Kleek, Max

Abstract

Cite as: Michael Veale, Reuben Binns and Max Van Kleek (2018) Some HCI Priorities for GDPR-Compliant Machine Learning. The General Data Protection Regulation: An Opportunity for the CHI Community? (CHI-GDPR 2018), Workshop at ACM CHI'18, 22 April 2018, Montreal, Canada.

In this short paper, we consider the roles of HCI in enabling the better governance of consequential machine learning systems using the rights and obligations laid out in the recent 2016 EU General Data Protection Regulation (GDPR), a law which involves heavy interaction with people and systems. Focussing on those areas that relate to algorithmic systems in society, we propose roles for HCI in legal contexts in relation to fairness, bias and discrimination; data protection by design; data protection impact assessments; transparency and explanations; the mitigation and understanding of automation bias; and the communication of envisaged consequences of processing.

Suggested Citation

  • Veale, Michael & Binns, Reuben & Van Kleek, Max, 2018. "Some HCI Priorities for GDPR-Compliant Machine Learning," LawArXiv wm6yk, Center for Open Science.
  • Handle: RePEc:osf:lawarx:wm6yk
    DOI: 10.31219/osf.io/wm6yk

    Download full text from publisher

    File URL: https://osf.io/download/5aafb81180f2d3000d5a38ae/
    Download Restriction: no

    File URL: https://libkey.io/10.31219/osf.io/wm6yk?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item.

    References listed on IDEAS

    1. Veale, Michael & Binns, Reuben, 2017. "Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data," SocArXiv ustxg, Center for Open Science.
    2. Edwards, Lilian & Veale, Michael, 2017. "Slave to the Algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for," LawArXiv 97upg, Center for Open Science.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as this one and are cited by the same works as this one.
    1. Veale, Michael & Van Kleek, Max & Binns, Reuben, 2018. "Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making," SocArXiv 8kvf4, Center for Open Science.
    2. Kira J.M. Matus & Michael Veale, 2022. "Certification systems for machine learning: Lessons from sustainability," Regulation & Governance, John Wiley & Sons, vol. 16(1), pages 177-196, January.
    3. Matus, Kira & Veale, Michael, 2021. "Certification Systems for Machine Learning: Lessons from Sustainability," SocArXiv pm3wy, Center for Open Science.
    4. König, Pascal D. & Wenzelburger, Georg, 2021. "The legitimacy gap of algorithmic decision-making in the public sector: Why it arises and how to address it," Technology in Society, Elsevier, vol. 67(C).
    5. Alina Köchling & Marius Claus Wehner, 2020. "Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development," Business Research, Springer;German Academic Association for Business Research, vol. 13(3), pages 795-848, November.
    6. Hazel Si Min Lim & Araz Taeihagh, 2019. "Algorithmic Decision-Making in AVs: Understanding Ethical and Technical Concerns for Smart Cities," Sustainability, MDPI, vol. 11(20), pages 1-28, October.
    7. Buhmann, Alexander & Fieseler, Christian, 2021. "Towards a deliberative framework for responsible innovation in artificial intelligence," Technology in Society, Elsevier, vol. 64(C).
    8. Cobbe, Jennifer & Veale, Michael & Singh, Jatinder, 2023. "Understanding Accountability in Algorithmic Supply Chains," SocArXiv p4sey, Center for Open Science.
    9. Kirsten Martin & Ari Waldman, 2023. "Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions," Journal of Business Ethics, Springer, vol. 183(3), pages 653-670, March.
    10. Vesnic-Alujevic, Lucia & Nascimento, Susana & Pólvora, Alexandre, 2020. "Societal and ethical impacts of artificial intelligence: Critical notes on European policy frameworks," Telecommunications Policy, Elsevier, vol. 44(6).
    11. Veale, Michael, 2017. "Logics and practices of transparency and opacity in real-world applications of public sector machine learning," SocArXiv 6cdhe, Center for Open Science.
    12. Mazur Joanna, 2019. "Automated Decision-Making and the Precautionary Principle in EU Law," TalTech Journal of European Studies, Sciendo, vol. 9(4), pages 3-18, December.
    13. Daniela Sele & Marina Chugunova, 2023. "Putting a Human in the Loop: Increasing Uptake, but Decreasing Accuracy of Automated Decision-Making," Rationality and Competition Discussion Paper Series 438, CRC TRR 190 Rationality and Competition.
    14. Frederik Zuiderveen Borgesius & Joost Poort, 2017. "Online Price Discrimination and EU Data Privacy Law," Journal of Consumer Policy, Springer, vol. 40(3), pages 347-366, September.
    15. Larisa Găbudeanu & Iulia Brici & Codruța Mare & Ioan Cosmin Mihai & Mircea Constantin Șcheau, 2021. "Privacy Intrusiveness in Financial-Banking Fraud Detection," Risks, MDPI, vol. 9(6), pages 1-22, June.
    16. Rolf H. Weber, 2021. "Artificial Intelligence ante portas: Reactions of Law," J, MDPI, vol. 4(3), pages 1-14, September.
    17. I. Ooijen & Helena U. Vrabec, 2019. "Does the GDPR Enhance Consumers’ Control over Personal Data? An Analysis from a Behavioural Perspective," Journal of Consumer Policy, Springer, vol. 42(1), pages 91-107, March.
    18. Janssen, Patrick & Sadowski, Bert M., 2021. "Bias in Algorithms: On the trade-off between accuracy and fairness," 23rd ITS Biennial Conference, Online Conference / Gothenburg 2021. Digital societies and industrial transformations: Policies, markets, and technologies in a post-Covid world 238032, International Telecommunications Society (ITS).
    19. Irene Unceta & Jordi Nin & Oriol Pujol, 2020. "Risk mitigation in algorithmic accountability: The role of machine learning copies," PLOS ONE, Public Library of Science, vol. 15(11), pages 1-26, November.
    20. Vasiliki Koniakou, 2023. "From the “rush to ethics” to the “race for governance” in Artificial Intelligence," Information Systems Frontiers, Springer, vol. 25(1), pages 71-102, February.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:osf:lawarx:wm6yk. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: OSF (email available below). General contact details of provider: https://osf.io/preprints/lawarxiv/discover.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.