Printed from https://ideas.repec.org/a/kap/jcopol/v48y2025i3d10.1007_s10603-025-09586-1.html

A Right to Constructive Optimization: A Public Interest Approach to Recommender Systems in the Digital Services Act

Authors

Listed:

  • L. Naudts (University of Amsterdam; KU Leuven)
  • N. Helberger (University of Amsterdam)
  • M. Veale (University of Amsterdam; University College London)
  • M. Sax (University of Amsterdam)
Abstract

The technological promise of recommender systems should not be misused by those with decisional power over the infrastructural, data, and knowledge resources needed for their design. The ideal of personalization should not mask self-serving optimization. Instead, we propose that people, not only in their capacity as consumers but, more generally, as democratic citizens, have a legitimate claim to ensure that very large online platforms (or VLOPs) respect their interests within optimization processes through the content policy strategies and recommendation technologies they employ. To this end, this paper argues for, and develops, a right to constructive optimization that promotes people’s effective enjoyment of fundamental rights and civic values in digital settings. The argument is structured as follows. First, the paper strengthens the claim that the largest online platforms perform a public function (although this is not the only way such functions can be performed). Second, drawing from the philosophy of Iris Marion Young, the paper identifies self-determination and self-development as key values recommenders should promote as part of this crucial function under conditions of inclusivity, political equality, reasonableness, and publicity. Third, after critiquing the EU Digital Services Act’s approach to regulating the function recommenders hold, the paper concretizes the right to constructive optimization as an alternative normative benchmark and uses it as an interpretative lens to enrich ongoing legal initiatives.

Suggested Citation

  • L. Naudts & N. Helberger & M. Veale & M. Sax, 2025. "A Right to Constructive Optimization: A Public Interest Approach to Recommender Systems in the Digital Services Act," Journal of Consumer Policy, Springer, vol. 48(3), pages 269-296, September.
  • Handle: RePEc:kap:jcopol:v:48:y:2025:i:3:d:10.1007_s10603-025-09586-1
    DOI: 10.1007/s10603-025-09586-1

    Download full text from publisher

    File URL: http://link.springer.com/10.1007/s10603-025-09586-1
    File Function: Abstract
    Download Restriction: Access to full text is restricted to subscribers.

    File URL: https://libkey.io/10.1007/s10603-025-09586-1?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to a copy you can access through your library subscription.

    As access to this document is restricted, you may want to search for a different version of it.

    References listed on IDEAS

    1. Mahieu, René L. P. & Asghari, Hadi & van Eeten, Michel, 2018. "Collectively exercising the right of access: individual effort, societal effect," Internet Policy Review: Journal on Internet Regulation, Alexander von Humboldt Institute for Internet and Society (HIIG), Berlin, vol. 7(3), pages 1-23.
    2. Edwards, Lilian & Veale, Michael, 2017. "Slave to the Algorithm? Why a 'right to an explanation' is probably not the remedy you are looking for," LawRxiv 97upg, Center for Open Science.
    3. Rieder, Bernhard & Hofmann, Jeanette, 2020. "Towards platform observability," Internet Policy Review: Journal on Internet Regulation, Alexander von Humboldt Institute for Internet and Society (HIIG), Berlin, vol. 9(4), pages 1-28.
    Full references (including those not matched with items on IDEAS)

    Most related items

    These are the items that most often cite the same works as, and are cited by the same works as, this item.
    1. Volosevici Dana, 2025. "Human Resources and GDPR Compliance: Lessons from Romanian Data Protection Case Law on Workplace Privacy," Proceedings of the International Conference on Business Excellence, Sciendo, vol. 19(1), pages 4329-4344.
    2. van de Kerkhof, Jacob, 2025. "Article 22 Digital Services Act: Building trust with trusted flaggers," Internet Policy Review: Journal on Internet Regulation, Alexander von Humboldt Institute for Internet and Society (HIIG), Berlin, vol. 14(1), pages 1-26.
    3. Duan Bo & Aini Azeqa Marof & Zeinab Zaremohzzabieh, 2025. "The Influence of Negative Stereotypes in Science Fiction and Fantasy on Public Perceptions of Artificial Intelligence: A Systematic Review," Studies in Media and Communication, Redfame publishing, vol. 13(1), pages 180-190, March.
    4. Daniela Sele & Marina Chugunova, 2024. "Putting a human in the loop: Increasing uptake, but decreasing accuracy of automated decision-making," PLOS ONE, Public Library of Science, vol. 19(2), pages 1-14, February.
    5. Robert Epstein & Alex Flores, 2024. "The Video Manipulation Effect (VME): A quantification of the possible impact that the ordering of YouTube videos might have on opinions and voting preferences," PLOS ONE, Public Library of Science, vol. 19(11), pages 1-25, November.
    6. König, Pascal D. & Wenzelburger, Georg, 2021. "The legitimacy gap of algorithmic decision-making in the public sector: Why it arises and how to address it," Technology in Society, Elsevier, vol. 67(C).
    7. Vasiliki Koniakou, 2023. "From the “rush to ethics” to the “race for governance” in Artificial Intelligence," Information Systems Frontiers, Springer, vol. 25(1), pages 71-102, February.
    8. Ye, Xiongbiao & Yan, Yuhong & Li, Jia & Jiang, Bo, 2024. "Privacy and personal data risk governance for generative artificial intelligence: A Chinese perspective," Telecommunications Policy, Elsevier, vol. 48(10).
    9. Koefer, Franziska & Lemken, Ivo & Pauls, Jan, 2023. "Fairness in algorithmic decision systems: A microfinance perspective," EIF Working Paper Series 2023/88, European Investment Fund (EIF).
    10. Hazel Si Min Lim & Araz Taeihagh, 2019. "Algorithmic Decision-Making in AVs: Understanding Ethical and Technical Concerns for Smart Cities," Sustainability, MDPI, vol. 11(20), pages 1-28, October.
    11. Buhmann, Alexander & Fieseler, Christian, 2021. "Towards a deliberative framework for responsible innovation in artificial intelligence," Technology in Society, Elsevier, vol. 64(C).
    12. Veale, Michael & Binns, Reuben & Van Kleek, Max, 2018. "Some HCI Priorities for GDPR-Compliant Machine Learning," LawRxiv wm6yk, Center for Open Science.
    13. Cobbe, Jennifer & Veale, Michael & Singh, Jatinder, 2023. "Understanding Accountability in Algorithmic Supply Chains," SocArXiv p4sey, Center for Open Science.
    14. Kirsten Martin & Ari Waldman, 2023. "Are Algorithmic Decisions Legitimate? The Effect of Process and Outcomes on Perceptions of Legitimacy of AI Decisions," Journal of Business Ethics, Springer, vol. 183(3), pages 653-670, March.
    15. Gorwa, Robert, 2019. "What is Platform Governance?," SocArXiv fbu27, Center for Open Science.
    16. Vesnic-Alujevic, Lucia & Nascimento, Susana & Pólvora, Alexandre, 2020. "Societal and ethical impacts of artificial intelligence: Critical notes on European policy frameworks," Telecommunications Policy, Elsevier, vol. 44(6).
    17. Veale, Michael, 2017. "Logics and practices of transparency and opacity in real-world applications of public sector machine learning," SocArXiv 6cdhe, Center for Open Science.
    18. Söderlund, Kasia & Engström, Emma & Haresamudram, Kashyap & Larsson, Stefan & Strimling, Pontus, 2024. "Regulating high-reach AI: On transparency directions in the Digital Services Act," Internet Policy Review: Journal on Internet Regulation, Alexander von Humboldt Institute for Internet and Society (HIIG), Berlin, vol. 13(1), pages 1-31.
    19. Tobias D. Krafft & Katharina A. Zweig & Pascal D. König, 2022. "How to regulate algorithmic decision‐making: A framework of regulatory requirements for different applications," Regulation & Governance, John Wiley & Sons, vol. 16(1), pages 119-136, January.


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:kap:jcopol:v:48:y:2025:i:3:d:10.1007_s10603-025-09586-1. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do so here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

    If CitEc recognized a bibliographic reference but did not link an item in RePEc to it, you can help with this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.