
Machine unlearning for generative AI

Author

Listed:
  • Viswanath, Yashaswini

    (Resident Researcher, Business School of AI, USA)

  • Jamthe, Sudha

    (Technology Futurist, Business School of AI, Global South in AI, Stanford Continuing Studies, Barcelona Technology School, Elisava School of Engineering and Design, USA)

  • Lokiah, Suresh

    (Senior Engineering Manager, Zebra Technologies, USA)

  • Bianchini, Emanuele

    (Senior Director, Technology & Innovation, Consumer Technology Group, Flex, USA)

Abstract

This paper introduces a new field of AI research called machine unlearning and examines the challenges and approaches involved in extending machine unlearning to generative AI (GenAI). Machine unlearning is a model-driven approach to making an existing artificial intelligence (AI) model unlearn a set of data from its training. Machine unlearning is becoming important for businesses that must comply with privacy laws such as the General Data Protection Regulation (GDPR) and its customer right to be forgotten, manage security and remove bias that AI models learn from their training data, because it is expensive to retrain and redeploy models without the biased, security-compromising or privacy-compromising data. This paper presents the state of the art in machine unlearning approaches such as exact unlearning, approximate unlearning, zero-shot learning (ZSL) and fast and efficient unlearning. The paper highlights the challenges in applying machine unlearning to GenAI, which is built on a transformer architecture of neural networks and adds further opaqueness to how large language models (LLMs) learn during pre-training, fine-tuning, transfer learning to more languages and inference. The paper elaborates on how models retain what they learn in a neural network, in order to guide the various machine unlearning approaches for GenAI that the authors hope others can build upon. The paper suggests possible future directions of research to create transparency in LLMs, and in particular looks at hallucinations in LLMs when they are extended with ZSL to do machine translation for new languages beyond their training, to shed light on how the model stores its learning of newer languages in its memory and how it draws upon it during inference in GenAI applications. Finally, the paper calls for collaborations on future research in machine unlearning for GenAI, particularly LLMs, to add transparency and inclusivity to language AI.
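
As a concrete illustration of one of the approaches named above, the sketch below shows a generic formulation of approximate unlearning: gradient ascent on the loss over the data to be forgotten, interleaved with ordinary gradient descent on retained data so that the rest of the model's behaviour is preserved. This is not the method described in the paper; the model, data loaders, loss function and hyperparameters are hypothetical placeholders.

    # Illustrative sketch of approximate unlearning, assuming a PyTorch
    # classifier-style model. Not the paper's method; the names and values
    # below (forget_loader, retain_loader, lr, steps) are placeholders.
    import torch

    def approximate_unlearn(model, forget_loader, retain_loader, loss_fn,
                            lr=1e-5, steps=100, device="cpu"):
        model.to(device).train()
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        retain_iter = iter(retain_loader)
        for _, (x_f, y_f) in zip(range(steps), forget_loader):
            # Ascend the loss on a "forget" batch: push the model away from
            # the data it is supposed to unlearn (note the negated loss).
            optimizer.zero_grad()
            (-loss_fn(model(x_f.to(device)), y_f.to(device))).backward()
            optimizer.step()

            # Descend the loss on a "retain" batch to limit collateral damage
            # to everything the model should still know.
            try:
                x_r, y_r = next(retain_iter)
            except StopIteration:
                retain_iter = iter(retain_loader)
                x_r, y_r = next(retain_iter)
            optimizer.zero_grad()
            loss_fn(model(x_r.to(device)), y_r.to(device)).backward()
            optimizer.step()
        return model

In practice the ascent step is usually bounded, for example by gradient clipping or a penalty against the original model's outputs, so that forgetting the target data does not degrade the model wholesale.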

Suggested Citation

  • Viswanath, Yashaswini & Jamthe, Sudha & Lokiah, Suresh & Bianchini, Emanuele, 2024. "Machine unlearning for generative AI," Journal of AI, Robotics & Workplace Automation, Henry Stewart Publications, vol. 3(1), pages 37-46, September.
  • Handle: RePEc:aza:airwa0:y:2024:v:3:i:1:p:37-46

    Download full text from publisher

    File URL: https://hstalks.com/article/8325/download/
    Download Restriction: Requires a paid subscription for full access.

    File URL: https://hstalks.com/article/8325/
    Download Restriction: Requires a paid subscription for full access.

As access to this document is restricted, you may want to search for a different version of it.

    More about this item

    Keywords

machine unlearning; privacy; right to be forgotten; generative AI; fine-tuning; large language models; LLM; zero-shot learning; explainability;

    JEL classification:

    • M15 - Business Administration and Business Economics; Marketing; Accounting; Personnel Economics - - Business Administration - - - IT Management
    • G2 - Financial Economics - - Financial Institutions and Services


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:aza:airwa0:y:2024:v:3:i:1:p:37-46. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help add them by using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Henry Stewart Talks (email available below).

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.