
On the Steganographic Capacity of Selected Learning Models

In: Machine Learning, Deep Learning and AI for Cybersecurity

Author

Listed:
  • Rishit Agrawal (San Jose State University)
  • Kelvin Jou (San Jose State University)
  • Tanush Obili (San Jose State University)
  • Daksh Parikh (San Jose State University)
  • Samarth Prajapati (San Jose State University)
  • Yash Seth (San Jose State University)
  • Charan Sridhar (San Jose State University)
  • Nathan Zhang (San Jose State University)
  • Mark Stamp (San Jose State University)

Abstract

Machine learning and deep learning models are potential vectors for various attack scenarios. For example, previous research has shown that malware can be hidden in deep learning models. Hiding information in a learning model can be viewed as a form of steganography. In this research, we consider the general question of the steganographic capacity of learning models. Specifically, for a wide range of models, we determine the number of low-order bits of the trained parameters that can be overwritten, without adversely affecting model performance. For each model considered, we graph the accuracy as a function of the number of low-order bits that have been overwritten, and for selected models, we also analyze the steganographic capacity of individual layers. The models that we test include classic machine learning techniques, popular general deep learning models, pre-trained transfer learning-based models, and others. In all cases, we find that a majority of the bits of each trained parameter can be overwritten before the accuracy degrades. Of the models tested, the steganographic capacity ranges from 7.04 KB to 44.74 MB. We discuss the implications of our results and consider possible avenues for further research.
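To make the embedding idea concrete, the following Python sketch illustrates how payload bits might overwrite the n low-order bits of a float32 weight array, the kind of measurement the abstract describes. It is an illustrative assumption, not the authors' code; the function name embed_low_order_bits and the NumPy-based bit manipulation are hypothetical choices for this example.

    # A minimal sketch, assuming NumPy float32 weights. It overwrites the n
    # low-order bits of each trained parameter with payload bits; it is not
    # the authors' implementation.
    import numpy as np

    def embed_low_order_bits(weights, payload_bits, n):
        """Overwrite the n least-significant bits of each float32 weight.

        weights      : flat float32 array of trained parameters
        payload_bits : array of 0/1 values with at least weights.size * n entries
        n            : bits to overwrite per parameter (1..23 stays within the mantissa)
        """
        raw = weights.astype(np.float32).view(np.uint32).copy()
        mask = np.uint32((1 << n) - 1)
        # Pack n consecutive payload bits into one integer chunk per parameter.
        chunks = np.zeros(raw.size, dtype=np.uint32)
        for i in range(n):
            chunks |= payload_bits[i::n][:raw.size].astype(np.uint32) << np.uint32(i)
        # Clear the n low-order bits of each parameter and insert the payload chunk.
        raw = (raw & ~mask) | chunks
        return raw.view(np.float32)

    # Toy usage: hide random bits in the 16 low-order bits of 1,000 weights and
    # check how far the parameter values move.
    w = np.random.randn(1000).astype(np.float32)
    bits = np.random.randint(0, 2, size=w.size * 16)
    w_steg = embed_low_order_bits(w, bits, n=16)
    print("max absolute change:", np.max(np.abs(w_steg - w)))

In this sketch, accuracy-versus-bits curves of the kind described in the abstract would be obtained by re-evaluating the model on a test set after embedding with successively larger values of n.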

Suggested Citation

  • Rishit Agrawal & Kelvin Jou & Tanush Obili & Daksh Parikh & Samarth Prajapati & Yash Seth & Charan Sridhar & Nathan Zhang & Mark Stamp, 2025. "On the Steganographic Capacity of Selected Learning Models," Springer Books, in: Mark Stamp & Martin Jureček (ed.), Machine Learning, Deep Learning and AI for Cybersecurity, pages 457-491, Springer.
  • Handle: RePEc:spr:sprchp:978-3-031-83157-7_16
    DOI: 10.1007/978-3-031-83157-7_16

    Download full text from publisher

    To our knowledge, this item is not available for download from the publisher; another version may be available online.
