
Automatic Compression of Neural Network with Deep Reinforcement Learning Based on Proximal Gradient Method

Author

Listed:
  • Mingyi Wang

    (School of Automation, Guangdong University of Technology, Guangzhou 510006, China
    Guangdong-Hong Kong-Macao Joint Laboratory for Smart Discrete Manufacturing, Guangzhou 510006, China)

  • Jianhao Tang

    (School of Automation, Guangdong University of Technology, Guangzhou 510006, China
    Guangdong-Hong Kong-Macao Joint Laboratory for Smart Discrete Manufacturing, Guangzhou 510006, China)

  • Haoli Zhao

    (School of Automation, Guangdong University of Technology, Guangzhou 510006, China
    111 Center for Intelligent Batch Manufacturing Based on IoT Technology (GDUT), Guangzhou 510006, China)

  • Zhenni Li

    (School of Automation, Guangdong University of Technology, Guangzhou 510006, China
    Guangdong-Hong Kong-Macao Joint Laboratory for Smart Discrete Manufacturing, Guangzhou 510006, China)

  • Shengli Xie

    (Key Laboratory of Intelligent Detection and The Internet of Things in Manufacturing, Guangzhou 510006, China
    Guangdong Key Laboratory of IoT Information Technology, Guangzhou 510006, China)

Abstract

In recent years, model compression techniques have proven highly effective for reducing the size of deep neural networks. However, many existing model compression methods rely heavily on human experience to search for a compression strategy that balances network structure, speed, and accuracy, which is usually suboptimal and time-consuming. In this paper, we propose a framework for automatically compressing models through actor–critic structured deep reinforcement learning (DRL), which interacts with each layer of the neural network: the actor network determines the compression strategy, and the critic network ensures the decision accuracy of the actor network through predicted values, thus improving the compression quality of the network. To enhance the prediction performance of the critic network, we impose the L1-norm regularizer on the weights of the critic network to obtain distinct activation output features in the representation, thereby enhancing its prediction accuracy. Moreover, to improve the decision performance of the actor network, we impose the L1-norm regularizer on the weights of the actor network, removing redundant weights and thus improving its decision accuracy. Furthermore, to improve training efficiency, we use the proximal gradient method to optimize the weights of the actor network and the critic network, which yields an effective weight solution and thus improves compression performance. In experiments on the MNIST dataset, the proposed method loses only 0.2% accuracy while compressing more than 70% of the neurons. Similarly, on the CIFAR-10 dataset, the proposed method compresses more than 60% of the neurons with only a 7.1% accuracy loss, which is superior to other existing methods. In terms of efficiency, the proposed method also requires the least time among the compared methods.
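To make the optimization step described above concrete, the following is a minimal sketch (not the authors' implementation) of a proximal gradient update on L1-regularized weights, where the proximal operator of the L1 norm reduces to element-wise soft-thresholding; the function names, learning rate, and regularization strength here are illustrative assumptions.

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of the L1 norm: shrink each weight toward zero by tau."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def proximal_gradient_step(w, grad, lr=1e-3, l1_lambda=1e-4):
    """One proximal gradient update: a gradient step on the smooth loss,
    followed by soft-thresholding to account for the L1 penalty (yields sparse weights)."""
    w = w - lr * grad                          # gradient step on the smooth part of the loss
    return soft_threshold(w, lr * l1_lambda)   # proximal step for the L1 regularizer

# Toy usage: sparsify a random weight matrix against a simple quadratic loss.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(4, 4))
    for _ in range(200):
        grad = w  # gradient of 0.5 * ||w||^2, a stand-in for the true actor/critic loss gradient
        w = proximal_gradient_step(w, grad, lr=0.1, l1_lambda=0.5)
    print("fraction of zero weights:", np.mean(w == 0.0))
```

In the paper's setting, such an update would be applied to the weights of the actor and critic networks in place of a plain gradient step, so that redundant weights are driven exactly to zero during training.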

Suggested Citation

  • Mingyi Wang & Jianhao Tang & Haoli Zhao & Zhenni Li & Shengli Xie, 2023. "Automatic Compression of Neural Network with Deep Reinforcement Learning Based on Proximal Gradient Method," Mathematics, MDPI, vol. 11(2), pages 1-19, January.
  • Handle: RePEc:gam:jmathe:v:11:y:2023:i:2:p:338-:d:1029391

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/11/2/338/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/11/2/338/
    Download Restriction: no

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jmathe:v:11:y:2023:i:2:p:338-:d:1029391. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.