
A Compact Parallel Pruning Scheme for Deep Learning Model and Its Mobile Instrument Deployment

Author

Listed:
  • Meng Li

    (School of Computer Science, Yangtze University, Jingzhou 434025, China)

  • Ming Zhao

    (School of Computer Science, Yangtze University, Jingzhou 434025, China)

  • Tie Luo

    (School of Computer Science, Yangtze University, Jingzhou 434025, China)

  • Yimin Yang

    (School of Computer Science, Yangtze University, Jingzhou 434025, China)

  • Sheng-Lung Peng

    (Department of Creative Technologies and Product Design, National Taipei University of Business, Taipei 10051, Taiwan)

Abstract

Single pruning algorithms compress a deep convolutional neural network with either channel pruning or filter pruning alone, which leaves many redundant parameters in the compressed model; pruning filters directly can also discard key information and degrade classification accuracy. To address these problems, a parallel pruning algorithm combined with image augmentation is proposed. First, a random erasing data augmentation method is introduced to improve the generalization ability of the model. Second, channels with small contributions are removed according to the trained batch normalization (BN) layer scaling factors, which initially slims the model; the filters are then pruned. Redundant filters are identified and removed by computing the geometric median of the filters and measuring their similarity as the distance between filters. Pruning is performed on VGG19 and DenseNet40 with the CIFAR-10 and CIFAR-100 datasets. The experimental results show that the algorithm improves model accuracy while compressing the model's computation and parameter count to a certain extent. Finally, the method is applied in practice: combined with transfer learning, traffic objects are classified and detected on a mobile phone.
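
The abstract describes two pruning criteria applied in sequence: channel selection by BN scaling factor and filter selection by distance to the geometric median. The following is a minimal sketch, not the authors' code, of how these two criteria could be combined into per-layer keep masks, assuming a PyTorch model in which each Conv2d is followed by a BatchNorm2d; the helper names (bn_channel_mask, geometric_median_mask, parallel_prune_masks), the prune_ratio value, and the per-layer filter-prune count are illustrative assumptions.

# Sketch of the two pruning criteria described above (not the authors' implementation).
# Assumes a PyTorch model whose Conv2d layers are each followed by a BatchNorm2d.

import torch
import torch.nn as nn


def bn_channel_mask(bn: nn.BatchNorm2d, prune_ratio: float) -> torch.Tensor:
    """Keep channels whose BN scaling factor |gamma| exceeds a quantile threshold."""
    gamma = bn.weight.detach().abs()
    threshold = torch.quantile(gamma, prune_ratio)
    return gamma > threshold                      # True = keep this channel


def geometric_median_mask(conv: nn.Conv2d, num_prune: int) -> torch.Tensor:
    """Mark the filters most similar to an approximate geometric median as redundant."""
    w = conv.weight.detach().flatten(1)           # (out_channels, in*kh*kw)
    dist = torch.cdist(w, w)                      # pairwise Euclidean distances
    median_idx = dist.sum(dim=1).argmin()         # filter closest to all others
    order = dist[median_idx].argsort()            # most similar filters first
    keep = torch.ones(w.size(0), dtype=torch.bool)
    keep[order[1:num_prune + 1]] = False          # drop the most similar filters
    return keep


def parallel_prune_masks(model: nn.Module, prune_ratio: float = 0.3,
                         filters_per_layer: int = 4) -> dict:
    """Combine the channel-level (BN gamma) and filter-level (geometric median) masks."""
    masks, pending_conv = {}, None
    for name, module in model.named_modules():
        if isinstance(module, nn.Conv2d):
            pending_conv = (name, module)
        elif isinstance(module, nn.BatchNorm2d) and pending_conv is not None:
            conv_name, conv = pending_conv
            masks[conv_name] = (bn_channel_mask(module, prune_ratio)
                                & geometric_median_mask(conv, filters_per_layer))
            pending_conv = None
    return masks

The resulting boolean masks could then be used to zero out or physically remove the corresponding channels before fine-tuning. The random-erasing augmentation mentioned in the abstract is available off the shelf, for example as torchvision.transforms.RandomErasing.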

Suggested Citation

  • Meng Li & Ming Zhao & Tie Luo & Yimin Yang & Sheng-Lung Peng, 2022. "A Compact Parallel Pruning Scheme for Deep Learning Model and Its Mobile Instrument Deployment," Mathematics, MDPI, vol. 10(12), pages 1-17, June.
  • Handle: RePEc:gam:jmathe:v:10:y:2022:i:12:p:2126-:d:842323

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/10/12/2126/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/10/12/2126/
    Download Restriction: no