Printed from https://ideas.repec.org/a/gam/jmathe/v11y2023i18p3848-d1235675.html

3D-ShuffleViT: An Efficient Video Action Recognition Network with Deep Integration of Self-Attention and Convolution

Author

Listed:
  • Yinghui Wang

    (School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China)

  • Anlei Zhu

    (School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China)

  • Haomiao Ma

    (School of Computer Science, Shaanxi Normal University, Xi’an 710119, China)

  • Lingyu Ai

    (School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, China)

  • Wei Song

    (School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China)

  • Shaojie Zhang

    (School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China)

Abstract

Compared with traditional methods, action recognition models based on 3D convolutional neural networks capture spatio-temporal features more effectively and therefore achieve higher accuracy. However, the large parameter counts and computational requirements of 3D models make them difficult to deploy on mobile devices with limited computing power. To obtain an efficient video action recognition model, we analyze and compare the design principles of classic lightweight networks and propose the 3D-ShuffleViT network. By deeply integrating the self-attention mechanism with convolution, we introduce an efficient ACISA module that further enhances the performance of the proposed model. The result is strong performance on both context-sensitive and context-independent action recognition at reduced deployment cost. Notably, with a computational cost of only 6% of that of SlowFast-ResNet101, 3D-ShuffleViT achieves 98% of its Top-1 accuracy on the EgoGesture dataset and runs 2.5 times faster on the same CPU (Intel i5-8300H). In addition, when deployed on edge devices, the proposed network achieves the best balance between accuracy and speed among lightweight networks of the same order.
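The abstract names the two ingredients of the design, a ShuffleNet-style lightweight backbone and an ACISA module that integrates self-attention with convolution, but does not spell out their internal structure. The PyTorch sketch below is therefore only an illustration of those two ideas under stated assumptions (a depthwise 3D convolution for the local branch, standard multi-head self-attention over flattened spatio-temporal tokens for the global branch, and a channel shuffle on the fused output); it is not the authors' implementation.

    # Illustrative sketch only: the excerpt does not give the internals of the
    # ACISA module, so the structure below (ShuffleNet-style channel shuffle plus
    # a block combining depthwise 3D convolution with self-attention) is an
    # assumption based on the abstract's wording, not the published architecture.
    import torch
    import torch.nn as nn

    def channel_shuffle_3d(x: torch.Tensor, groups: int) -> torch.Tensor:
        """ShuffleNet-style channel shuffle for video tensors shaped (N, C, T, H, W)."""
        n, c, t, h, w = x.shape
        x = x.view(n, groups, c // groups, t, h, w)  # split channels into groups
        x = x.transpose(1, 2).contiguous()           # interleave the groups
        return x.view(n, c, t, h, w)

    class ConvAttention3D(nn.Module):
        """Hypothetical block fusing local 3D convolution with global self-attention."""

        def __init__(self, channels: int, heads: int = 4):
            super().__init__()
            self.conv = nn.Conv3d(channels, channels, kernel_size=3,
                                  padding=1, groups=channels)  # depthwise: cheap local features
            self.norm = nn.LayerNorm(channels)
            self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            n, c, t, h, w = x.shape
            local = self.conv(x)                               # local spatio-temporal features
            tokens = self.norm(x.flatten(2).transpose(1, 2))   # (N, T*H*W, C) token sequence
            context, _ = self.attn(tokens, tokens, tokens)     # global context via self-attention
            context = context.transpose(1, 2).reshape(n, c, t, h, w)
            return channel_shuffle_3d(local + context, groups=4)

    if __name__ == "__main__":
        clip = torch.randn(2, 32, 4, 14, 14)    # 2 clips: 32 channels, 4 frames, 14x14 feature maps
        print(ConvAttention3D(32)(clip).shape)  # torch.Size([2, 32, 4, 14, 14])

Running the example prints torch.Size([2, 32, 4, 14, 14]). In practice the attention branch would be applied to downsampled feature maps, since full self-attention over all T x H x W positions is expensive; the channel shuffle is the standard lightweight-network trick for mixing information across grouped channels at negligible cost.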

Suggested Citation

  • Yinghui Wang & Anlei Zhu & Haomiao Ma & Lingyu Ai & Wei Song & Shaojie Zhang, 2023. "3D-ShuffleViT: An Efficient Video Action Recognition Network with Deep Integration of Self-Attention and Convolution," Mathematics, MDPI, vol. 11(18), pages 1-18, September.
  • Handle: RePEc:gam:jmathe:v:11:y:2023:i:18:p:3848-:d:1235675
    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/11/18/3848/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/11/18/3848/
    Download Restriction: no
