Printed from https://ideas.repec.org/a/hin/complx/9428612.html

Deep Ensemble Learning for Human Action Recognition in Still Images

Author

Listed:
  • Xiangchun Yu
  • Zhe Zhang
  • Lei Wu
  • Wei Pang
  • Hechang Chen
  • Zhezhou Yu
  • Bin Li

Abstract

Numerous human actions, such as “Phoning,” “PlayingGuitar,” and “RidingHorse,” can be inferred by static cue-based approaches even without the motion available in video, because a single still image may already sufficiently explain a particular action. In this research, we investigate human action recognition in still images and utilize deep ensemble learning to automatically decompose the body pose and perceive its background information. First, we construct an end-to-end NCNN-based model by attaching a nonsequential convolutional neural network (NCNN) module to the top of a pretrained model. The nonsequential network topology of the NCNN learns spatial- and channel-wise features separately with parallel branches, which helps improve model performance. Subsequently, to further exploit the advantage of the nonsequential topology, we propose an end-to-end deep ensemble learning based on weight optimization (DELWO) model, which fuses the deep information derived from multiple models, with the fusion weights learned automatically from the data. Finally, we design a deep ensemble learning based on voting strategy (DELVS) model that pools multiple deep models with weighted coefficients to obtain a better prediction. More importantly, model complexity can be reduced by lessening the number of trainable parameters, thereby mitigating overfitting on small datasets to some extent. We conduct experiments on Li’s action dataset and on the uncropped and 1.5x cropped Willow action datasets, and the results validate the effectiveness and robustness of our proposed models in terms of mitigating overfitting on small datasets. Our code is open-sourced on GitHub (https://github.com/yxchspring/deep_ensemble_learning) to share the models with the community.
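
The abstract describes three components: an NCNN head with parallel spatial- and channel-wise branches attached to a pretrained backbone, a weight-optimization ensemble (DELWO) whose fusion weights are learned from the data, and a weighted-voting ensemble (DELVS). The following Keras-style Python is a minimal sketch of those ideas only; the backbone choice (VGG16), the branch layers, the class count, and the helper names build_ncnn_model, build_delwo_model, and delvs_predict are illustrative assumptions, not the authors' released implementation (see the GitHub repository above for that).

    # Illustrative sketch only: an NCNN-like head with parallel spatial- and
    # channel-wise branches on a frozen pretrained backbone, a DELWO-like
    # fusion layer whose weights are learned from data, and DELVS-like
    # weighted soft voting. Layer choices and hyperparameters are assumptions.
    import numpy as np
    from tensorflow.keras import layers, Model, Input
    from tensorflow.keras.applications import VGG16

    NUM_CLASSES = 7  # e.g., the Willow action dataset has 7 action classes

    def build_ncnn_model(input_shape=(224, 224, 3)):
        backbone = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
        backbone.trainable = False                # keep the pretrained weights fixed
        x = backbone.output                       # (H, W, C) feature map

        # Spatial-wise branch: global average pooling summarizes each channel over space
        spatial = layers.GlobalAveragePooling2D()(x)

        # Channel-wise branch: 1x1 convolution mixes channels before pooling
        channel = layers.Conv2D(256, 1, activation="relu")(x)
        channel = layers.GlobalMaxPooling2D()(channel)

        merged = layers.Concatenate()([spatial, channel])
        out = layers.Dense(NUM_CLASSES, activation="softmax")(merged)
        return Model(backbone.input, out)

    def build_delwo_model(members):
        """DELWO-like fusion: combine member outputs with weights learned from the data."""
        inp = Input(shape=(224, 224, 3))
        preds = [m(inp) for m in members]         # each member outputs softmax scores
        stacked = layers.Concatenate()(preds)
        fused = layers.Dense(NUM_CLASSES, activation="softmax")(stacked)
        return Model(inp, fused)

    def delvs_predict(members, x, weights):
        """DELVS-like weighted soft voting over member predictions."""
        probs = np.stack([m.predict(x) for m in members])  # (n_members, n_samples, n_classes)
        w = np.asarray(weights, dtype=float)
        fused = np.tensordot(w / w.sum(), probs, axes=1)   # weighted average of class scores
        return fused.argmax(axis=1)

    # Example usage: three NCNN members fused by learned weights or by voting
    # members = [build_ncnn_model() for _ in range(3)]
    # delwo = build_delwo_model(members)
    # labels = delvs_predict(members, images, weights=[0.4, 0.3, 0.3])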

Suggested Citation

  • Xiangchun Yu & Zhe Zhang & Lei Wu & Wei Pang & Hechang Chen & Zhezhou Yu & Bin Li, 2020. "Deep Ensemble Learning for Human Action Recognition in Still Images," Complexity, Hindawi, vol. 2020, pages 1-23, January.
  • Handle: RePEc:hin:complx:9428612
    DOI: 10.1155/2020/9428612

    Download full text from publisher

    File URL: http://downloads.hindawi.com/journals/8503/2020/9428612.pdf
    Download Restriction: no

    File URL: http://downloads.hindawi.com/journals/8503/2020/9428612.xml
    Download Restriction: no

    File URL: https://libkey.io/10.1155/2020/9428612?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    Citations

    Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Yuewen Yang & Dongyan Wang & Zhuoran Yan & Shuwen Zhang, 2021. "Delineating Urban Functional Zones Using U-Net Deep Learning: Case Study of Kuancheng District, Changchun, China," Land, MDPI, vol. 10(11), pages 1-21, November.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:hin:complx:9428612. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Mohamed Abdelhakeem (email available below). General contact details of provider: https://www.hindawi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.