Printed from https://ideas.repec.org/a/igg/jmdem0/v7y2016i1p60-76.html

Audiovisual Facial Action Unit Recognition using Feature Level Fusion

Author

Listed:
  • Zibo Meng

    (University of South Carolina, Columbia, SC, USA)

  • Shizhong Han

    (University of South Carolina, Columbia, SC, USA)

  • Min Chen

    (Computing and Software Systems, School of STEM, University of Washington Bothell, Bothell, WA, USA)

  • Yan Tong

    (University of South Carolina, Columbia, SC, USA)

Abstract

Recognizing facial actions is challenging, especially when they are accompanied by speech. Instead of relying on information from the visual channel alone, this work exploits both the visual and audio channels to recognize speech-related facial action units (AUs). Two feature-level fusion methods are proposed. The first is based on hand-crafted visual features; the second uses visual features learned by a deep convolutional neural network (CNN). In both methods, features are extracted independently from the visual and audio channels and then temporally aligned to handle the difference in time scales and the time shift between the two signals. The aligned features are integrated via feature-level fusion for AU recognition. Experimental results on a new audiovisual AU-coded dataset demonstrate that both fusion methods outperform their visual-only counterparts in recognizing speech-related AUs. The improvement is more pronounced when the facial images are occluded, since occlusion does not affect the audio channel.
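The alignment-then-concatenation scheme described in the abstract can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: audio features (typically extracted at a higher frame rate than video) are resampled onto the visual frame timeline by linear interpolation, then concatenated per frame to form the fused feature vector. All names, dimensions, and frame rates are assumptions for the example.

```python
import numpy as np

def align_and_fuse(visual_feats, audio_feats, visual_times, audio_times):
    """Feature-level fusion sketch (not the paper's exact method).

    visual_feats: (T_v, D_v) array of per-frame visual features.
    audio_feats:  (T_a, D_a) array of per-frame audio features.
    visual_times / audio_times: timestamps (seconds) for each row;
    a time shift between channels can be handled by offsetting these.
    """
    # Resample each audio feature dimension at the visual timestamps,
    # handling the difference in time scales between the two signals.
    aligned_audio = np.stack(
        [np.interp(visual_times, audio_times, audio_feats[:, d])
         for d in range(audio_feats.shape[1])],
        axis=1,
    )
    # Feature-level fusion: per-frame concatenation of both modalities.
    return np.concatenate([visual_feats, aligned_audio], axis=1)

# Example: 10 visual frames at ~30 fps, 40 audio frames at ~100 fps.
vis = np.random.rand(10, 5)
aud = np.random.rand(40, 3)
fused = align_and_fuse(
    vis, aud,
    visual_times=np.arange(10) / 30.0,
    audio_times=np.arange(40) / 100.0,
)
print(fused.shape)  # (10, 8)
```

The fused (10, 8) vectors would then be fed to a per-AU classifier; the paper reports that this early fusion outperforms visual-only features, especially under facial occlusion.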

Suggested Citation

  • Zibo Meng & Shizhong Han & Min Chen & Yan Tong, 2016. "Audiovisual Facial Action Unit Recognition using Feature Level Fusion," International Journal of Multimedia Data Engineering and Management (IJMDEM), IGI Global, vol. 7(1), pages 60-76, January.
  • Handle: RePEc:igg:jmdem0:v:7:y:2016:i:1:p:60-76

    Download full text from publisher

    File URL: http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/IJMDEM.2016010104
    Download Restriction: no


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:igg:jmdem0:v:7:y:2016:i:1:p:60-76. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. Doing so links your profile to this item and lets you accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help add them by using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Journal Editor (email available below). General contact details of provider: https://www.igi-global.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.