
Combining Facial Expressions and Electroencephalography to Enhance Emotion Recognition

Author

Listed:
  • Yongrui Huang

    (School of Software, South China Normal University, Guangzhou 510641, China)

  • Jianhao Yang

    (School of Software, South China Normal University, Guangzhou 510641, China)

  • Siyu Liu

    (School of Software, South China Normal University, Guangzhou 510641, China)

  • Jiahui Pan

    (School of Software, South China Normal University, Guangzhou 510641, China)

Abstract

Emotion recognition plays an essential role in human–computer interaction. Previous studies have investigated the use of facial expressions and electroencephalogram (EEG) signals as single modalities for emotion recognition, but few have paid attention to fusing the two. In this paper, we adopted a multimodal emotion recognition framework that combines facial expressions and EEG, based on a valence–arousal emotion model. For facial expression detection, we followed a transfer learning approach, using multi-task convolutional neural network (CNN) architectures to detect the valence and arousal states. For EEG detection, the two learning targets (valence and arousal) were predicted by separate support vector machine (SVM) classifiers. Finally, two decision-level fusion methods, one based on an enumerated-weight rule and the other on an adaptive boosting technique, were used to combine the facial expression and EEG predictions. In the experiment, subjects were instructed to watch clips designed to elicit an emotional response and then reported their emotional state. We used two emotion datasets, the Database for Emotion Analysis using Physiological Signals (DEAP) and the MAHNOB human–computer interface database (MAHNOB-HCI), to evaluate our method. In addition, we performed an online experiment to test the robustness of our method. We experimentally demonstrated that our method produces state-of-the-art results for binary valence/arousal classification on the DEAP and MAHNOB-HCI datasets. Moreover, in the online experiment we achieved 69.75% accuracy in the valence space and 70.00% accuracy in the arousal space after fusion, each surpassing the best-performing single modality (69.28% for valence and 64.00% for arousal). The results suggest that combining facial expressions and EEG information for emotion recognition compensates for the weaknesses of each as a single information source. The novelty of this work is threefold. First, we combined facial expressions and EEG to improve emotion recognition performance. Second, we used transfer learning techniques to tackle the lack of training data and achieve higher accuracy for facial expressions. Finally, in addition to the widely used fusion method based on enumerating different weights between the two models, we also explored a novel fusion method that applies a boosting technique.
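
The abstract describes the enumerated-weight fusion only at a high level. The following is a minimal sketch of decision-level fusion by weight enumeration, assuming (hypothetically) that each modality outputs a probability of the "high" class for a trial and that the fusion weight is selected on a validation set; the function and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def fuse_by_weight_enumeration(p_face, p_eeg, labels, step=0.01):
    """Decision-level fusion by enumerating the weight w in [0, 1].

    p_face, p_eeg : predicted probabilities of the 'high' class
                    (e.g. high valence) from the facial-expression
                    and EEG models, shape (n_trials,).
    labels        : binary ground-truth labels, shape (n_trials,).
    Returns the weight maximizing accuracy of the fused score
    w * p_face + (1 - w) * p_eeg, together with that accuracy.
    """
    best_w, best_acc = 0.0, -1.0
    for w in np.arange(0.0, 1.0 + step, step):
        fused = w * p_face + (1.0 - w) * p_eeg
        acc = float(np.mean((fused >= 0.5).astype(int) == labels))
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc

# Example with synthetic validation outputs:
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=40)
p_face = np.clip(labels * 0.6 + rng.normal(0.2, 0.2, 40), 0, 1)
p_eeg = np.clip(labels * 0.4 + rng.normal(0.3, 0.25, 40), 0, 1)
w, acc = fuse_by_weight_enumeration(p_face, p_eeg, labels)
print(f"best facial-model weight: {w:.2f}, fused accuracy: {acc:.2%}")
```

Once selected, the weight would be fixed and applied to new trials; the boosting-based fusion mentioned in the abstract would instead derive the combination from the classifiers' errors.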

Suggested Citation

  • Yongrui Huang & Jianhao Yang & Siyu Liu & Jiahui Pan, 2019. "Combining Facial Expressions and Electroencephalography to Enhance Emotion Recognition," Future Internet, MDPI, vol. 11(5), pages 1-17, May.
  • Handle: RePEc:gam:jftint:v:11:y:2019:i:5:p:105-:d:227823

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/11/5/105/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/11/5/105/
    Download Restriction: no
    ---><---

    Citations

    Citations are extracted by the CitEc Project.


    Cited by:

    1. Marlen Sofía Muñoz & Camilo Ernesto Sarmiento Torres & Ricardo Salazar-Cabrera & Diego M. López & Rubiel Vargas-Cañas, 2022. "Digital Transformation in Epilepsy Diagnosis Using Raw Images and Transfer Learning in Electroencephalograms," Sustainability, MDPI, vol. 14(18), pages 1-16, September.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jftint:v:11:y:2019:i:5:p:105-:d:227823. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows your profile to be linked to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.