
Learning Modality Consistency and Difference Information with Multitask Learning for Multimodal Sentiment Analysis

Authors

Listed:
  • Cheng Fang

    (Key Laboratory of Civil Aviation Thermal Hazards Prevention and Emergency Response, Civil Aviation University of China, Tianjin 300300, China
    College of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China)

  • Feifei Liang

    (China FAW (Nanjing) Technology Development Co., Ltd., Nanjing 211100, China)

  • Tianchi Li

    (College of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China)

  • Fangheng Guan

    (College of Electronic Information and Automation, Civil Aviation University of China, Tianjin 300300, China)

Abstract

The primary challenge in multimodal sentiment analysis (MSA) lies in developing robust joint representations that can effectively learn mutual information from diverse modalities. Previous research in this field has tended to rely on feature concatenation to obtain joint representations. However, such approaches fail to fully exploit interactive patterns that ensure consistency and differentiation across modalities. To address this limitation, we propose a novel framework for multimodal sentiment analysis, named CDML (Consistency and Difference using a Multitask Learning network). Specifically, CDML uses an attention mechanism to assign attention weights to each modality efficiently, adversarial training to obtain consistent information across modalities, and a multitask learning framework to capture the differences among modalities. Experiments on two benchmark MSA datasets, CMU-MOSI and CMU-MOSEI, show that the proposed method outperforms seven existing approaches by at least 1.3% on Acc-2 and 1.7% on F1.
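The attention-based fusion step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the scoring vector `w_attn`, the function names, and the toy dimensions are all assumptions, and the adversarial and multitask components of CDML are omitted.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse_modalities(feats, w_attn):
    """Attention-weighted fusion of per-modality feature vectors.

    feats:  dict mapping modality name -> (d,) feature vector
    w_attn: (d,) scoring vector (hypothetical; CDML learns such weights)
    Returns the joint representation and the per-modality attention weights.
    """
    names = sorted(feats)
    scores = np.array([feats[n] @ w_attn for n in names])
    alpha = softmax(scores)  # one weight per modality, summing to 1
    joint = sum(a * feats[n] for a, n in zip(alpha, names))
    return joint, dict(zip(names, alpha))

# Toy example: text, audio, and vision features of dimension 4.
rng = np.random.default_rng(0)
feats = {m: rng.normal(size=4) for m in ("text", "audio", "vision")}
joint, alpha = fuse_modalities(feats, rng.normal(size=4))
```

In contrast to plain feature concatenation, the joint representation here is a convex combination of the modality features, so a modality judged less informative by the attention scores contributes proportionally less.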

Suggested Citation

  • Cheng Fang & Feifei Liang & Tianchi Li & Fangheng Guan, 2024. "Learning Modality Consistency and Difference Information with Multitask Learning for Multimodal Sentiment Analysis," Future Internet, MDPI, vol. 16(6), pages 1-17, June.
  • Handle: RePEc:gam:jftint:v:16:y:2024:i:6:p:213-:d:1416436

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/16/6/213/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/16/6/213/
    Download Restriction: no
