
Bilinear Learning with Dual-Chain Feature Attention for Multimodal Rumor Detection

Author

Listed:
  • Zheheng Guo

    (School of Earth Resources, China University of Geosciences (Wuhan), Wuhan 430074, China
    These authors contributed equally to this work.)

  • Haonan Liu

    (School of Electronic Information and Communication, Huazhong University of Science and Technology, Wuhan 430074, China
    These authors contributed equally to this work.)

  • Lijiao Zuo

(School of Big Data and Software Engineering, Chongqing University, Chongqing 400044, China)

  • Junhao Wen

(School of Big Data and Software Engineering, Chongqing University, Chongqing 400044, China)

Abstract

The rapid growth of social media and online information-sharing platforms facilitates the spread of rumors, and accurate rumor detection that minimizes manual verification effort remains a critical research challenge. While multimodal rumor detection leveraging both text and visual data has gained increasing attention due to the diversification of social media content, existing approaches face three key limitations: (1) they prioritize lexical features of text while neglecting inherent logical inconsistencies in rumor narratives; (2) they treat textual and visual features as independent modalities, failing to model their intrinsic connections; and (3) they overlook semantic incongruities between text and images, which are common in rumor content. To address these issues, this paper proposes a dual-chain multimodal feature learning framework for rumor detection. The framework extracts rumor content features through two parallel processes: a basic semantic feature extraction module that captures fundamental textual and visual semantics, and a logical connection feature learning module that models both the internal logical relationships within text and the cross-modal semantic alignment between text and images. Multi-level fusion of text–image features is achieved by integrating modal alignment and cross-modal attention mechanisms. Extensive experiments on the Pheme and Weibo datasets show that the proposed method outperforms baseline approaches, confirming its effectiveness in detecting multimodal rumors.
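To make the mechanisms named in the abstract concrete, the following is a minimal sketch of how cross-modal attention and bilinear fusion can be combined into a text–image classifier. It is an illustration only: the module names, feature dimensions, pooling choices, and the use of PyTorch's MultiheadAttention and Bilinear layers are assumptions made for exposition, not the authors' implementation.

    # Illustrative sketch (not the paper's code): text tokens attend to image
    # regions and vice versa, and a bilinear layer models pairwise interactions
    # between the pooled modality vectors before classification.
    import torch
    import torch.nn as nn

    class CrossModalBilinearFusion(nn.Module):
        def __init__(self, dim: int = 256, num_heads: int = 4):
            super().__init__()
            self.txt2img = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.img2txt = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.bilinear = nn.Bilinear(dim, dim, dim)   # pairwise text-image interactions
            self.classifier = nn.Linear(dim, 2)          # rumor vs. non-rumor

        def forward(self, text_feats, img_feats):
            # text_feats: (batch, n_tokens, dim); img_feats: (batch, n_regions, dim)
            t_aligned, _ = self.txt2img(text_feats, img_feats, img_feats)   # text queries image
            i_aligned, _ = self.img2txt(img_feats, text_feats, text_feats)  # image queries text
            t_vec = t_aligned.mean(dim=1)   # pool tokens to one vector per sample
            i_vec = i_aligned.mean(dim=1)
            fused = torch.tanh(self.bilinear(t_vec, i_vec))
            return self.classifier(fused)

    # Toy usage with random features standing in for encoder outputs
    # (e.g., BERT token embeddings and CNN region features, both projected to dim=256).
    model = CrossModalBilinearFusion()
    logits = model(torch.randn(8, 32, 256), torch.randn(8, 49, 256))
    print(logits.shape)  # torch.Size([8, 2])

The bilinear layer is one common way to capture multiplicative text–image interactions; the paper's dual-chain design additionally learns logical-connection features within the text, which this sketch omits.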

Suggested Citation

  • Zheheng Guo & Haonan Liu & Lijiao Zuo & Junhao Wen, 2025. "Bilinear Learning with Dual-Chain Feature Attention for Multimodal Rumor Detection," Mathematics, MDPI, vol. 13(11), pages 1-23, May.
  • Handle: RePEc:gam:jmathe:v:13:y:2025:i:11:p:1731-:d:1663472

    Download full text from publisher

    File URL: https://www.mdpi.com/2227-7390/13/11/1731/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2227-7390/13/11/1731/
    Download Restriction: no
