IDEAS home Printed from https://ideas.repec.org/a/igg/jaci00/v13y2022i1p1-24.html

Continuous Attention Mechanism Embedded (CAME) Bi-Directional Long Short-Term Memory Model for Fake News Detection

Author

Listed:
  • Anshika Choudhary

    (Jaypee Institute of Information Technology, Noida, India)

  • Anuja Arora

    (Jaypee Institute of Information Technology, Noida, India)

Abstract

Credibility analysis of news on social media is a pressing need, because inauthentic news spreads unnecessary restlessness and reluctance in the community. Numerous individuals and social media marketing entities circulate inauthentic news through online social media. Hence, tracing these activities on social media and clearly identifying deceptive content is a challenging task. This work proposes a continuous attention-driven, memory-based deep learning model to predict the credibility of an article. To demonstrate the importance of continuous attention, the research is presented in an incremental fashion. First, a long short-term memory (LSTM)-based deep learning model is applied; this is then extended by incorporating a bidirectional LSTM for fake news identification. Finally, the work proposes a continuous attention mechanism embedded (CAME) bidirectional LSTM model for predicting the nature of news. Results show that the proposed CAME model outperforms both the LSTM and the bidirectional LSTM models.
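The abstract describes an incremental architecture: an LSTM, extended to a bidirectional LSTM, with an attention mechanism embedded over the BiLSTM outputs. The authors' implementation is not included here, but the general pattern can be sketched as follows in PyTorch; all layer sizes, names, and the soft-attention formulation are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' code) of an attention-embedded
# bidirectional LSTM classifier for binary fake/real news prediction.
# Vocabulary size, embedding and hidden dimensions are assumed values.
import torch
import torch.nn as nn

class AttentionBiLSTM(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=100, hidden_dim=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        # One attention score per time step over the BiLSTM outputs
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.classifier = nn.Linear(2 * hidden_dim, 1)  # fake vs. real

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        h, _ = self.bilstm(self.embedding(token_ids))   # (B, T, 2H)
        weights = torch.softmax(self.attn(h), dim=1)    # (B, T, 1)
        context = (weights * h).sum(dim=1)              # weighted sum: (B, 2H)
        return torch.sigmoid(self.classifier(context)).squeeze(-1)

model = AttentionBiLSTM()
batch = torch.randint(0, 10000, (4, 20))  # 4 articles, 20 tokens each
probs = model(batch)                      # per-article credibility score in [0, 1]
```

The attention weights give each token position a learned importance, so the article representation is a weighted sum of BiLSTM states rather than only the final hidden state; this is the general benefit the CAME model attributes to continuous attention.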

Suggested Citation

  • Anshika Choudhary & Anuja Arora, 2022. "Continuous Attention Mechanism Embedded (CAME) Bi-Directional Long Short-Term Memory Model for Fake News Detection," International Journal of Ambient Computing and Intelligence (IJACI), IGI Global, vol. 13(1), pages 1-24, January.
  • Handle: RePEc:igg:jaci00:v:13:y:2022:i:1:p:1-24

    Download full text from publisher

    File URL: http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/IJACI.309407
    Download Restriction: no


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:igg:jaci00:v:13:y:2022:i:1:p:1-24. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Journal Editor (email available below). General contact details of provider: https://www.igi-global.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.