
FTN-VQA: Multimodal Reasoning by Leveraging a Fully Transformer-Based Network for Visual Question Answering

Author

Listed:
  • RUNMIN WANG

    (Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China)

  • WEIXIANG XU

    (Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China)

  • YANBIN ZHU

    (Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China)

  • ZHENLIN ZHU

    (Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China)

  • HUA CHEN

    (Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China)

  • YAJUN DING

    (Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China)

  • JINPING LIU

    (Institute of Information Science and Engineering, Hunan Normal University, Changsha 410081, P. R. China)

  • CHANGXIN GAO

    (School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, P. R. China)

  • NONG SANG

    (School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, P. R. China)

Abstract

Visual Question Answering (VQA) is a multimodal task that requires understanding the information in natural-language questions and attending to the relevant information in images. Existing VQA approaches can be divided into grid-based methods and bottom-up methods. Grid-based methods extract semantic image features directly with a convolutional neural network (CNN), so they are computationally efficient, but their global convolutional features ignore key regions and create a performance bottleneck. Bottom-up methods detect question-relevant objects with an object detection framework such as Faster R-CNN, so they achieve better accuracy, but the Region Proposal Network (RPN) and Non-Maximum Suppression (NMS) reduce their computational efficiency. For these reasons, we propose a fully transformer-based network (FTN) that balances computational efficiency and accuracy, can be trained end-to-end, and consists of three components: a question module, an image module, and a fusion module. We also visualize the question module and the image module to explore how the transformer operates. The experimental results demonstrate that FTN focuses on key information and objects in the question module and the image module, and our single model reaches 69.01% accuracy on the VQA 2.0 dataset, which is superior to the grid-based methods. Although FTN does not surpass a few state-of-the-art bottom-up methods, it has a clear advantage in computational efficiency. The code will be released at https://github.com/weixiang-xu/FTN-VQA.git.
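To make the pipeline described above concrete, the following is a minimal PyTorch sketch of a three-module, fully transformer-based VQA model: a question module, an image module, and a fusion module, followed by an answer classifier. All class names, layer sizes, the ViT-style patch embedding, and the answer-vocabulary size are illustrative assumptions and do not reproduce the authors' implementation; see the repository linked in the abstract for the actual code.

```python
# Minimal sketch of a fully transformer-based VQA model (question module,
# image module, fusion module). Names, dimensions, and the patch-embedding
# choice are illustrative assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn


def encoder(dim: int, heads: int, layers: int) -> nn.TransformerEncoder:
    layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=layers)


class FTNSketch(nn.Module):
    """Question module + image module + fusion module, ending in an answer classifier."""

    def __init__(self, vocab_size=10000, num_answers=3129, dim=512,
                 patch=16, img_size=224, heads=8, layers=4):
        super().__init__()
        # Question module: token embeddings refined by a transformer encoder.
        self.tok_emb = nn.Embedding(vocab_size, dim)
        self.q_encoder = encoder(dim, heads, layers)
        # Image module: ViT-style patch embedding plus a transformer encoder
        # (no region proposals or NMS, unlike bottom-up detectors).
        self.patch_emb = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.v_encoder = encoder(dim, heads, layers)
        # Fusion module: joint self-attention over the concatenated token streams.
        self.fusion = encoder(dim, heads, layers)
        self.classifier = nn.Linear(dim, num_answers)

    def forward(self, question_ids, image):
        q = self.q_encoder(self.tok_emb(question_ids))        # (B, Lq, dim)
        v = self.patch_emb(image).flatten(2).transpose(1, 2)  # (B, Np, dim)
        v = self.v_encoder(v)
        joint = self.fusion(torch.cat([q, v], dim=1))         # (B, Lq+Np, dim)
        return self.classifier(joint.mean(dim=1))             # answer logits


# Toy usage: a batch of 2 questions (length 14) and 224x224 RGB images.
model = FTNSketch()
logits = model(torch.randint(0, 10000, (2, 14)), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 3129])
```

Because the image module in this sketch operates on fixed patch embeddings rather than detected regions, it needs no Region Proposal Network or Non-Maximum Suppression, which is where the computational-efficiency advantage over bottom-up methods comes from.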

Suggested Citation

  • Runmin Wang & Weixiang Xu & Yanbin Zhu & Zhenlin Zhu & Hua Chen & Yajun Ding & Jinping Liu & Changxin Gao & Nong Sang, 2023. "FTN-VQA: Multimodal Reasoning by Leveraging a Fully Transformer-Based Network for Visual Question Answering," FRACTALS (fractals), World Scientific Publishing Co. Pte. Ltd., vol. 31(06), pages 1-17.
  • Handle: RePEc:wsi:fracta:v:31:y:2023:i:06:n:s0218348x23401333
    DOI: 10.1142/S0218348X23401333

    Download full text from publisher

    File URL: http://www.worldscientific.com/doi/abs/10.1142/S0218348X23401333
    Download Restriction: Access to full text is restricted to subscribers

    File URL: https://libkey.io/10.1142/S0218348X23401333?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    As access to this document is restricted, you may want to search for a different version of it.

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:wsi:fracta:v:31:y:2023:i:06:n:s0218348x23401333. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Tai Tone Lim (email available below). General contact details of provider: https://www.worldscientific.com/worldscinet/fractals .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.