Printed from https://ideas.repec.org/a/bla/jinfst/v72y2021i1p46-65.html

Cross‐modal retrieval with dual multi‐angle self‐attention

Author

Listed:
  • Wenjie Li
  • Yi Zheng
  • Yuejie Zhang
  • Rui Feng
  • Tao Zhang
  • Weiguo Fan

Abstract

In recent years, cross‐modal retrieval has been a popular research topic in both computer vision and natural language processing. Because different modalities have heterogeneous properties, a large semantic gap separates them, and establishing correlations between data from different modalities remains a major challenge. In this work, we propose a novel end‐to‐end framework named Dual Multi‐Angle Self‐Attention (DMASA) for cross‐modal retrieval. Multiple self‐attention mechanisms extract fine‐grained features for both images and texts from different angles. We then integrate coarse‐grained and fine‐grained features into a multimodal embedding space, in which the similarity between images and texts can be compared directly. Moreover, we propose a special multistage training strategy, in which each stage provides a good initialization for the next, improving the framework's performance. Very promising experimental results over state‐of‐the‐art methods are achieved on three benchmark datasets: Flickr8k, Flickr30k, and MSCOCO.
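The retrieval pipeline the abstract describes — attention-refined features pooled into a shared embedding space where image–text similarity becomes a direct vector comparison — can be sketched minimally as follows. This is an illustrative reconstruction, not the authors' DMASA implementation: the function names, feature dimensions, and mean-pooling step are all assumptions, and the real framework uses multiple learned attention mechanisms rather than this single unparameterized one.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # Scaled dot-product self-attention with queries, keys, and values
    # all taken directly from X (no learned projections, for brevity).
    # X: (n_tokens, d) -> attended features of the same shape.
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ X

def embed(features):
    # Refine token/region features with self-attention, pool them into a
    # single vector, and L2-normalize so dot product = cosine similarity.
    attended = self_attention(features)
    v = attended.mean(axis=0)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
image_regions = rng.normal(size=(36, 64))  # stand-in for image region features
text_tokens = rng.normal(size=(12, 64))    # stand-in for word features

img_vec = embed(image_regions)
txt_vec = embed(text_tokens)
similarity = float(img_vec @ txt_vec)  # cosine similarity in the joint space
```

Because both modalities land in the same normalized space, ranking candidate texts for a query image (or vice versa) reduces to sorting by this dot product.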

Suggested Citation

  • Wenjie Li & Yi Zheng & Yuejie Zhang & Rui Feng & Tao Zhang & Weiguo Fan, 2021. "Cross‐modal retrieval with dual multi‐angle self‐attention," Journal of the Association for Information Science & Technology, Association for Information Science & Technology, vol. 72(1), pages 46-65, January.
  • Handle: RePEc:bla:jinfst:v:72:y:2021:i:1:p:46-65
    DOI: 10.1002/asi.24373

    Download full text from publisher

    File URL: https://doi.org/10.1002/asi.24373
    Download Restriction: no





      IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.