
Adding Visual Information to Improve Multimodal Machine Translation for Low-Resource Language

Author

Listed:
  • Xiayang Shi
  • Zhenqiang Yu
  • Muhammad Haroon Yousaf

Abstract

Machine translation makes it easy for people to communicate across languages. Multimodal machine translation is an important research direction within machine translation: it uses feature information such as images and audio to help translation models produce higher-quality output in the target language. However, the vast majority of current research has been conducted on corpora for widely used languages such as English, French, and German; far less work has addressed low-resource languages, leaving their translation quality relatively behind. This paper selects English-Hindi and English-Hausa corpora and studies low-resource language translation. We use different models to extract image feature information, fuse the image features with the text representation during the text encoding step of translation, and thereby provide the translation model with additional information to assist translation. Compared with text-only machine translation, the experimental results show that our method improves BLEU by 3 points on the English-Hindi dataset and by 0.47 points on the English-Hausa dataset. In addition, we analyze how image feature information extracted by different feature extraction models affects the translation results. Different models attend to different regions of the image, and the ResNet model extracts more feature information than the VGG model, which makes it more effective for translation.
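The fusion the abstract describes (CNN image features injected into the text encoder) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' exact architecture: the class name ImageTextFusionEncoder, the choice to prepend a single pooled image vector as one extra token, and all dimensions are assumptions; the only ingredients confirmed by the abstract are ResNet/VGG feature extraction and fusion with the text during encoding.

# Minimal sketch (assumptions noted above): pooled features from a pretrained
# ResNet-50 are projected to the model width and prepended to the token
# embeddings as an extra "image token" before a Transformer text encoder.
# Swapping in torchvision's vgg16 backbone would reproduce the ResNet-vs-VGG
# comparison the abstract mentions.
import torch
import torch.nn as nn
from torchvision import models

class ImageTextFusionEncoder(nn.Module):  # hypothetical name
    def __init__(self, vocab_size, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        cnn = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.backbone = nn.Sequential(*list(cnn.children())[:-1])  # drop classifier head
        for p in self.backbone.parameters():
            p.requires_grad = False  # keep the feature extractor frozen
        self.img_proj = nn.Linear(2048, d_model)  # 2048 = ResNet-50 pooled feature size
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, tokens, images):
        # tokens: (batch, seq_len) token ids; images: (batch, 3, 224, 224)
        img = self.img_proj(self.backbone(images).flatten(1)).unsqueeze(1)
        fused = torch.cat([img, self.embed(tokens)], dim=1)  # image token first
        return self.encoder(fused)  # (batch, 1 + seq_len, d_model)

A translation decoder would then cross-attend over this fused memory to generate the target sentence; for example, ImageTextFusionEncoder(vocab_size=32000)(torch.randint(0, 32000, (2, 20)), torch.randn(2, 3, 224, 224)) yields a (2, 21, 512) encoder output.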

Suggested Citation

  • Xiayang Shi & Zhenqiang Yu & Muhammad Haroon Yousaf, 2022. "Adding Visual Information to Improve Multimodal Machine Translation for Low-Resource Language," Mathematical Problems in Engineering, Hindawi, vol. 2022, pages 1-9, August.
  • Handle: RePEc:hin:jnlmpe:5483535
    DOI: 10.1155/2022/5483535

    Download full text from publisher

    File URL: http://downloads.hindawi.com/journals/mpe/2022/5483535.pdf
    Download Restriction: no

    File URL: http://downloads.hindawi.com/journals/mpe/2022/5483535.xml
    Download Restriction: no

    File URL: https://libkey.io/10.1155/2022/5483535?utm_source=ideas
    LibKey link: if access is restricted and your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item

    More about this item


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:hin:jnlmpe:5483535. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Mohamed Abdelhakeem (email available below). General contact details of provider: https://www.hindawi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.