
Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation

Author

Listed:
  • Wenbo Zhang

    (Xinjiang Technical Institute of Physics & Chemistry, Chinese Academy of Sciences, Urumqi 830011, China
    University of Chinese Academy of Sciences, Beijing 100049, China
    Xinjiang Laboratory of Minority Speech and Language Information Processing, Urumqi 830011, China)

  • Xiao Li

    (Xinjiang Technical Institute of Physics & Chemistry, Chinese Academy of Sciences, Urumqi 830011, China
    University of Chinese Academy of Sciences, Beijing 100049, China
    Xinjiang Laboratory of Minority Speech and Language Information Processing, Urumqi 830011, China)

  • Yating Yang

    (Xinjiang Technical Institute of Physics & Chemistry, Chinese Academy of Sciences, Urumqi 830011, China
    University of Chinese Academy of Sciences, Beijing 100049, China
    Xinjiang Laboratory of Minority Speech and Language Information Processing, Urumqi 830011, China)

  • Rui Dong

    (Xinjiang Technical Institute of Physics & Chemistry, Chinese Academy of Sciences, Urumqi 830011, China
    University of Chinese Academy of Sciences, Beijing 100049, China
    Xinjiang Laboratory of Minority Speech and Language Information Processing, Urumqi 830011, China)

  • Gongxu Luo

    (Xinjiang Technical Institute of Physics & Chemistry, Chinese Academy of Sciences, Urumqi 830011, China
    University of Chinese Academy of Sciences, Beijing 100049, China
    Xinjiang Laboratory of Minority Speech and Language Information Processing, Urumqi 830011, China)

Abstract

Recently, model pretraining has been successfully applied to unsupervised and semi-supervised neural machine translation. A cross-lingual language model uses a pretrained masked language model to initialize the encoder and decoder of the translation model, which greatly improves translation quality. However, because of a mismatch in the number of layers, the pretrained model can only initialize part of the decoder’s parameters. In this paper, we use a layer-wise coordination transformer and a consistent pretraining translation transformer instead of a vanilla transformer as the translation model. The former has only an encoder; the latter has an encoder and a decoder that share exactly the same parameters. Both models guarantee that every parameter of the translation model can be initialized by the pretrained model. Experiments on Chinese–English and English–German datasets show that, compared with the vanilla transformer baseline, our models achieve better performance with fewer parameters when the parallel corpus is small.
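
The shared-parameter design described in the abstract can be illustrated concretely. The sketch below is a minimal, hypothetical PyTorch illustration, not the authors' released code: the class name SharedLayerTransformer, its dimensions, and the simplified joint self-attention in decode() are assumptions made for demonstration. What it shows is the abstract's key point: when the encoder and decoder literally reuse one layer stack, a pretrained masked language model's weights cover every translation-model parameter, leaving no randomly initialized decoder parameters behind.

```python
# Minimal sketch (illustrative, not the paper's implementation) of a
# translation model whose encoder and decoder share one layer stack,
# so pretrained masked-LM weights can initialize ALL parameters.
import torch
import torch.nn as nn

class SharedLayerTransformer(nn.Module):
    """Encoder and decoder reuse the same stack of layers (tied weights)."""
    def __init__(self, vocab_size=32000, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # One shared stack, applied once for encoding and once for
        # decoding, loosely in the spirit of layer-wise coordination.
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_layers)
        )
        self.proj = nn.Linear(d_model, vocab_size)

    def encode(self, src_ids):
        h = self.embed(src_ids)
        for layer in self.layers:
            h = layer(h)
        return h

    def decode(self, tgt_ids, memory):
        # Simplified: self-attend jointly over [source states; target].
        # Causal masking of future target positions is omitted for brevity.
        h = torch.cat([memory, self.embed(tgt_ids)], dim=1)
        tgt_len = tgt_ids.size(1)
        for layer in self.layers:  # same parameters as encode()
            h = layer(h)
        return self.proj(h[:, -tgt_len:])

# Because encoder and decoder are the same ModuleList, loading
# pretrained masked-LM weights initializes every translation parameter.
model = SharedLayerTransformer()
pretrained_mlm_state = model.state_dict()  # stand-in for real MLM weights
missing, unexpected = model.load_state_dict(pretrained_mlm_state, strict=True)
assert not missing and not unexpected      # no parameter is left uncovered

src = torch.randint(0, 32000, (2, 7))
tgt = torch.randint(0, 32000, (2, 5))
logits = model.decode(tgt, model.encode(src))
print(logits.shape)  # torch.Size([2, 5, 32000])
```

In a vanilla encoder-decoder transformer, by contrast, the decoder's cross-attention and any extra layers have no counterpart in an encoder-only masked LM, so they would remain randomly initialized; the tied design above is one way to avoid that mismatch.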

Suggested Citation

  • Wenbo Zhang & Xiao Li & Yating Yang & Rui Dong & Gongxu Luo, 2020. "Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation," Future Internet, MDPI, vol. 12(12), pages 1-13, November.
  • Handle: RePEc:gam:jftint:v:12:y:2020:i:12:p:215-:d:452591

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/12/12/215/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/12/12/215/
    Download Restriction: no

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jftint:v:12:y:2020:i:12:p:215-:d:452591. See general information about how to correct material in RePEc.

If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows you to link your profile to this item and to accept potential citations to this item that we are uncertain about.

We have no bibliographic references for this item. You can help add them by using this form.

If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.