
Vietnamese Sentiment Analysis under Limited Training Data Based on Deep Neural Networks

Author

Listed:
  • Huu-Thanh Duong
  • Tram-Anh Nguyen-Thi
  • Vinh Truong Hoang
  • Manman Yuan

Abstract

An annotated dataset is an essential requirement for developing an artificial intelligence (AI) system effectively, ensuring that the predictive models generalize and avoid overfitting. Lack of training data is a major barrier that prevents AI systems from expanding into domains with little or no training data. Building these datasets is a tedious and expensive task that depends on the domain and language, and it is an especially great challenge for low-resource languages. In this paper, we experiment with and evaluate various approaches to sentiment analysis so that they can still achieve high performance under limited training data. We use preprocessing techniques to clean and normalize the data, and we generate new samples from the limited training dataset using several text augmentation techniques, such as lexicon substitution, sentence shuffling, back translation, syntax-tree transformation, and embedding mixup. Several experiments have been performed with both well-known machine learning classifiers and deep learning models. We compare, analyze, and evaluate the results to indicate the advantages and disadvantages of the techniques for each approach. The experimental results show that the data augmentation techniques enhance the accuracy of the predictive models; this promises that smart systems can be applied widely in several domains under limited training data.
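The abstract names lexicon substitution and sentence shuffling among the augmentation techniques. As an illustration only, the sketch below shows how such augmentations are commonly implemented; the synonym lexicon, probability parameter, and function names here are hypothetical and are not taken from the paper, whose actual Vietnamese resources are not described on this page.

```python
import random

# Toy synonym lexicon for illustration; the paper's real lexicons
# (for Vietnamese) are not specified here.
SYNONYMS = {
    "good": ["great", "fine"],
    "bad": ["poor", "awful"],
}

def lexicon_substitution(tokens, p=0.3, rng=random):
    """Replace each token with a random synonym with probability p,
    producing a new training sample with the same sentiment label."""
    out = []
    for tok in tokens:
        if tok in SYNONYMS and rng.random() < p:
            out.append(rng.choice(SYNONYMS[tok]))
        else:
            out.append(tok)
    return out

def sentence_shuffle(text, rng=random):
    """Shuffle sentence order; document-level sentiment labels
    usually survive this transformation."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    rng.shuffle(sentences)
    return ". ".join(sentences) + "."

rng = random.Random(0)
print(" ".join(lexicon_substitution("the food was good".split(), p=1.0, rng=rng)))
print(sentence_shuffle("Service was slow. The food was good.", rng=rng))
```

Each augmented sample keeps the original label, so a small labeled set can be expanded cheaply before training a classifier.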

Suggested Citation

  • Huu-Thanh Duong & Tram-Anh Nguyen-Thi & Vinh Truong Hoang & Manman Yuan, 2022. "Vietnamese Sentiment Analysis under Limited Training Data Based on Deep Neural Networks," Complexity, Hindawi, vol. 2022, pages 1-14, June.
  • Handle: RePEc:hin:complx:3188449
    DOI: 10.1155/2022/3188449

    Download full text from publisher

    File URL: http://downloads.hindawi.com/journals/complexity/2022/3188449.pdf
    Download Restriction: no

    File URL: http://downloads.hindawi.com/journals/complexity/2022/3188449.xml
    Download Restriction: no

    File URL: https://libkey.io/10.1155/2022/3188449?utm_source=ideas
    LibKey link: if access is restricted and if your library uses this service, LibKey will redirect you to where you can use your library subscription to access this item


    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:hin:complx:3188449. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Mohamed Abdelhakeem (email available below). General contact details of provider: https://www.hindawi.com .

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.