
Prosodic Feature-Based Discriminatively Trained Low Resource Speech Recognition System

Authors

Listed:
  • Taniya Hasija

    (Chitkara University Institute of Engineering & Technology, Chitkara University, Rajpura 140401, Punjab, India)

  • Virender Kadyan

    (Speech and Language Research Centre, School of Computer Science, University of Petroleum and Energy Studies, Dehradun 248007, India)

  • Kalpna Guleria

    (Chitkara University Institute of Engineering & Technology, Chitkara University, Rajpura 140401, Punjab, India)

  • Abdullah Alharbi

    (Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia)

  • Hashem Alyami

    (Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia)

  • Nitin Goyal

    (Chitkara University Institute of Engineering & Technology, Chitkara University, Rajpura 140401, Punjab, India)

Abstract

Speech recognition has been an active field of research over the last few decades, since it facilitates better human–computer interaction. Even so, automatic speech recognition (ASR) systems for many native languages remain underdeveloped. Punjabi ASR is still in its infancy: most research has been conducted on adult speech, and far less work has been performed on Punjabi children's speech. This research aimed to build a prosodic feature-based automatic children's speech recognition system using discriminative modeling techniques. The Punjabi children's speech corpus poses various runtime challenges, such as acoustic variation across speakers' ages. To overcome these issues, out-of-domain data augmentation was implemented using a Tacotron-based text-to-speech synthesizer. Prosodic features were extracted from the Punjabi children's speech corpus, and selected prosodic features were coupled with Mel Frequency Cepstral Coefficient (MFCC) features before being fed to the ASR framework. The system modeling process investigated several approaches: Maximum Mutual Information (MMI), Boosted Maximum Mutual Information (bMMI), and feature-based Maximum Mutual Information (fMMI). Out-of-domain data augmentation was performed to enlarge the corpus; prosodic features were then also extracted from the extended corpus, and experiments were conducted on both individual and integrated prosodic-based acoustic features. The fMMI technique exhibited a 20% to 25% relative improvement in word error rate compared with the MMI and bMMI techniques. This was further enhanced using the augmented dataset and hybrid front-end features (MFCC + POV + F0 + voice quality), with a relative improvement of 13% over the earlier baseline system.
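
For illustration, the hybrid front end described above (MFCCs coupled with prosodic features) can be sketched in a few lines. This is a minimal sketch, not the authors' pipeline: it assumes librosa, uses pYIN's per-frame voicing probability as a stand-in for the POV feature, picks a placeholder pitch range for children's voices, and omits the voice-quality stream (e.g., jitter/shimmer), which would be appended the same way.

```python
import numpy as np
import librosa

def hybrid_features(wav_path, sr=16000, n_mfcc=13, hop_length=160):
    """Frame-level MFCC + F0 + POV matrix (one row per 10 ms frame)."""
    y, sr = librosa.load(wav_path, sr=sr)

    # 13 MFCCs over 25 ms windows with a 10 ms shift at 16 kHz.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=400, hop_length=hop_length)

    # pYIN returns F0 plus a per-frame voicing probability (a POV proxy).
    # The 100-600 Hz range is an assumed placeholder for child speech.
    f0, _, voiced_prob = librosa.pyin(y, fmin=100.0, fmax=600.0, sr=sr,
                                      frame_length=1024,
                                      hop_length=hop_length)
    f0 = np.nan_to_num(f0)  # unvoiced frames: F0 set to 0

    # Align frame counts, then stack the prosodic rows under the MFCCs.
    t = min(mfcc.shape[1], len(f0))
    feats = np.vstack([mfcc[:, :t], f0[:t], voiced_prob[:t]])
    return feats.T
```

In Kaldi-style recipes the same idea is realized by appending pitch/POV features to the MFCCs before acoustic-model training; a concatenated matrix like the one above would feed the MMI-family training stages sketched next.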
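
For context on the modeling techniques named in the abstract, the MMI and boosted-MMI criteria are standard in the ASR literature; the following uses common-usage notation (κ is an acoustic scale, b a boosting factor) and is not a formula quoted from the paper:

```latex
% MMI: raise the posterior of the reference transcription s_r for
% acoustics O_r against all competing hypotheses s.
\mathcal{F}_{\mathrm{MMI}}(\lambda)=\sum_{r=1}^{R}
  \log\frac{p_\lambda(O_r\mid s_r)^{\kappa}\,P(s_r)}
           {\sum_{s} p_\lambda(O_r\mid s)^{\kappa}\,P(s)}

% bMMI: additionally boost hypotheses with low accuracy A(s, s_r)
% against the reference, enlarging the decision margin.
\mathcal{F}_{\mathrm{bMMI}}(\lambda)=\sum_{r=1}^{R}
  \log\frac{p_\lambda(O_r\mid s_r)^{\kappa}\,P(s_r)}
           {\sum_{s} p_\lambda(O_r\mid s)^{\kappa}\,P(s)\,e^{-b\,A(s,s_r)}}
```

fMMI (often expanded as feature-space MMI) applies the same criterion to learn a feature-level transform rather than the model parameters. "Relative improvement" in word error rate means (WER_old − WER_new) / WER_old; for example, dropping from 20% to 15% WER is a 25% relative improvement, consistent with the figures reported above.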

Suggested Citation

  • Taniya Hasija & Virender Kadyan & Kalpna Guleria & Abdullah Alharbi & Hashem Alyami & Nitin Goyal, 2022. "Prosodic Feature-Based Discriminatively Trained Low Resource Speech Recognition System," Sustainability, MDPI, vol. 14(2), pages 1-22, January.
  • Handle: RePEc:gam:jsusta:v:14:y:2022:i:2:p:614-:d:719024

    Download full text from publisher

    File URL: https://www.mdpi.com/2071-1050/14/2/614/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/2071-1050/14/2/614/
    Download Restriction: no

    Citations

Citations are extracted by the CitEc Project; subscribe to its RSS feed for this item.


    Cited by:

    1. Qiao Chen & Wenfeng Zhao & Qin Wang & Yawen Zhao, 2022. "The Sustainable Development of Intangible Cultural Heritage with AI: Cantonese Opera Singing Genre Classification Based on CoGCNet Model in China," Sustainability, MDPI, vol. 14(5), pages 1-20, March.
