
Multi-Angle Lipreading with Angle Classification-Based Feature Extraction and Its Application to Audio-Visual Speech Recognition

Authors

  • Shinnosuke Isobe

    (Graduate School of Natural Science and Technology, Gifu University, 1-1 Yanagido, Gifu 501-1193, Japan)

  • Satoshi Tamura

    (Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu 501-1193, Japan)

  • Satoru Hayamizu

    (Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu 501-1193, Japan)

  • Yuuto Gotoh

    (Ricoh Company, Ltd., 2-7-1 Izumi, Ebina, Kanagawa 243-0460, Japan)

  • Masaki Nose

    (Ricoh Company, Ltd., 2-7-1 Izumi, Ebina, Kanagawa 243-0460, Japan)

Abstract

Recently, automatic speech recognition (ASR) and visual speech recognition (VSR) have been widely researched owing to developments in deep learning. Most VSR research focuses only on frontal face images. In real scenes, however, a VSR system must correctly recognize spoken content not only from frontal faces but also from diagonal or profile faces. In this paper, we propose a novel VSR method that is applicable to faces captured at any angle. First, view classification is carried out to estimate the face angle. Based on the result, features are extracted using the best combination of pre-trained feature extraction models, and lipreading is then performed on those features. We also developed an audio-visual speech recognition (AVSR) system that combines the VSR with conventional ASR: audio results are obtained from the ASR, and the audio and visual results are then combined by decision fusion. We evaluated our methods on OuluVS2, a multi-angle audio-visual database, and confirmed that our approach achieves the best performance among conventional VSR schemes on a phrase classification task. In addition, our AVSR results are better than those of ASR or VSR alone.
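For illustration only, the minimal Python sketch below mirrors the pipeline the abstract describes: a view classifier selects an angle-specific pre-trained feature extractor, the resulting features are scored by a lipreading classifier, and the audio and visual scores are combined by decision fusion. Every name here (classify_angle, EXTRACTORS, vsr_scores, fuse) and the interpolation weight lam are hypothetical stand-ins under assumed interfaces, not the authors' actual models or fusion rule.

    import numpy as np

    # Hypothetical stand-ins for the angle-specific pre-trained feature
    # extractors; a real system would use deep models trained per view.
    def extract_frontal(frames):
        return frames.mean(axis=0)  # placeholder: mean-pool the frame features

    def extract_profile(frames):
        return frames.mean(axis=0)

    EXTRACTORS = {"frontal": extract_frontal, "profile": extract_profile}

    def classify_angle(frames):
        # Placeholder view classifier; the paper estimates the face angle
        # with a trained model before choosing an extractor.
        return "frontal"

    def vsr_scores(frames, n_classes):
        # Angle-aware VSR: pick the extractor matching the estimated angle,
        # then score each phrase class from the pooled features.
        feats = EXTRACTORS[classify_angle(frames)](frames)
        rng = np.random.default_rng(0)  # dummy classifier weights
        W = rng.standard_normal((n_classes, feats.size))
        return W @ feats                # per-class scores

    def fuse(asr_scores, visual_scores, lam=0.7):
        # Decision fusion: interpolate per-class scores from the audio and
        # visual recognizers; lam is an assumed audio weight, not the paper's.
        a = np.asarray(asr_scores, dtype=float)
        v = np.asarray(visual_scores, dtype=float)
        return int(np.argmax(lam * a + (1.0 - lam) * v))

    # Toy usage: 75 mouth-region frames with 64-dim features, 10 phrase classes.
    frames = np.random.rand(75, 64)
    fused_class = fuse(np.random.rand(10), vsr_scores(frames, n_classes=10))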

Suggested Citation

  • Shinnosuke Isobe & Satoshi Tamura & Satoru Hayamizu & Yuuto Gotoh & Masaki Nose, 2021. "Multi-Angle Lipreading with Angle Classification-Based Feature Extraction and Its Application to Audio-Visual Speech Recognition," Future Internet, MDPI, vol. 13(7), pages 1-12, July.
  • Handle: RePEc:gam:jftint:v:13:y:2021:i:7:p:182-:d:595038

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/13/7/182/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/13/7/182/
    Download Restriction: no

    Corrections

    All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:gam:jftint:v:13:y:2021:i:7:p:182-:d:595038. See general information about how to correct material in RePEc.

    If you have authored this item and are not yet registered with RePEc, we encourage you to register here. This allows us to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.

    We have no bibliographic references for this item. You can help add them by using this form.

    If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.

    For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: MDPI Indexing Manager (email available below). General contact details of provider: https://www.mdpi.com.

    Please note that corrections may take a couple of weeks to filter through the various RePEc services.

    IDEAS is a RePEc service. RePEc uses bibliographic data supplied by the respective publishers.