
Effective Monoaural Speech Separation through Convolutional Top-Down Multi-View Network

Authors

Listed:
  • Aye Nyein Aung

    (Department of Electrical Engineering, National Chi Nan University, Nantou 545, Taiwan)

  • Che-Wei Liao

    (Department of Electrical Engineering, National Chi Nan University, Nantou 545, Taiwan)

  • Jeih-Weih Hung

    (Department of Electrical Engineering, National Chi Nan University, Nantou 545, Taiwan)

Abstract

Speech separation, sometimes known as the “cocktail party problem”, is the process of separating individual speech signals from an audio mixture that contains ambient noise and several speakers. The goal is to extract the target speech from this complicated sound scene and either make it easier to understand or improve its quality for subsequent processing. Speech separation on overlapping audio is important for many speech-processing tasks, including natural language processing, automatic speech recognition, and intelligent personal assistants. Recent speech separation algorithms are often built on a deep neural network (DNN) structure that learns the complex relationship between the speech mixture and a specific speech source of interest. DNN-based speech separation algorithms outperform conventional statistics-based methods, although they typically require substantial computation and/or a larger model size. This study presents a new end-to-end speech separation network called ESC-MASD-Net (effective speaker separation through convolutional multi-view attention and SuDoRM-RF network), which has relatively few model parameters compared with state-of-the-art speech separation architectures. The network is partly inspired by the SuDoRM-RF++ network, which uses multiple time-resolution features obtained through successive downsampling and resampling for effective speech separation. ESC-MASD-Net incorporates multi-view attention and residual conformer modules into SuDoRM-RF++, and its U-Convolutional block is further refined with a conformer layer. Experiments on the WHAM! dataset show that ESC-MASD-Net significantly outperforms SuDoRM-RF++ in the SI-SDRi metric, and the added conformer layer further improves the performance of ESC-MASD-Net.
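To make the encoder/separator/decoder pipeline described above concrete, the following is a minimal, hypothetical PyTorch sketch. It is not the authors' implementation of ESC-MASD-Net: the module names, layer sizes, and hyperparameters are invented for illustration, the downsample/resample block only loosely mirrors the SuDoRM-RF-style U-Convolutional idea, and an ordinary self-attention layer stands in for the paper's multi-view attention and residual conformer modules.

    # Hypothetical sketch (not the authors' code): a toy mixture -> encoder ->
    # stacked downsample/resample blocks -> attention -> masks -> decoder pipeline.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DownUpBlock(nn.Module):
        """One U-Conv-style block: downsample, process, resample back, add residual."""
        def __init__(self, channels: int):
            super().__init__()
            self.down = nn.Conv1d(channels, channels, kernel_size=5, stride=2, padding=2)
            self.proc = nn.Sequential(
                nn.Conv1d(channels, channels, kernel_size=5, padding=2, groups=channels),
                nn.PReLU(),
                nn.GroupNorm(1, channels),
            )

        def forward(self, x):
            T = x.shape[-1]
            y = self.proc(self.down(x))                   # coarser time resolution
            y = F.interpolate(y, size=T, mode="nearest")  # resample to original length
            return x + y                                  # residual connection

    class TinySeparator(nn.Module):
        """Toy two-speaker separator: conv encoder, stacked blocks, masks, conv decoder."""
        def __init__(self, channels=128, num_blocks=4, num_speakers=2, win=21, stride=10):
            super().__init__()
            self.encoder = nn.Conv1d(1, channels, kernel_size=win, stride=stride, padding=win // 2)
            self.blocks = nn.Sequential(*[DownUpBlock(channels) for _ in range(num_blocks)])
            # Plain self-attention over time; a stand-in for the multi-view attention module.
            self.attn = nn.MultiheadAttention(channels, num_heads=4, batch_first=True)
            self.mask = nn.Conv1d(channels, channels * num_speakers, kernel_size=1)
            self.decoder = nn.ConvTranspose1d(channels, 1, kernel_size=win, stride=stride, padding=win // 2)
            self.num_speakers = num_speakers
            self.channels = channels

        def forward(self, wav):                            # wav: (batch, samples)
            x = self.encoder(wav.unsqueeze(1))             # (batch, C, frames)
            x = self.blocks(x)
            a, _ = self.attn(x.transpose(1, 2), x.transpose(1, 2), x.transpose(1, 2))
            x = x + a.transpose(1, 2)
            masks = torch.relu(self.mask(x))               # (batch, C * spk, frames)
            masks = masks.view(-1, self.num_speakers, self.channels, x.shape[-1])
            est = [self.decoder(x * masks[:, s]) for s in range(self.num_speakers)]
            return torch.stack(est, dim=1).squeeze(2)      # (batch, spk, ~samples)

    if __name__ == "__main__":
        mixture = torch.randn(2, 16000)                    # two 1-second mixtures at 16 kHz
        print(TinySeparator()(mixture).shape)              # roughly (2, 2, 16000)

In a real system of this kind, each estimated source would be compared with its reference waveform using an SI-SDR-based training objective; the sketch omits training entirely.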

Suggested Citation

  • Aye Nyein Aung & Che-Wei Liao & Jeih-Weih Hung, 2024. "Effective Monoaural Speech Separation through Convolutional Top-Down Multi-View Network," Future Internet, MDPI, vol. 16(5), pages 1-16, April.
  • Handle: RePEc:gam:jftint:v:16:y:2024:i:5:p:151-:d:1384838

    Download full text from publisher

    File URL: https://www.mdpi.com/1999-5903/16/5/151/pdf
    Download Restriction: no

    File URL: https://www.mdpi.com/1999-5903/16/5/151/
    Download Restriction: no
