Author
Listed:
- A. P. Yeshwanth Balaji (Amrita Vishwa Vidyapeetham)
- T. R. Eshwanth Karti (Amrita Vishwa Vidyapeetham)
- K. Nithish Ariyha (Amrita Vishwa Vidyapeetham)
- J. Vikash (Amrita Vishwa Vidyapeetham)
- G. Jyothish Lal (Amrita Vishwa Vidyapeetham)
Abstract
Dysarthric speech poses significant challenges to modern speech processing systems due to its inherently low intelligibility, irregular prosody, and atypical articulation patterns. Traditional methods that rely on intermediate automatic speech recognition (ASR) stages often perform poorly under such conditions, especially when speech is severely degraded. In this work, we propose a fully end-to-end enhancement pipeline that directly improves dysarthric speech quality and intelligibility using GAN-based models, bypassing the limitations of transcription-based systems. We employ a MelSEGAN architecture coupled with a SepFormer to address spectral and temporal distortions in the speech signal. Through a comparative analysis of preprocessing strategies, we find that dynamic time warping (DTW) in conjunction with variational mode decomposition (VMD) offers more stable and intelligible outputs than conventional voice activity detection (VAD), particularly in cases of temporally misaligned or fragmented speech. DTW not only enables better convergence during training but also yields clearer formant structures and fewer background artifacts in the enhanced speech. Further, we extend our pipeline with Model-Agnostic Meta-Learning (MAML) to improve speaker-specific adaptation. The MAML-augmented models demonstrate superior generalization and refinement of harmonic features, especially when paired with DTW-based preprocessing. Additionally, we investigate an alternative enhancement path that combines a UNet-based encoder-decoder with a HiFi-GAN vocoder; early qualitative assessments suggest that this hybrid model produces speech with greater naturalness and improved intelligibility, offering a promising direction for future development. Overall, our findings highlight the importance of robust temporal preprocessing and adaptive learning strategies in building effective enhancement systems for disordered speech.
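To make the DTW preprocessing idea concrete, the following minimal sketch shows how a dysarthric utterance could be time-aligned to a healthy-speaker reference before enhancement. It assumes librosa for feature extraction and uses placeholder file names; it is an illustration of the technique, not the chapter's implementation.

```python
import librosa

# Load a dysarthric utterance and a healthy-speaker reference
# (file names are placeholders, not from the chapter).
dys, sr = librosa.load("dysarthric.wav", sr=16000)
ref, _ = librosa.load("reference.wav", sr=16000)

# MFCCs serve as alignment features; the chapter's exact features may differ.
mfcc_dys = librosa.feature.mfcc(y=dys, sr=sr, n_mfcc=13)
mfcc_ref = librosa.feature.mfcc(y=ref, sr=sr, n_mfcc=13)

# DTW returns an accumulated cost matrix D and a warping path wp of
# (dysarthric frame, reference frame) index pairs; the path can be used
# to re-time the dysarthric features before they enter the enhancer.
D, wp = librosa.sequence.dtw(X=mfcc_dys, Y=mfcc_ref, metric="euclidean")
```

The speaker-adaptation step can likewise be pictured as a meta-learning loop: a few gradient steps on a speaker's support utterances, then a meta-update from the query loss. Below is a generic first-order MAML sketch in PyTorch with a hypothetical `enhancer` network and loss function, again an assumption-laden illustration rather than the authors' code.

```python
import copy
import torch

def fomaml_meta_step(enhancer, loss_fn, speaker_tasks,
                     inner_lr=1e-3, meta_lr=1e-4, inner_steps=3):
    """One first-order MAML update over a batch of per-speaker tasks.

    Each task is (support, query), where support and query are
    (noisy, clean) tensor pairs for a single dysarthric speaker.
    """
    meta_grads = [torch.zeros_like(p) for p in enhancer.parameters()]
    for support, query in speaker_tasks:
        # Inner loop: adapt a copy of the model to this speaker.
        adapted = copy.deepcopy(enhancer)
        opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            noisy, clean = support
            opt.zero_grad()
            loss_fn(adapted(noisy), clean).backward()
            opt.step()
        # Outer loss on held-out query utterances of the same speaker.
        noisy_q, clean_q = query
        q_loss = loss_fn(adapted(noisy_q), clean_q)
        grads = torch.autograd.grad(q_loss, tuple(adapted.parameters()))
        for acc, g in zip(meta_grads, grads):
            acc += g.detach()
    # First-order meta-update: apply the averaged query gradients
    # directly to the original (meta) parameters.
    with torch.no_grad():
        for p, g in zip(enhancer.parameters(), meta_grads):
            p -= meta_lr * g / len(speaker_tasks)
```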
Suggested Citation
A. P. Yeshwanth Balaji & T. R. Eshwanth Karti & K. Nithish Ariyha & J. Vikash & G. Jyothish Lal, 2025.
"Enhancing Dysarthric Speech for Improved Clinical Communication: A Deep Learning Approach,"
Springer Series in Reliability Engineering,
Springer.
Handle:
RePEc:spr:ssrchp:978-3-031-98728-1_1
DOI: 10.1007/978-3-031-98728-1_1
Download full text from publisher
To our knowledge, this item is not available for download. To find out whether it is available, there are three options:
1. Check below whether another version of this item is available online.
2. Check on the provider's web page whether it is in fact available.
3. Perform a search for a similarly titled item that would be available.
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:spr:ssrchp:978-3-031-98728-1_1. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Sonal Shukla or Springer Nature Abstracting and Indexing (email available below). General contact details of provider: http://www.springer.com.
Please note that corrections may take a couple of weeks to filter through
the various RePEc services.