Author
Listed:
- Zhao, Lu
- Song, Xuding
- Czap, László
Abstract
This paper presents a comprehensive virtual animated articulation model of the Shaanxi Xi'an dialect based on the dominance model, designed for integration into Speech Assistant (SA) systems to support deaf-mute students and learners of this dialect. A detailed multimodal database has been developed, capturing the three-dimensional movements of the tongue, lips, and jaw for both vowels and consonants with precise temporal resolution. The model represents not only isolated phonemes but also systematically incorporates the interactions between consecutive phonemes, their articulatory configurations, and dynamic timing patterns, enabling realistic simulation of natural speech production. Algorithms synchronize articulatory movements with the speech signal, ensuring real-time animation that accurately reflects subtle variations in pronunciation and coarticulation effects. This approach allows learners to visualize the complete articulatory process, providing an intuitive and interactive tool for language acquisition, speech training, and therapy applications. Beyond the immediate educational context, the methodology offers a scalable framework for extending real-time articulation animation to other languages, supporting cross-linguistic studies of phonetic articulation, enhancing human-computer interaction, and improving accessibility in speech-based systems. The proposed model thus combines linguistic theory, computational modeling, and animation techniques to deliver both a practical educational tool and a foundation for further research in speech visualization technologies.
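The dominance model named in the title is described here only at a high level. In the classic Cohen–Massaro formulation of coarticulation, each phoneme supplies a target value for every articulatory parameter (e.g. jaw opening, lip rounding), and a dominance function that decays exponentially away from the phoneme's temporal centre weights how strongly that target pulls the trajectory at each instant. The following minimal Python sketch illustrates that weighted blending under those assumptions; the function names, parameter values, and the two-phoneme jaw-opening example are hypothetical and are not taken from the paper.

import numpy as np

def dominance(t, center, alpha, theta, c=1.0):
    # Dominance decays exponentially with distance from the phoneme's temporal centre.
    return alpha * np.exp(-theta * np.abs(t - center) ** c)

def articulator_trajectory(t, segments):
    # Blend per-phoneme targets into one trajectory by dominance-weighted averaging.
    # segments: list of dicts with keys 'target', 'center', 'alpha', 'theta' (all hypothetical).
    num = np.zeros_like(t, dtype=float)
    den = np.zeros_like(t, dtype=float)
    for seg in segments:
        d = dominance(t, seg['center'], seg['alpha'], seg['theta'])
        num += d * seg['target']
        den += d
    return num / np.maximum(den, 1e-9)  # guard against division by zero far from all segments

# Illustrative two-phoneme sequence: an open vowel followed by a closed consonant.
t = np.linspace(0.0, 0.5, 200)  # time axis in seconds
segments = [
    {'target': 0.8, 'center': 0.10, 'alpha': 1.0, 'theta': 20.0},
    {'target': 0.1, 'center': 0.35, 'alpha': 1.0, 'theta': 20.0},
]
jaw_opening = articulator_trajectory(t, segments)

Because neighbouring dominance functions overlap, the resulting trajectory glides between targets rather than jumping between them, which is the coarticulation effect the animated articulation model is meant to reproduce in real time.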
Suggested Citation
Zhao, Lu & Song, Xuding & Czap, László, 2025.
"Facial Animation Design of Chinese Shaanxi Xi'an Dialect Based on Dominance Model in Speech Assistant System,"
GBP Proceedings Series, Scientific Open Access Publishing, vol. 17, pages 17-24.
Handle:
RePEc:axf:gbppsa:v:17:y:2025:i::p:17-24
Download full text from publisher
Corrections
All material on this site has been provided by the respective publishers and authors. You can help correct errors and omissions. When requesting a correction, please mention this item's handle: RePEc:axf:gbppsa:v:17:y:2025:i::p:17-24. See general information about how to correct material in RePEc.
If you have authored this item and are not yet registered with RePEc, we encourage you to do it here. This allows you to link your profile to this item. It also allows you to accept potential citations to this item that we are uncertain about.
We have no bibliographic references for this item. You can help add them by using this form.
If you know of missing items citing this one, you can help us create those links by adding the relevant references in the same way as above, for each referring item. If you are a registered author of this item, you may also want to check the "citations" tab in your RePEc Author Service profile, as there may be some citations waiting for confirmation.
For technical questions regarding this item, or to correct its authors, title, abstract, bibliographic or download information, contact: Yuchi Liu (email available below). General contact details of provider: https://soapubs.com/index.php/GBPPS .
Please note that corrections may take a couple of weeks to filter through the various RePEc services.