Author
Listed:
- Maitreyee Wairagkar
(University of California, Davis)
- Nicholas S. Card
(University of California, Davis)
- Tyler Singer-Clark
(University of California, Davis)
- Xianda Hou
(University of California, Davis)
- Carrina Iacobacci
(University of California, Davis)
- Lee M. Miller
(University of California, Davis)
- Leigh R. Hochberg
(Brown University
VA Providence Healthcare
Harvard Medical School)
- David M. Brandman
(University of California, Davis)
- Sergey D. Stavisky
(University of California, Davis)
Abstract
Brain–computer interfaces (BCIs) have the potential to restore communication for people who have lost the ability to speak owing to a neurological disease or injury. BCIs have been used to translate the neural correlates of attempted speech into text [1–3]. However, text communication fails to capture the nuances of human speech, such as prosody and immediately hearing one’s own voice. Here we demonstrate a brain-to-voice neuroprosthesis that instantaneously synthesizes voice with closed-loop audio feedback by decoding neural activity from 256 microelectrodes implanted into the ventral precentral gyrus of a man with amyotrophic lateral sclerosis and severe dysarthria. We overcame the challenge of lacking ground-truth speech for training the neural decoder and were able to accurately synthesize his voice. Along with phonemic content, we were also able to decode paralinguistic features from intracortical activity, enabling the participant to modulate his BCI-synthesized voice in real time to change intonation and sing short melodies. These results demonstrate the feasibility of enabling people with paralysis to speak intelligibly and expressively through a BCI.
Suggested Citation
Maitreyee Wairagkar & Nicholas S. Card & Tyler Singer-Clark & Xianda Hou & Carrina Iacobacci & Lee M. Miller & Leigh R. Hochberg & David M. Brandman & Sergey D. Stavisky, 2025.
"An instantaneous voice-synthesis neuroprosthesis,"
Nature, Nature, vol. 644(8075), pages 145-152, August.
Handle:
RePEc:nat:nature:v:644:y:2025:i:8075:d:10.1038_s41586-025-09127-3
DOI: 10.1038/s41586-025-09127-3