We have already introduced numerous applications of brain-computer interfaces (BCIs), from playing games and controlling virtual avatars to Elon Musk's controversial company Neuralink, all of which reflect the popular imagination around 'full dive' and 'mind control'. The original motivation and core application of BCIs, however, has always been medical: to break through physical limitations and, through 'thinking to communicate', restore the ability to communicate to patients who cannot move or speak.
The research team hopes to help more ALS patients. (Source: UC Davis)
A new BCI technology developed at UC Davis Health has overcome the communication barrier faced by patients with ALS (amyotrophic lateral sclerosis) who have lost the ability to speak. Sensors implanted in the patient's brain interpret neural signals in real time, and AI converts those signals into a personalized synthetic voice, allowing ALS patients to 'speak' again with an astonishing accuracy of 97%. How is this achieved?
The subject of this experiment is a 45-year-old male, Casey Harrell, whose life has been drastically changed by ALS. As the disease progressed, Casey not only found it difficult to move his limbs but also gradually lost the ability to speak.
In this experiment, researchers used BrainGate2, a BCI system developed for medical and academic research. They implanted four sensor arrays in the left precentral gyrus of Casey's brain to record activity from 256 cortical electrodes. AI then decoded these neural signals into phonemes (the basic units of speech), which were assembled into complete words.
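Conceptually, the final decoding step groups a stream of classified phonemes into words. Below is a minimal illustrative sketch in Python, assuming a tiny hand-made lexicon and pre-detected word boundaries; the phoneme labels and lexicon are invented for illustration, and the actual system uses neural networks and a large vocabulary model rather than a lookup table.

```python
# Toy sketch: turn a stream of decoded phonemes into words.
# ARPAbet-style phoneme labels and this two-word lexicon are
# illustrative assumptions, not details from the UC Davis study.
LEXICON = {
    ("HH", "AH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
}

def phonemes_to_words(phonemes, word_boundaries):
    """Group a flat phoneme stream into words at the given boundaries."""
    words, start = [], 0
    for end in word_boundaries:
        chunk = tuple(phonemes[start:end])
        words.append(LEXICON.get(chunk, "<unk>"))  # fall back when unseen
        start = end
    return " ".join(words)

# Decoded stream for "hello world", with a word boundary after phoneme 4
stream = ["HH", "AH", "L", "OW", "W", "ER", "L", "D"]
print(phonemes_to_words(stream, word_boundaries=[4, 8]))  # → hello world
```

In practice, phoneme classification is uncertain, so real decoders score many candidate word sequences with a language model instead of doing an exact lookup.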
With BCI, there's no need for traditional keyboard input. (Source: UC Davis)
To truly enable Casey to 'speak' again, the research team used recordings of his voice from before he became ill to train an AI to generate a synthetic voice closely resembling his original speech, so the decoded text could be 'spoken' aloud instantly.
In the first training session, the system learned a 50-word vocabulary in just 30 minutes with 99.6% accuracy. In the second session, the vocabulary was expanded to 125,000 words after an additional 1.4 hours of training, achieving a word accuracy of 90.2%. With continued data collection, accuracy has reached 97.5%, making it the most precise among comparable systems to date.
The well-known physicist Stephen Hawking 'spoke' using Intel's Assistive Context-Aware Toolkit (ACAT), which relies on muscle-movement detection and word prediction to help ALS patients type sentences, which are then output through speech synthesis. Even the latest version, ACAT 3.0, released by Intel in March with non-invasive BCI support, still requires users to enter what they want to say on a virtual keyboard. Although an implanted system carries relatively higher risks and costs, it offers patients a far more liberating way to communicate.
In this video, Casey communicates his emotions through the device. After being fitted with the BCI, he was able to return to society and do his job with fewer physical limitations. Casey's story is just the beginning: as BCI technology becomes more widespread and its risks are reduced, more ALS patients will be able to regain their voices.