Translating neural signals to text using a Brain-Computer Interface

Brain-Computer Interfaces (BCIs) may help patients whose communication abilities are impaired by neurodegenerative disease produce text or speech through direct neural processing. However, their practical realization has proven difficult due to limitations in the speed, accuracy, and generalizability of existing interfaces. To this end, we aim to create a BCI that decodes text directly from neural signals. We implement a framework that first isolates frequency bands in the input signal that carry differential information about the production of different phonemic classes. These bands form a feature set that feeds into an LSTM, which outputs, at each time point, a probability distribution over all phonemes uttered by the subject. Finally, a particle filtering algorithm temporally smooths these probabilities, incorporating prior knowledge of the English language, to output the text corresponding to the decoded word. Further, unlike previous studies, we do not constrain the reconstructed word to come from a fixed bag-of-words. The empirical success of our proposed approach offers promise for the use of such an interface by patients in unfettered, naturalistic environments.
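The paper does not include code, so the sketch below is only a rough illustration of the pipeline described in the abstract, not the authors' implementation. It assumes PyTorch, NumPy, and SciPy, and every concrete choice is hypothetical: the names (`band_power_features`, `PhonemeLSTM`, `particle_smooth`), the frequency bands in `BANDS`, the 40 phoneme classes, the hidden size, the particle count, and the uniform transition matrix standing in for the English-language prior.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, filtfilt

# Hypothetical frequency bands (Hz); the paper's exact bands are not specified here.
BANDS = [(4, 8), (8, 15), (15, 30), (30, 70), (70, 150)]

def band_power_features(signal, fs=1000, bands=BANDS):
    """Band-pass each channel in every band and use the rectified output as a
    crude per-band feature. `signal` is (time, channels); returns
    (time, channels * len(bands))."""
    feats = []
    for lo, hi in bands:
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        feats.append(np.abs(filtfilt(b, a, signal, axis=0)))
    return np.concatenate(feats, axis=1)

class PhonemeLSTM(nn.Module):
    """LSTM that emits a per-timestep log-probability distribution over phonemes."""
    def __init__(self, n_features, n_phonemes, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_phonemes)

    def forward(self, x):                       # x: (batch, time, features)
        out, _ = self.lstm(x)
        return torch.log_softmax(self.head(out), dim=-1)

def particle_smooth(log_probs, transition, n_particles=200, seed=0):
    """Toy bootstrap particle filter over phoneme paths: each particle is
    propagated by a transition prior (a stand-in for the English-language
    model) and resampled by the LSTM emission probabilities at every step.
    Returns a (time, n_particles) array of surviving phoneme paths."""
    rng = np.random.default_rng(seed)
    T, K = log_probs.shape
    emit = np.exp(log_probs)
    states = rng.integers(0, K, size=n_particles)
    paths = [states.copy()]
    for t in range(1, T):
        states = np.array([rng.choice(K, p=transition[s]) for s in states])
        weights = emit[t, states]
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        states = states[idx]
        paths = [p[idx] for p in paths] + [states.copy()]
    return np.stack(paths, axis=0)

# Toy usage with random data standing in for real neural recordings.
raw = np.random.randn(1000, 64)                             # 1 s of 64-channel data at 1 kHz
x = torch.tensor(band_power_features(raw), dtype=torch.float32).unsqueeze(0)
model = PhonemeLSTM(n_features=x.shape[-1], n_phonemes=40)  # 40 phoneme classes assumed
log_probs = model(x)                                        # (1, 1000, 40)
transition = np.full((40, 40), 1.0 / 40)                    # uniform prior in place of a language model
paths = particle_smooth(log_probs[0].detach().numpy(), transition)
```

In the paper's setting the transition prior would come from an English-language model and the most frequent surviving particle path would give the decoded phoneme sequence, and hence the word; here a uniform prior and random inputs are used only so the snippet runs end to end.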

