no code implementations • 31 Jul 2023 • Mohammad Jalilpour Monesi, Jonas Vanthornhout, Hugo Van hamme, Tom Francart
Our results show that vowel-consonant onsets outperform onsets of any phone in both tasks, suggesting that neural tracking of the vowel-consonant distinction is present in the EEG to some degree.
no code implementations • 3 Feb 2023 • Corentin Puffay, Bernd Accou, Lies Bollens, Mohammad Jalilpour Monesi, Jonas Vanthornhout, Hugo Van hamme, Tom Francart
Linear models are currently used to relate the EEG recording to the corresponding speech signal.
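As a minimal sketch of such a linear backward model (not the papers' actual pipeline): EEG channels are time-lagged, stacked into a design matrix, and regressed onto the speech envelope with ridge regression. The synthetic data, the roughly 0-400 ms lag window, and the regularization strength below are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative synthetic data: 64-channel EEG at 64 Hz and a speech envelope.
# Real pipelines would load preprocessed EEG and an extracted envelope instead.
rng = np.random.default_rng(0)
fs, n_samples, n_channels = 64, 64 * 60, 64
envelope = rng.standard_normal(n_samples)          # stand-in speech envelope
eeg = rng.standard_normal((n_samples, n_channels))  # stand-in recorded EEG

def lagged(signal, lags):
    """Stack time-lagged copies of each channel into one design matrix.
    (Circular shift via np.roll is used for brevity; a real pipeline
    would trim the wrapped-around edge samples.)"""
    return np.hstack([np.roll(signal, lag, axis=0) for lag in lags])

# Integer sample lags covering roughly 0-400 ms at 64 Hz (assumed window).
X = lagged(eeg, range(0, 26))

# Split in time, then fit a regularized linear backward model.
split = n_samples // 2
model = Ridge(alpha=1.0).fit(X[:split], envelope[:split])
reconstruction = model.predict(X[split:])

# Pearson correlation between reconstructed and actual envelope is the
# usual evaluation metric for linear decoders of this kind.
r = np.corrcoef(reconstruction, envelope[split:])[0, 1]
print(f"reconstruction correlation: {r:.3f}")
```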
2 code implementations • 17 Jun 2021 • Mohammad Jalilpour Monesi, Bernd Accou, Tom Francart, Hugo Van hamme
Decoding the speech signal that a person is listening to from the human brain via electroencephalography (EEG) can help us understand how our auditory system works.
no code implementations • 14 May 2021 • Bernd Accou, Mohammad Jalilpour Monesi, Hugo Van hamme, Tom Francart
The accuracy of the model's match/mismatch predictions can be used as a proxy for speech intelligibility without subject-specific (re)training.
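Once a model assigns each EEG segment a similarity score for its true (matched) speech segment and for an imposter (mismatched) one, the match/mismatch accuracy is simply the fraction of segments where the matched score wins. The sketch below uses hypothetical decoder scores, not outputs of the authors' model.

```python
import numpy as np

def match_mismatch_accuracy(scores_matched, scores_mismatched):
    """Fraction of segments where the model scores the true (matched)
    speech segment higher than the imposter (mismatched) one."""
    scores_matched = np.asarray(scores_matched)
    scores_mismatched = np.asarray(scores_mismatched)
    return float(np.mean(scores_matched > scores_mismatched))

# Hypothetical per-segment similarity scores (e.g., correlations) for
# 10 EEG segments paired with their true and an imposter speech segment.
matched = [0.21, 0.15, 0.30, 0.05, 0.18, 0.25, 0.12, 0.09, 0.22, 0.17]
mismatched = [0.04, 0.10, 0.02, 0.08, 0.01, 0.06, 0.15, 0.03, 0.05, 0.02]
print(match_mismatch_accuracy(matched, mismatched))  # 0.8
```

Because chance level is 50% on this binary task, accuracies well above it indicate the model has picked up a genuine EEG-speech relationship, which is what lets it serve as an intelligibility proxy without subject-specific retraining.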