Seeing Voices in Noise: A Study of Audiovisual-Enhanced Vocoded Speech Intelligibility in Cochlear Implant Simulation

26 Sep 2019  ·  Rung-Yu Tseng, Tao-Wei Wang, Szu-Wei Fu, Yu Tsao, Chia-Ying Lee ·

Speech perception is key to verbal communication. For people with hearing loss, the ability to recognize speech is limited, particularly in noisy environments. This study examined whether the intelligibility of vocoded speech in cochlear implant (CI) simulation can be improved through two methods: speech enhancement (SE) and audiovisual integration. A fully convolutional neural network (FCN) trained with an intelligibility-oriented objective function was recently proposed and shown to be an effective advanced SE approach for improving speech intelligibility. In addition, audiovisual integration is reported to yield better speech comprehension than audio-only input. An experiment was designed to test speech intelligibility with tone-vocoded speech in CI simulation in a group of normal-hearing listeners. The results confirmed the effectiveness of both FCN-based SE and audiovisual integration, and suggest that combining the two in a CI processor could increase speech intelligibility for CI users under noisy conditions.
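The tone-vocoded stimuli mentioned above are typically produced by splitting the signal into a few frequency bands, extracting each band's temporal envelope, and using it to modulate a sine carrier at the band center. The sketch below illustrates this standard channel-vocoder pipeline; the channel count, filter order, and band edges are illustrative assumptions, not the paper's reported settings.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def tone_vocode(x, fs, n_channels=4, lo=100.0, hi=7000.0):
    """Tone-vocode a signal: split into log-spaced bands, extract each
    band's Hilbert envelope, and modulate a sine at the band center.
    Illustrative sketch only; parameters are assumed, not taken from
    the paper."""
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced band edges
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    for f_lo, f_hi in zip(edges[:-1], edges[1:]):
        # Bandpass-filter the input into one analysis channel
        sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)
        env = np.abs(hilbert(band))        # temporal envelope of the band
        fc = np.sqrt(f_lo * f_hi)          # geometric band center
        out += env * np.sin(2 * np.pi * fc * t)
    return out
```

With few channels (e.g. 4), the output preserves envelope cues but discards fine spectral structure, which is what makes it a useful acoustic simulation of CI processing for normal-hearing listeners.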


Categories


Sound · Audio and Speech Processing
