Face Landmark-based Speaker-Independent Audio-Visual Speech Enhancement in Multi-Talker Environments

In this paper, we address the problem of enhancing the speech of a speaker of interest in a cocktail party scenario when visual information about that speaker is available. Contrary to most previous studies, we do not learn visual features on the typically small audio-visual datasets, but use an already available face landmark detector (trained on a separate image dataset). The landmarks are used by LSTM-based models to generate time-frequency masks, which are applied to the acoustic mixed-speech spectrogram. Results show that: (i) landmark motion features are very effective for this task; (ii) as in previous work, reconstructing the target speaker's spectrogram via masking is significantly more accurate than direct spectrogram reconstruction; and (iii) the best masks depend on both the landmark motion features and the input mixed-speech spectrogram. To the best of our knowledge, our proposed models are the first trained and evaluated on the limited-size GRID and TCD-TIMIT datasets to achieve speaker-independent speech enhancement in a multi-talker setting.
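
The abstract describes the approach only at a high level. Below is a minimal PyTorch sketch of the general idea, not the paper's actual implementation: a BiLSTM consumes frame-wise landmark motion features concatenated with the mixed-speech magnitude spectrogram and emits a sigmoid time-frequency mask that is multiplied with the mixture. All layer counts, hidden sizes, and feature dimensions (68 two-dimensional landmarks, a 257-bin spectrogram) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LandmarkMaskNet(nn.Module):
    """Hypothetical sketch: BiLSTM mapping landmark motion features
    (concatenated with the mixed spectrogram) to a time-frequency mask.
    Sizes are illustrative, not taken from the paper."""
    def __init__(self, n_landmark_feats=136, n_freq=257, hidden=250):
        super().__init__()
        self.lstm = nn.LSTM(n_landmark_feats + n_freq, hidden,
                            num_layers=3, batch_first=True,
                            bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_freq)

    def forward(self, landmarks, mix_mag):
        # landmarks: (B, T, n_landmark_feats) frame-wise motion vectors
        # mix_mag:   (B, T, n_freq) magnitude spectrogram of the mixture
        x = torch.cat([landmarks, mix_mag], dim=-1)
        h, _ = self.lstm(x)
        mask = torch.sigmoid(self.proj(h))  # mask values in [0, 1]
        return mask * mix_mag               # estimated target spectrogram

# Usage: estimate the target speaker's magnitude spectrogram.
net = LandmarkMaskNet()
landmarks = torch.randn(1, 100, 136)  # e.g. 68 (x, y) landmark deltas per frame
mix_mag = torch.rand(1, 100, 257)     # STFT magnitude of the two-speaker mixture
target_est = net(landmarks, mix_mag)  # (1, 100, 257)
```

Masking the mixture (rather than regressing the spectrogram directly) is what finding (ii) above refers to: the network only has to learn which time-frequency bins belong to the target speaker.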

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Speech Enhancement | GRID corpus (mixed-speech) | Audio-Visual concat-ref | PESQ | 2.70 | # 1 |
| Speech Separation | GRID corpus (mixed-speech) | Audio-Visual concat-ref | SDR | 8.05 | # 1 |
| Speech Enhancement | TCD-TIMIT corpus (mixed-speech) | Audio-Visual concat-ref | PESQ | 3.03 | # 1 |
| Speech Separation | TCD-TIMIT corpus (mixed-speech) | Audio-Visual concat-ref | SDR | 10.55 | # 1 |
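
PESQ (perceptual speech quality) and SDR (signal-to-distortion ratio) are the standard metrics in these benchmarks. As a sketch of how such scores are typically computed, using the third-party `pesq` and `mir_eval` packages rather than the paper's own evaluation code, and assuming 16 kHz wide-band audio:

```python
import numpy as np
from pesq import pesq                          # ITU-T P.862 implementation
from mir_eval.separation import bss_eval_sources

def evaluate(ref, est, sr=16000):
    """ref: clean target signal; est: enhanced signal (1-D float arrays).
    Sample rate and wide-band mode are assumptions, not the paper's setup."""
    quality = pesq(sr, ref, est, 'wb')         # perceptual quality score
    # SDR compares the estimate against the clean reference source.
    sdr, _, _, _ = bss_eval_sources(ref[np.newaxis, :], est[np.newaxis, :])
    return quality, sdr[0]
```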

