Robust Speaker Extraction Network Based on Iterative Refined Adaptation

4 Nov 2020  ·  Chengyun Deng, Shiqian Ma, Yi Zhang, Yongtao Sha, Hui Zhang, Hui Song, Xiangang Li

Speaker extraction aims to extract the target speech signal from a multi-talker environment with interfering speakers and surrounding noise, given the target speaker's reference information. Most speaker extraction systems achieve satisfactory performance only on the premise that the test speakers have been encountered during training. Such systems suffer from performance degradation given unseen target speakers and/or mismatched reference voiceprint information. In this paper we propose a novel strategy named Iterative Refined Adaptation (IRA) to improve the robustness and generalization capability of speaker extraction systems in the aforementioned scenarios. Given an initial speaker embedding encoded by an auxiliary network, the extraction network obtains a latent representation of the target speaker, which is fed back to the auxiliary network to produce a refined embedding that provides more accurate guidance for the extraction network. Experiments on the WSJ0-2mix-extr and WHAM! datasets confirm the superior performance of the proposed method over the network without IRA in terms of SI-SDR and PESQ improvement.
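The abstract describes a feedback loop between two networks: the auxiliary network produces a speaker embedding, the extraction network uses that embedding as guidance to extract a target-speaker representation, and the result is fed back to the auxiliary network to refine the embedding. The following is a minimal sketch of that control flow only; the linear "networks", dimensions, and the `n_iters` parameter are illustrative stand-ins, not the paper's trained architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D_MIX, D_EMB = 64, 16  # hypothetical feature / embedding sizes

# Random linear stand-ins for the two trained networks in the paper.
W_aux = rng.standard_normal((D_EMB, D_MIX)) / np.sqrt(D_MIX)
W_ext = rng.standard_normal((D_MIX, D_MIX + D_EMB)) / np.sqrt(D_MIX + D_EMB)

def auxiliary_net(x):
    """Encode a reference signal or latent representation into a
    unit-norm speaker embedding."""
    e = np.tanh(W_aux @ x)
    return e / np.linalg.norm(e)

def extraction_net(mixture, embedding):
    """Extract a latent target-speaker representation from the mixture,
    guided by the current speaker embedding."""
    return np.tanh(W_ext @ np.concatenate([mixture, embedding]))

def ira_extract(mixture, reference, n_iters=3):
    """Iterative Refined Adaptation loop: extract, then refine the
    embedding from the extraction output, and repeat."""
    embedding = auxiliary_net(reference)            # initial embedding
    for _ in range(n_iters):
        latent = extraction_net(mixture, embedding)  # extraction step
        embedding = auxiliary_net(latent)            # refinement step
    return latent, embedding

mixture = rng.standard_normal(D_MIX)
reference = rng.standard_normal(D_MIX)
latent, emb = ira_extract(mixture, reference)
print(latent.shape, emb.shape)  # (64,) (16,)
```

The key design point sketched here is that the refined embedding is computed from the extraction network's own output rather than from the (possibly mismatched) reference alone, which is how IRA compensates for unseen speakers.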
