Speakerfilter-Pro: an improved target speaker extractor combining the time domain and frequency domain

25 Oct 2020  ·  Shulin He, Hao Li, Xueliang Zhang ·

This paper introduces an improved target speaker extractor, referred to as Speakerfilter-Pro, based on our previous Speakerfilter model. The Speakerfilter uses a bi-directional gated recurrent unit (BGRU) module to characterize the target speaker from anchor speech and a convolutional recurrent network (CRN) module to separate the target speech from a noisy signal. Unlike the Speakerfilter, the Speakerfilter-Pro adds a WaveUNet module at the beginning and at the end of the network. The WaveUNet has been shown to perform speech separation effectively in the time domain. To better extract target speaker information, the complex spectrum rather than the magnitude spectrum is used as the input feature for the CRN module. Experiments are conducted on the two-speaker dataset (WSJ0-mix2), which is widely used for speaker extraction. Systematic evaluation shows that the Speakerfilter-Pro outperforms the Speakerfilter and other baselines, achieving a signal-to-distortion ratio (SDR) of 14.95 dB.
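The dataflow described in the abstract can be sketched as follows. This is only an illustrative outline, not the authors' implementation: `wave_unet`, `bgru_embedding`, and `crn` are identity/placeholder stubs standing in for the learned modules, while `stft`, `istft`, and `sdr_db` are ordinary textbook definitions (the complex STFT input feature and the SDR metric mentioned in the paper).

```python
import numpy as np

def stft(x, frame=256, hop=128):
    # Complex STFT with a Hann analysis window; the complex spectrum
    # (not just the magnitude) is what the CRN module consumes.
    win = np.hanning(frame)
    n = 1 + (len(x) - frame) // hop
    frames = np.stack([x[i * hop:i * hop + frame] * win for i in range(n)])
    return np.fft.rfft(frames, axis=-1)

def istft(spec, length, frame=256, hop=128):
    # Weighted overlap-add inverse; normalization by the summed squared
    # window gives exact reconstruction in the fully covered interior.
    win = np.hanning(frame)
    frames = np.fft.irfft(spec, n=frame, axis=-1)
    out = np.zeros(length)
    norm = np.zeros(length)
    for i, f in enumerate(frames):
        out[i * hop:i * hop + frame] += f * win
        norm[i * hop:i * hop + frame] += win ** 2
    return out / np.maximum(norm, 1e-8)

def sdr_db(reference, estimate):
    # Signal-to-distortion ratio in dB, the metric the paper reports
    # (14.95 dB on WSJ0-mix2 for Speakerfilter-Pro).
    noise = reference - estimate
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

def wave_unet(x):
    # Placeholder for a time-domain WaveUNet stage (identity here).
    return x

def bgru_embedding(anchor):
    # Placeholder for the BGRU speaker-characterization module; a real
    # model would output a learned embedding, not summary statistics.
    return np.array([anchor.mean(), anchor.std()])

def crn(spec, speaker_embedding):
    # Placeholder for the CRN separator on the complex spectrum.
    return spec

def speakerfilter_pro(mixture, anchor):
    # Sketch of the Speakerfilter-Pro pipeline: WaveUNet front-end,
    # BGRU speaker embedding, CRN on the complex spectrum, WaveUNet back-end.
    x = wave_unet(mixture)
    spk = bgru_embedding(anchor)
    est_spec = crn(stft(x), spk)
    y = istft(est_spec, len(mixture))
    return wave_unet(y)
```

With the identity stubs the pipeline simply passes the mixture through, but the function boundaries mirror the time-domain / frequency-domain split the paper describes.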

