Exploring the time-domain deep attractor network with two-stream architectures in a reverberant environment

1 Jul 2020 · Hangting Chen, Pengyuan Zhang

Deep attractor networks (DANs) perform speech separation with discriminative embeddings and speaker attractors. Compared with methods based on permutation invariant training (PIT), DANs define a deep embedding space and deliver a more elaborate representation of each time-frequency (T-F) bin. However, DANs have been observed to yield only limited improvement in signal quality when deployed directly in a reverberant environment. Following the success of time-domain separation networks on clean mixture speech, we propose a time-domain DAN (TD-DAN) with two streams of convolutional networks, which efficiently performs both dereverberation and separation under a variable number of speakers. The speaker encoding stream (SES) of the TD-DAN is trained to model speaker information in the embedding space. The speech decoding stream (SDS) accepts speaker attractors from the SES and learns to estimate early reflections from the spectro-temporal representations. Meanwhile, additional clustering losses are used to bridge the gap between the oracle and the estimated attractors. Experiments were conducted on the Spatialized Multi-Speaker Wall Street Journal (SMS-WSJ) dataset. The early reflection was compared with the anechoic and reverberant signals and then chosen as the learning target. The experimental results demonstrated that the TD-DAN achieved scale-invariant source-to-distortion ratio (SI-SDR) gains of 9.79/7.47 dB on the reverberant 2/3-speaker evaluation sets, exceeding the baseline DAN and the convolutional time-domain audio separation network (Conv-TasNet) by 1.92/0.68 dB and 0.91/0.47 dB, respectively.
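No code is released for this paper, but two ingredients named in the abstract, attractor-based masking and the SI-SDR metric, are standard and can be sketched briefly. The NumPy sketch below is illustrative only and is not the authors' TD-DAN: the helper names (oracle_attractors, attractor_masks, si_sdr) are hypothetical, the oracle-assignment attractor formulation follows the original frequency-domain DAN rather than this paper's time-domain variant, and si_sdr implements the usual scale-invariant SDR definition used for the reported gains.

```python
import numpy as np

def oracle_attractors(embeddings, assignments, eps=1e-8):
    """One attractor per speaker as the assignment-weighted mean of
    T-F bin embeddings (original DAN formulation, used here as a sketch).

    embeddings:  (TF, D) embedding for every time-frequency bin
    assignments: (TF, C) one-hot oracle speaker assignment per bin
    returns:     (C, D) one attractor per speaker
    """
    num = assignments.T @ embeddings                 # (C, D) summed embeddings
    den = assignments.sum(axis=0)[:, None]           # (C, 1) bins per speaker
    return num / np.maximum(den, eps)

def attractor_masks(embeddings, attractors):
    """Soft separation masks from embedding-attractor similarity,
    normalized over speakers with a softmax."""
    logits = embeddings @ attractors.T               # (TF, C) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    expl = np.exp(logits)
    return expl / expl.sum(axis=1, keepdims=True)

def si_sdr(estimate, reference, eps=1e-8):
    """Scale-invariant source-to-distortion ratio in dB between a
    time-domain estimate and its reference signal."""
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # Project the estimate onto the reference to get the scaled target.
    alpha = np.dot(estimate, reference) / (np.dot(reference, reference) + eps)
    target = alpha * reference
    noise = estimate - target
    return 10.0 * np.log10(
        np.dot(target, target) / (np.dot(noise, noise) + eps) + eps
    )
```

In this sketch, separation quality would be reported as si_sdr(estimate, early_reflection) minus si_sdr(mixture, early_reflection), matching the abstract's choice of the early reflection as the learning target and its SI-SDR-gain evaluation.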
