Multi-Decoder DPRNN: High Accuracy Source Counting and Separation

24 Nov 2020 · Junzhe Zhu, Raymond Yeh, Mark Hasegawa-Johnson

We propose an end-to-end trainable approach to single-channel speech separation with an unknown number of speakers. Our approach extends the MulCat source-separation backbone with additional output heads: a count-head that infers the number of speakers, and decoder-heads that reconstruct the original signals. Beyond the model, we also propose a metric for evaluating source separation with a variable number of speakers. Specifically, we clarify how to assess separation quality when the ground truth contains more or fewer speakers than the model predicts. We evaluate our approach on the WSJ0-mix datasets, with mixtures of up to five speakers, and demonstrate that it outperforms the state of the art in counting the number of speakers while remaining competitive in the quality of the reconstructed signals.
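The architecture described above lends itself to a compact sketch: a shared encoder and separator produce features, a count head classifies the number of speakers, and a dedicated decoder head for each supported speaker count reconstructs that many waveforms. The PyTorch code below is a minimal illustrative sketch of this idea; the layer choices, sizes, and names (MultiDecoderSketch, feat_dim, the plain Conv1d separator) are assumptions for illustration, not the paper's actual MulCat/DPRNN implementation.

```python
import torch
import torch.nn as nn

class MultiDecoderSketch(nn.Module):
    """Toy illustration: shared encoder/separator, a count head, and one
    decoder head per supported speaker count (2..max_speakers)."""

    def __init__(self, feat_dim=64, max_speakers=5):
        super().__init__()
        self.encoder = nn.Conv1d(1, feat_dim, kernel_size=16, stride=8)
        # Stand-in for the MulCat/DPRNN separator blocks.
        self.separator = nn.Sequential(
            nn.Conv1d(feat_dim, feat_dim, kernel_size=3, padding=1),
            nn.PReLU(),
        )
        # Count head: classifies the number of speakers in the mixture.
        self.count_head = nn.Linear(feat_dim, max_speakers - 1)
        # One decoder head per possible count; the head for k speakers emits k waveforms.
        self.decoders = nn.ModuleList(
            nn.ConvTranspose1d(feat_dim, k, kernel_size=16, stride=8)
            for k in range(2, max_speakers + 1)
        )

    def forward(self, mixture):                        # mixture: (batch, 1, samples)
        feats = self.separator(self.encoder(mixture))  # (batch, feat_dim, frames)
        count_logits = self.count_head(feats.mean(dim=-1))
        k = int(count_logits.argmax(dim=-1)[0]) + 2    # inferred speaker count
        sources = self.decoders[k - 2](feats)          # (batch, k, samples)
        return count_logits, sources

model = MultiDecoderSketch()
count_logits, est_sources = model(torch.randn(1, 1, 16000))
print(count_logits.shape, est_sources.shape)
```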


Datasets


WSJ0-mix (mixtures of up to five speakers)

Results from the Paper


Task               Dataset    Model                Metric        Value  Global Rank
Speech Separation  WSJ0-4mix  Multi-Decoder DPRNN  SI-SDRi (dB)  9.3    #5
Speech Separation  WSJ0-5mix  Multi-Decoder DPRNN  SI-SDRi (dB)  5.9    #6
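The reported metric, SI-SDRi, is the improvement in scale-invariant signal-to-distortion ratio of the separated estimate over the unprocessed mixture, in dB. The snippet below is a minimal NumPy sketch of the standard SI-SDR/SI-SDRi computation for a single estimate-reference pair; the function names and the eps parameter are illustrative, and it omits the permutation-invariant pairing used when scoring multiple sources.

```python
import numpy as np

def si_sdr(estimate, target, eps=1e-8):
    """Scale-invariant SDR between one estimated and one reference source."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target to obtain the scaled reference.
    alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
    s_target = alpha * target
    e_noise = estimate - s_target
    return 10 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))

def si_sdr_improvement(estimate, target, mixture):
    """SI-SDRi: gain of the separated estimate over the unprocessed mixture."""
    return si_sdr(estimate, target) - si_sdr(mixture, target)
```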

Methods


DPRNN, MulCat