Speech Separation

72 papers with code • 12 benchmarks • 12 datasets

Speech Separation is the task of extracting all overlapping speech sources from a given mixed speech signal. It is a special case of the source separation problem, in which the focus is solely on the overlapping speech sources; other interference such as music or noise signals is not the main concern.
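Separation quality is commonly scored with scale-invariant SNR (SI-SNR). The sketch below (plain NumPy, with sinusoids standing in for speakers, not real speech) shows the metric rewarding a clean estimate of one source over the unprocessed mixture:

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR in dB, a standard speech-separation metric."""
    est = est - est.mean()
    ref = ref - ref.mean()
    # Project the estimate onto the reference to isolate the target component.
    s_target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref
    e_noise = est - s_target
    return 10 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
s1 = np.sin(2 * np.pi * 220 * t)   # stand-in for speaker 1
s2 = np.sin(2 * np.pi * 330 * t)   # stand-in for speaker 2
mixture = s1 + s2                  # single-channel overlapped "speech"

print(si_snr(s1 + 1e-3 * rng.normal(size=8000), s1))  # high: good separation
print(si_snr(mixture, s1))                            # near 0 dB: no separation
```

Because the two stand-in sources here have equal power and are orthogonal, the unseparated mixture scores roughly 0 dB against either source.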

Source: A Unified Framework for Speech Separation

Image credit: Speech Separation of A Target Speaker Based on Deep Neural Networks

Libraries

Use these libraries to find Speech Separation models and implementations

Most implemented papers

Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation

naplab/Conv-TasNet 20 Sep 2018

The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms.
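The time-frequency baseline Conv-TasNet moves away from can be sketched in a few lines: an "ideal" ratio mask built from oracle source magnitudes is applied to the mixture STFT, so the estimate inherits the mixture's phase wholesale — the magnitude/phase decoupling the abstract points to. A toy with spectrally disjoint sinusoids, using SciPy's STFT:

```python
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(fs) / fs
s1 = np.sin(2 * np.pi * 440 * t)    # stand-in source 1
s2 = np.sin(2 * np.pi * 1000 * t)   # stand-in source 2
mix = s1 + s2

_, _, M = stft(mix, fs, nperseg=256)   # complex mixture spectrogram
_, _, S1 = stft(s1, fs, nperseg=256)
_, _, S2 = stft(s2, fs, nperseg=256)

# Oracle ("ideal") ratio mask over magnitudes; the phase is taken directly
# from the mixture, which is exactly the decoupling criticised above.
mask = np.abs(S1) / (np.abs(S1) + np.abs(S2) + 1e-8)
_, est = istft(mask * M, fs, nperseg=256)
est = est[:len(mix)]

print(np.corrcoef(est, s1)[0, 1])  # close to 1: these sources are T-F separable
```

For real speech the sources overlap in time-frequency, so even this oracle mask is lossy — which is the bound Conv-TasNet's time-domain formulation surpasses.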

Deep clustering: Discriminative embeddings for segmentation and separation

mpariente/asteroid 18 Aug 2015

The framework can be used without class labels, and therefore has the potential to be trained on a diverse set of sound types, and to generalize to novel sources.
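The inference step behind deep clustering can be illustrated with a toy: the network maps each time-frequency bin to an embedding, and clustering those embeddings assigns bins to sources. Below, synthetic 2-D embeddings stand in for the network's output, and a minimal k-means does the grouping (no real network involved):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "embeddings": each time-frequency bin gets a vector; bins dominated by
# the same speaker should land close together (what the network is trained for).
spk_a = rng.normal(loc=[0, 0], scale=0.1, size=(50, 2))
spk_b = rng.normal(loc=[3, 3], scale=0.1, size=(50, 2))
emb = np.vstack([spk_a, spk_b])

# Minimal k-means: the clustering step used at test time.
centroids = emb[[0, -1]].copy()
for _ in range(10):
    labels = np.argmin(((emb[:, None] - centroids) ** 2).sum(-1), axis=1)
    centroids = np.array([emb[labels == k].mean(0) for k in range(2)])

# Each cluster then becomes a binary mask over the spectrogram bins.
print(labels[:50], labels[50:])  # the two blocks fall into different clusters
```

Because cluster identity is arbitrary, this formulation needs no class labels — which is what lets it generalize to novel source types, as the abstract notes.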

Dual-path RNN: efficient long sequence modeling for time-domain single-channel speech separation

mpariente/asteroid 14 Oct 2019

Recent studies in deep learning-based speech separation have proven the superiority of time-domain approaches to conventional time-frequency-based methods.

Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation

bill9800/speech_separation 10 Apr 2018

Solving this task using only audio as input is extremely challenging and does not provide an association of the separated speech signals with speakers in the video.

Dual-Path Transformer Network: Direct Context-Aware Modeling for End-to-End Monaural Speech Separation

ujscjj/DPTNet Interspeech 2020

By introducing an improved transformer, elements in speech sequences can interact directly, which enables DPTNet to model speech sequences with direct context-awareness.


Multi-talker Speech Separation with Utterance-level Permutation Invariant Training of Deep Recurrent Neural Networks

snsun/pit-speech-separation 18 Mar 2017

We evaluated uPIT on the WSJ0 and Danish two- and three-talker mixed-speech separation tasks and found that uPIT outperforms techniques based on Non-negative Matrix Factorization (NMF) and Computational Auditory Scene Analysis (CASA), and compares favorably with Deep Clustering (DPCL) and the Deep Attractor Network (DANet).
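The core of utterance-level permutation invariant training is easy to sketch: compute the training loss under every assignment of network outputs to target speakers over the whole utterance, and keep the cheapest assignment. A minimal NumPy sketch with MSE as the per-source loss:

```python
from itertools import permutations

import numpy as np

def upit_loss(estimates, targets):
    """Utterance-level PIT: score every output-to-speaker assignment over the
    whole utterance and keep the best one (a minimal sketch of the idea)."""
    n_spk = len(targets)
    best = np.inf
    for perm in permutations(range(n_spk)):
        # MSE under this particular estimate -> target assignment.
        loss = np.mean([np.mean((estimates[p] - targets[i]) ** 2)
                        for i, p in enumerate(perm)])
        best = min(best, loss)
    return best

targets = [np.ones(100), -np.ones(100)]
swapped = [targets[1], targets[0]]   # correct sources, wrong output order

# PIT is indifferent to output ordering: the swapped estimate costs nothing.
print(upit_loss(swapped, targets))              # 0.0
print(upit_loss([np.zeros(100)] * 2, targets))  # 1.0
```

Resolving the assignment once per utterance (rather than per frame) is what keeps each output stream locked onto one talker for the full recording.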

Attention is All You Need in Speech Separation

speechbrain/speechbrain 25 Oct 2020

Transformers are emerging as a natural alternative to standard RNNs, replacing recurrent computations with a multi-head attention mechanism.

Joint Optimization of Masks and Deep Recurrent Neural Networks for Monaural Source Separation

bill9800/speech_separation 13 Feb 2015

In this paper, we explore joint optimization of masking functions and deep recurrent neural networks for monaural source separation tasks, including monaural speech separation, monaural singing voice separation, and speech denoising.

Single-Channel Multi-Speaker Separation using Deep Clustering

JusperLee/Deep-Clustering-for-Speech-Separation 7 Jul 2016

In this paper we extend the baseline system with an end-to-end signal approximation objective that greatly improves performance on a challenging speech separation task.

TasNet: time-domain audio separation network for real-time, single-channel speech separation

mpariente/asteroid 1 Nov 2017

We directly model the signal in the time-domain using an encoder-decoder framework and perform the source separation on nonnegative encoder outputs.
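The masking-on-nonnegative-codes idea can be shown with a toy encoder. Here a trivially invertible positive/negative-part split stands in for TasNet's learned convolutional basis (an illustrative simplification, not the paper's encoder), and masks that sum to one over the code reconstruct the mixture exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, frame=32):
    """Nonnegative time-domain code: split each frame's samples into positive
    and negative parts (a toy stand-in for the learned conv encoder)."""
    frames = x.reshape(-1, frame)
    return np.concatenate([np.maximum(frames, 0), np.maximum(-frames, 0)], axis=1)

def decode(w, frame=32):
    """Inverse of the toy encoder: recombine the two halves of the code."""
    return (w[:, :frame] - w[:, frame:]).reshape(-1)

mix = rng.normal(size=512)            # stand-in mixture waveform
w = encode(mix)                       # nonnegative representation
mask = rng.uniform(size=w.shape)      # a real separator would predict this
est1 = decode(mask * w)
est2 = decode((1 - mask) * w)

# Masks that sum to one over the nonnegative code reconstruct the mixture.
assert np.allclose(est1 + est2, mix)
```

Working directly on waveform frames like this avoids the STFT entirely, which is what gives TasNet its low latency for real-time use.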