Speech Separation

52 papers with code • 7 benchmarks • 4 datasets

Speech Separation is the task of extracting all overlapping speech sources from a given mixed speech signal. It is a special case of the source separation problem in which the focus is solely on the overlapping speech sources; other interference, such as music or noise signals, is not the main concern.

Source: A Unified Framework for Speech Separation

Image credit: Speech Separation of A Target Speaker Based on Deep Neural Networks
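The definition above boils down to a simple signal model: the observed single-channel mixture is the sum of the individual speaker signals, and a separation system must recover estimates of each source from the mixture alone. A minimal sketch of that model, with two sinusoids standing in for two speakers (signals and sample rate are illustrative, not from any paper):

```python
import numpy as np

fs = 8000                           # sample rate in Hz (illustrative)
t = np.arange(fs) / fs              # one second of audio

s1 = np.sin(2 * np.pi * 220 * t)    # stand-in for speaker 1
s2 = np.sin(2 * np.pi * 330 * t)    # stand-in for speaker 2
mixture = s1 + s2                   # the observed mixed speech signal

# A separation model would map `mixture` -> (s1_hat, s2_hat);
# here the sources are known, so they serve as the oracle targets.
```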

Greatest papers with code

Attention is All You Need in Speech Separation

speechbrain/speechbrain 25 Oct 2020

Transformers are emerging as a natural alternative to standard RNNs, replacing recurrent computations with a multi-head attention mechanism.

Speech Separation
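The multi-head attention mechanism mentioned above replaces recurrence by letting every sequence position attend to every other position in a single step. A hedged, single-head sketch with random weights (shapes are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 10, 16                           # sequence length, model dimension
X = rng.standard_normal((T, d))         # stand-in for an input sequence

# Project inputs to queries, keys, and values.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

# Scaled dot-product attention: softmax over positions, then weight V.
scores = Q @ K.T / np.sqrt(d)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ V                       # each position sees the whole sequence
```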

Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation

facebookresearch/demucs 20 Sep 2018

The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms.

Multi-task Audio Source Separation Music Source Separation +3
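The time-frequency formulation criticised above typically predicts a magnitude mask and reuses the mixture's phase at resynthesis, which is exactly the phase/magnitude decoupling the paper points out. A minimal sketch of that classic pipeline for one analysis frame (random mask standing in for a model's prediction):

```python
import numpy as np

rng = np.random.default_rng(1)
frame = rng.standard_normal(512)            # one windowed frame of a mixture

spec = np.fft.rfft(frame * np.hanning(512)) # analysis: complex spectrum
mag, phase = np.abs(spec), np.angle(spec)

mask = rng.random(mag.shape)                # stand-in for a predicted mask in [0, 1)
est_mag = mask * mag                        # separation acts on magnitude only

# Resynthesis borrows the *mixture* phase -- the drawback noted above.
est = np.fft.irfft(est_mag * np.exp(1j * phase), n=512)
```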

Dual-Path Transformer Network: Direct Context-Aware Modeling for End-to-End Monaural Speech Separation

mpariente/asteroid Interspeech 2020

By introducing an improved transformer, elements in speech sequences can interact directly, which enables DPTNet to model speech sequences with direct context awareness.

Speech Separation Audio and Speech Processing Sound

Sudo rm -rf: Efficient Networks for Universal Audio Source Separation

mpariente/asteroid 14 Jul 2020

In this paper, we present an efficient neural network for end-to-end general purpose audio source separation.

Audio Source Separation Speech Separation

Filterbank design for end-to-end speech separation

mpariente/asteroid 23 Oct 2019

Also, we validate the use of parameterized filterbanks and show that complex-valued representations and masks are beneficial in all conditions.

Speaker Recognition Speech Separation

Two-Step Sound Source Separation: Training on Learned Latent Targets

mpariente/asteroid 22 Oct 2019

In the first step we learn a transform (and its inverse) to a latent space where masking-based separation performance using oracles is optimal.

Speech Separation

Dual-path RNN: efficient long sequence modeling for time-domain single-channel speech separation

mpariente/asteroid 14 Oct 2019

Recent studies in deep learning-based speech separation have proven the superiority of time-domain approaches to conventional time-frequency-based methods.

Speech Separation

Real-time Single-channel Dereverberation and Separation with Time-domain Audio Separation Network

mpariente/asteroid ISCA Interspeech 2018

We investigate the recently proposed Time-domain Audio Separation Network (TasNet) in the task of real-time single-channel speech dereverberation.

Denoising Speech Dereverberation +1

Alternative Objective Functions for Deep Clustering

mpariente/asteroid ICASSP 2018

The recently proposed deep clustering framework represents a significant step towards solving the cocktail party problem.

Deep Clustering Speech Separation
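Deep clustering trains a network to produce an embedding per time-frequency bin so that the embedding affinity matrix V Vᵀ matches the ideal speaker-assignment affinity Y Yᵀ. A hedged sketch of that objective with random embeddings and labels (shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n_bins, d, n_spk = 50, 20, 2                # T-F bins, embedding dim, speakers

V = rng.standard_normal((n_bins, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)       # unit-norm embeddings
Y = np.eye(n_spk)[rng.integers(0, n_spk, n_bins)]   # one-hot ideal assignments

# Deep clustering loss: || V V^T - Y Y^T ||_F^2, minimised during training.
loss = np.linalg.norm(V @ V.T - Y @ Y.T, ord="fro") ** 2
```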

TasNet: time-domain audio separation network for real-time, single-channel speech separation

mpariente/asteroid 1 Nov 2017

We directly model the signal in the time-domain using an encoder-decoder framework and perform the source separation on nonnegative encoder outputs.

Speech Separation
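The TasNet description above maps directly to a small pipeline: frame the waveform, encode each frame with a learned basis, keep the encoder outputs nonnegative, mask them, and decode back to audio. A minimal sketch with random weights standing in for learned ones (window length and basis size are assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(3)
L, N = 40, 64                                   # window length, basis signals
x = rng.standard_normal(8000)                   # stand-in mixture waveform

frames = x[: len(x) // L * L].reshape(-1, L)    # non-overlapping frames
encoder = rng.standard_normal((L, N))
w = np.maximum(frames @ encoder, 0.0)           # nonnegative encoder outputs

# A sigmoid mask in [0, 1] stands in for the separation network's output.
mask = 1.0 / (1.0 + np.exp(-rng.standard_normal(w.shape)))
decoder = rng.standard_normal((N, L))
s_hat = ((w * mask) @ decoder).reshape(-1)      # estimated source waveform
```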