Search Results for author: Cem Subakan

Found 11 papers, 6 papers with code

Learning Representations for New Sound Classes With Continual Self-Supervised Learning

no code implementations15 May 2022 Zhepei Wang, Cem Subakan, Xilin Jiang, Junkai Wu, Efthymios Tzinis, Mirco Ravanelli, Paris Smaragdis

In this paper, we present a self-supervised learning framework for continually learning representations for new sound classes.

Self-Supervised Learning

On Using Transformers for Speech-Separation

1 code implementation6 Feb 2022 Cem Subakan, Mirco Ravanelli, Samuele Cornell, Francois Grondin, Mirko Bronzi

In this paper, we extend our previous work by providing results on more datasets, including LibriMix, WHAM!, and WHAMR!

Denoising Speech Enhancement +1

REAL-M: Towards Speech Separation on Real Mixtures

1 code implementation20 Oct 2021 Cem Subakan, Mirco Ravanelli, Samuele Cornell, François Grondin

First, we release the REAL-M dataset, a crowd-sourced corpus of real-life mixtures.

Speech Separation

Attention is All You Need in Speech Separation

3 code implementations25 Oct 2020 Cem Subakan, Mirco Ravanelli, Samuele Cornell, Mirko Bronzi, Jianyuan Zhong

Transformers are emerging as a natural alternative to standard RNNs, replacing recurrent computations with a multi-head attention mechanism.

Speech Separation
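The snippet above describes replacing recurrent computation with multi-head attention. The sketch below is a minimal NumPy illustration of that mechanism, not the paper's SepFormer architecture: projection weights are drawn randomly for illustration (in a trained separator they are learned), and the input stands in for a sequence of feature frames.

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, num_heads, rng):
    # x: (seq_len, d_model) sequence of feature frames.
    # Random projections stand in for learned parameters.
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    Wq, Wk, Wv, Wo = rng.standard_normal((4, d_model, d_model)) / np.sqrt(d_model)

    def split(h):  # (seq_len, d_model) -> (num_heads, seq_len, d_head)
        return h.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(x @ Wq), split(x @ Wk), split(x @ Wv)
    # Unlike an RNN, every frame attends to every other frame in parallel.
    scores = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head))
    out = (scores @ v).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ Wo

x = np.random.default_rng(0).standard_normal((10, 16))  # 10 frames, 16 features
y = multi_head_attention(x, num_heads=4, rng=np.random.default_rng(1))
```

The key contrast with an RNN is visible in the `scores` line: the dependency between any two frames is computed in a single matrix product rather than through a chain of recurrent steps.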

Two-Step Sound Source Separation: Training on Learned Latent Targets

2 code implementations22 Oct 2019 Efthymios Tzinis, Shrikant Venkataramani, Zhepei Wang, Cem Subakan, Paris Smaragdis

In the first step we learn a transform (and its inverse) to a latent space where masking-based separation performance using oracles is optimal.

Speech Separation
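The two-step idea above can be sketched with a toy linear transform: encode the mixture into a latent space, apply an oracle ratio mask computed from the isolated sources, and decode with the inverse transform. This is a simplification under stated assumptions: the paper learns the transform, whereas here a fixed random linear map and its pseudo-inverse stand in for it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "encoder" standing in for the learned transform;
# its pseudo-inverse plays the role of the learned inverse transform.
d_in, d_lat = 64, 128
W = rng.standard_normal((d_lat, d_in))
W_inv = np.linalg.pinv(W)

s1, s2 = rng.standard_normal((2, d_in))  # two toy sources
mix = s1 + s2
z1, z2, zm = W @ s1, W @ s2, W @ mix

# Oracle ratio mask computed in the latent space from the true sources
mask = np.abs(z1) / (np.abs(z1) + np.abs(z2) + 1e-8)
est1 = W_inv @ (mask * zm)  # masked latent mixture decoded back
```

Because `W` has full column rank, `W_inv @ z1` recovers `s1` exactly, so any imperfection in `est1` comes from the mask itself; the paper's point is to learn a latent space where such masking is as effective as possible.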

Continual Learning of New Sound Classes using Generative Replay

no code implementations3 Jun 2019 Zhepei Wang, Cem Subakan, Efthymios Tzinis, Paris Smaragdis, Laurent Charlin

We show that, when incrementally refining a classifier with generative replay, a generator that is 4% of the size of all previous training data matches the performance of refining the classifier while keeping 20% of all previous training data.

Continual Learning
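The generative-replay loop described above can be sketched in a few lines: instead of storing old training data, a generative model of the old classes produces replay samples that are mixed with the new-class data when the classifier is refit. Everything here is illustrative, not the paper's setup: a per-class Gaussian stands in for the learned generator, and a nearest-class-mean rule stands in for the classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "generator": one Gaussian per previously seen class,
# standing in for the learned generative model in the paper.
old_means = {0: np.array([0.0, 0.0]), 1: np.array([4.0, 4.0])}

def replay(n_per_class):
    # Sample pseudo-data for the old classes instead of storing it.
    xs, ys = [], []
    for label, mu in old_means.items():
        xs.append(mu + rng.standard_normal((n_per_class, 2)))
        ys.append(np.full(n_per_class, label))
    return np.vstack(xs), np.concatenate(ys)

# A new sound class (label 2) arrives; refit on replayed + new samples.
new_x = np.array([8.0, 0.0]) + rng.standard_normal((50, 2))
x_rep, y_rep = replay(50)
X = np.vstack([x_rep, new_x])
y = np.concatenate([y_rep, np.full(50, 2)])

# Nearest-class-mean classifier refit on the combined set
class_means = {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(p):
    return min(class_means, key=lambda c: np.linalg.norm(p - class_means[c]))
```

The storage trade-off the abstract quantifies lives in `replay`: only the generator's parameters are kept, not the old examples themselves.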

Learning the Base Distribution in Implicit Generative Models

no code implementations12 Mar 2018 Cem Subakan, Oluwasanmi Koyejo, Paris Smaragdis

Popular generative model learning methods such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) constrain the latent representation to follow a simple distribution, such as an isotropic Gaussian.
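The contrast the abstract draws can be shown concretely: the standard choice samples the latent from an isotropic Gaussian, while a learned base distribution can be richer, e.g. multimodal. The mixture below is a hand-picked illustration of such a base, not the paper's learned distribution.

```python
import numpy as np

rng = np.random.default_rng(3)

# Standard practice: latent codes drawn from an isotropic Gaussian.
z_iso = rng.standard_normal((1000, 2))

# A richer base distribution: a two-component Gaussian mixture
# (parameters chosen by hand here; the paper learns the base instead).
means = np.array([[-3.0, 0.0], [3.0, 0.0]])
comps = rng.integers(0, 2, size=1000)
z_mix = means[comps] + 0.5 * rng.standard_normal((1000, 2))
```

A generator fed `z_mix` starts from a bimodal prior, which a single isotropic Gaussian cannot express.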

Generative Adversarial Source Separation

1 code implementation30 Oct 2017 Cem Subakan, Paris Smaragdis

Generative source separation methods such as non-negative matrix factorization (NMF) or auto-encoders rely on the assumption of an output probability density.
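As context for the NMF baseline mentioned above, here is a minimal NMF separation sketch on synthetic magnitude spectrograms: each source has its own dictionary, the mixture's activations are fit against the stacked dictionaries, and a Wiener-style mask reconstructs one source. In practice the dictionaries are learned from isolated training data; here they are random, so this is a toy illustration of the pipeline, not the paper's adversarial method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy magnitude spectrograms (freq x time). W1, W2 play the role of
# per-source dictionaries learned from isolated data; here they are random.
F, T, K = 32, 40, 4
W1 = np.abs(rng.standard_normal((F, K)))
W2 = np.abs(rng.standard_normal((F, K)))
H1_true = np.abs(rng.standard_normal((K, T)))
H2_true = np.abs(rng.standard_normal((K, T)))
V = W1 @ H1_true + W2 @ H2_true  # observed mixture spectrogram

# Fit activations H for the stacked dictionary via multiplicative updates
W = np.hstack([W1, W2])
H = np.abs(rng.standard_normal((2 * K, T)))
for _ in range(300):
    H *= (W.T @ V) / (W.T @ (W @ H) + 1e-9)

# Wiener-style mask from each source's reconstruction
V1_hat = W1 @ H[:K]
V2_hat = W2 @ H[K:]
S1 = V * V1_hat / (V1_hat + V2_hat + 1e-9)  # estimated source-1 magnitude
```

The multiplicative update keeps `H` non-negative while monotonically decreasing the reconstruction error, which is the density assumption (additive, non-negative parts) the abstract alludes to.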

Spectral Learning of Mixture of Hidden Markov Models

no code implementations NeurIPS 2014 Cem Subakan, Johannes Traa, Paris Smaragdis

In this paper, we propose a learning approach for the Mixture of Hidden Markov Models (MHMM) based on the Method of Moments (MoM).
