Search Results for author: Zhepei Wang

Found 10 papers, 5 papers with code

Learning Representations for New Sound Classes With Continual Self-Supervised Learning

no code implementations • 15 May 2022 Zhepei Wang, Cem Subakan, Xilin Jiang, Junkai Wu, Efthymios Tzinis, Mirco Ravanelli, Paris Smaragdis

In this paper, we present a self-supervised learning framework for continually learning representations for new sound classes.

Self-Supervised Learning
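
As a rough illustration of the continual self-supervised setting described in this abstract, here is a minimal sketch assuming a SimCLR-style contrastive objective (NT-Xent) on two augmented views of each audio clip, with the encoder refined task by task as new sound classes arrive. The encoder architecture, the augment function, and the sound_class_tasks loaders are illustrative placeholders, not the paper's actual setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioEncoder(nn.Module):
    # Small 1-D convolutional encoder producing L2-normalized embeddings.
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x):                       # x: (batch, 1, samples)
        return F.normalize(self.net(x), dim=-1)

def nt_xent(z1, z2, tau=0.1):
    # SimCLR-style contrastive loss between two views of the same clips.
    b = z1.size(0)
    z = torch.cat([z1, z2], dim=0)              # (2B, D), already L2-normalized
    sim = (z @ z.t()) / tau                     # pairwise cosine similarities
    mask = torch.eye(2 * b, dtype=torch.bool)   # exclude self-similarities
    sim = sim.masked_fill(mask, float('-inf'))
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)])  # positive indices
    return F.cross_entropy(sim, targets)

encoder = AudioEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Each "task" brings clips from new sound classes; the encoder is refined sequentially
# without revisiting earlier tasks (the continual-learning setting).
for task_loader in sound_class_tasks:           # hypothetical list of DataLoaders
    for clips in task_loader:                   # clips: (batch, 1, samples)
        v1, v2 = augment(clips), augment(clips) # hypothetical waveform augmentation
        loss = nt_xent(encoder(v1), encoder(v2))
        opt.zero_grad(); loss.backward(); opt.step()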

Separate but Together: Unsupervised Federated Learning for Speech Enhancement from Non-IID Data

1 code implementation • 11 May 2021 Efthymios Tzinis, Jonah Casebeer, Zhepei Wang, Paris Smaragdis

We propose FEDENHANCE, an unsupervised federated learning (FL) approach for speech enhancement and separation with non-IID distributed data across multiple clients.

Federated Learning • Speech Enhancement • +1
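
FEDENHANCE's specifics (the unsupervised losses and how non-IID client data are handled) are not reproduced here; the snippet below only sketches the generic federated averaging round that federated learning approaches of this kind build on. The client objects, their local_loss, and the hyperparameters are placeholder assumptions.

import copy
import torch

def federated_round(global_model, clients, local_steps=100):
    # One FedAvg-style round: local training on each client, then weighted averaging.
    client_states, client_sizes = [], []
    for client in clients:                               # hypothetical client objects
        local = copy.deepcopy(global_model)
        opt = torch.optim.Adam(local.parameters(), lr=1e-4)
        for _, batch in zip(range(local_steps), client.loader):
            loss = client.local_loss(local, batch)       # e.g. an unsupervised separation loss
            opt.zero_grad(); loss.backward(); opt.step()
        client_states.append(local.state_dict())
        client_sizes.append(len(client.loader.dataset))

    # Server: average client parameters, weighted by local dataset size.
    total = float(sum(client_sizes))
    avg_state = {
        key: sum((n / total) * state[key] for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }
    global_model.load_state_dict(avg_state)
    return global_model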

Compute and memory efficient universal sound source separation

1 code implementation • 3 Mar 2021 Efthymios Tzinis, Zhepei Wang, Xilin Jiang, Paris Smaragdis

Recent progress in audio source separation, led by deep learning, has enabled many neural network models to provide robust solutions to this fundamental estimation problem.

Audio Source Separation • Speech Separation

Semi-Supervised Singing Voice Separation with Noisy Self-Training

no code implementations • 16 Feb 2021 Zhepei Wang, Ritwik Giri, Umut Isik, Jean-Marc Valin, Arvindh Krishnaswamy

Given a limited set of labeled data, we present a method to leverage a large volume of unlabeled data to improve the model's performance.

Data Augmentation
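
As a hedged sketch of the general noisy self-training recipe this abstract alludes to: a teacher trained on the small labeled set pseudo-labels unlabeled mixtures, and a student is trained on the combined data with augmentation acting as input noise. The loaders, models, augment function, and separation_loss below are placeholders, not the paper's exact pipeline.

import torch

def self_training_round(teacher, student, labeled_loader, unlabeled_loader,
                        separation_loss, augment, steps=1000):
    # One round: the teacher pseudo-labels unlabeled songs, the student trains on
    # labeled + pseudo-labeled data with augmentation as input "noise".
    opt = torch.optim.Adam(student.parameters(), lr=1e-4)
    teacher.eval()
    labeled_iter, unlabeled_iter = iter(labeled_loader), iter(unlabeled_loader)
    for _ in range(steps):                       # assumes loaders yield at least `steps` batches
        mix_l, vocals_l = next(labeled_iter)     # real mixtures with ground-truth vocal stems
        mix_u = next(unlabeled_iter)             # unlabeled mixtures
        with torch.no_grad():
            vocals_u = teacher(mix_u)            # pseudo-targets from the teacher
        mix = augment(torch.cat([mix_l, mix_u])) # hypothetical waveform augmentation
        target = torch.cat([vocals_l, vocals_u])
        loss = separation_loss(student(mix), target)
        opt.zero_grad(); loss.backward(); opt.step()
    return student                               # the student can seed the next round's teacher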

EGO-Planner: An ESDF-free Gradient-based Local Planner for Quadrotors

2 code implementations • 20 Aug 2020 Xin Zhou, Zhepei Wang, Chao Xu, Fei Gao

Gradient-based planners are widely used for quadrotor local planning, in which a Euclidean Signed Distance Field (ESDF) is crucial for evaluating gradient magnitude and direction.

Robotics
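
For context on the sentence above, the sketch below illustrates how a precomputed ESDF is conventionally used to obtain a collision-cost gradient at a trajectory point via finite differences; EGO-Planner's contribution is precisely to avoid building this field. The grid layout, resolution handling, and quadratic clearance cost are illustrative assumptions.

import numpy as np

def esdf_collision_gradient(esdf, point, resolution, safe_dist=0.5):
    # Finite-difference gradient of a clearance penalty at a 3-D point, using a
    # precomputed ESDF stored as a dense grid of distances to the nearest obstacle.
    i, j, k = (np.asarray(point) / resolution).astype(int)
    grad_d = np.array([
        esdf[i + 1, j, k] - esdf[i - 1, j, k],
        esdf[i, j + 1, k] - esdf[i, j - 1, k],
        esdf[i, j, k + 1] - esdf[i, j, k - 1],
    ]) / (2.0 * resolution)                  # gradient of distance points away from obstacles
    d = esdf[i, j, k]
    if d >= safe_dist:
        return np.zeros(3)                   # enough clearance: no collision cost
    # Penalty c = (safe_dist - d)^2, so grad(c) = -2 (safe_dist - d) grad(d); descending
    # this gradient pushes the trajectory point along grad(d), i.e. away from obstacles.
    return -2.0 * (safe_dist - d) * grad_d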

Sudo rm -rf: Efficient Networks for Universal Audio Source Separation

2 code implementations • 14 Jul 2020 Efthymios Tzinis, Zhepei Wang, Paris Smaragdis

In this paper, we present an efficient neural network for end-to-end general purpose audio source separation.

Audio Source Separation • Speech Separation

Two-Step Sound Source Separation: Training on Learned Latent Targets

2 code implementations • 22 Oct 2019 Efthymios Tzinis, Shrikant Venkataramani, Zhepei Wang, Cem Subakan, Paris Smaragdis

In the first step, we learn a transform (and its inverse) to a latent space where masking-based separation performance using oracle masks is optimal.

Speech Separation
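
A minimal sketch of the two-step idea, under the assumption of a learnable 1-D convolutional analysis/synthesis pair (analogous to an STFT and its inverse) and ratio masks in the latent space; the module sizes and the oracle-mask formula are illustrative, not the paper's exact choices.

import torch
import torch.nn as nn

class LearnedTransform(nn.Module):
    # Learnable analysis/synthesis pair, playing the role an STFT/iSTFT would play.
    def __init__(self, n_basis=256, win=32, hop=16):
        super().__init__()
        self.encoder = nn.Conv1d(1, n_basis, kernel_size=win, stride=hop, bias=False)
        self.decoder = nn.ConvTranspose1d(n_basis, 1, kernel_size=win, stride=hop, bias=False)

    def encode(self, wav):                       # wav: (batch, 1, samples)
        return torch.relu(self.encoder(wav))     # non-negative latent representation

    def decode(self, latent):
        return self.decoder(latent)

def oracle_ratio_masks(latent_sources, eps=1e-8):
    # latent_sources: (batch, n_src, n_basis, frames) -- each source encoded separately.
    # Step 1 fits the transform so that applying these "ideal" masks to the encoded
    # mixture and decoding reconstructs the sources as well as possible.
    return latent_sources / (latent_sources.sum(dim=1, keepdim=True) + eps)

# Step 2 (not shown): a separation network is trained to predict the latent masks
# directly from the encoded mixture, using the step-1 targets rather than waveforms
# or spectrograms as its training signal.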

Continual Learning of New Sound Classes using Generative Replay

no code implementations • 3 Jun 2019 Zhepei Wang, Cem Subakan, Efthymios Tzinis, Paris Smaragdis, Laurent Charlin

We show that, by incrementally refining a classifier with generative replay, a generator that is only 4% of the size of all previous training data matches the performance of refining the classifier while keeping 20% of all previous training data.

Continual Learning
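
As a rough sketch of generative replay in this setting: rather than storing past audio, a generator trained on earlier classes produces pseudo-samples that are labeled by the previous classifier and mixed into training on the new classes. The generator.sample interface and the loader names are hypothetical, not the paper's API.

import torch
import torch.nn.functional as F

def refine_with_generative_replay(classifier, old_classifier, generator,
                                  new_data_loader, replay_batch=64, epochs=5):
    # Refine the classifier on new sound classes while "replaying" earlier classes
    # through generated pseudo-samples instead of stored audio.
    opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
    for _ in range(epochs):
        for x_new, y_new in new_data_loader:
            with torch.no_grad():
                x_old = generator.sample(replay_batch)        # hypothetical sampling API
                y_old = old_classifier(x_old).argmax(dim=-1)  # pseudo-labels from the old model
            x, y = torch.cat([x_new, x_old]), torch.cat([y_new, y_old])
            loss = F.cross_entropy(classifier(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
    return classifier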
