Search Results for author: Aswin Sivaraman

Found 7 papers, 2 papers with code

Zero-Shot Personalized Speech Enhancement through Speaker-Informed Model Selection

no code implementations • 8 May 2021 • Aswin Sivaraman, Minje Kim

To this end, we propose using an ensemble model wherein each specialist module denoises noisy utterances from a distinct partition of training set speakers.

Denoising • Model Selection • +3
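The speaker-informed selection step described in this abstract can be sketched as nearest-centroid routing: pick the specialist whose training-speaker partition is closest to the test utterance's speaker embedding. This is a minimal illustration only; the names, shapes, and distance metric are assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sketch: each specialist denoises one partition of training
# speakers; a gating step routes an utterance to the specialist whose
# partition centroid is nearest to the utterance's speaker embedding.
rng = np.random.default_rng(0)
EMB_DIM = 16

partition_centroids = rng.normal(size=(4, EMB_DIM))  # one centroid per specialist

def select_specialist(utterance_embedding: np.ndarray) -> int:
    """Return the index of the specialist with the nearest partition centroid."""
    dists = np.linalg.norm(partition_centroids - utterance_embedding, axis=1)
    return int(np.argmin(dists))

# An embedding that sits near centroid 2 routes to specialist 2.
query = partition_centroids[2] + 0.01 * rng.normal(size=EMB_DIM)
print(select_specialist(query))  # -> 2
```

In the zero-shot setting this selection happens at test time, so no clean speech from the target speaker is ever needed for training.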

Efficient Personalized Speech Enhancement through Self-Supervised Learning

no code implementations • 5 Apr 2021 • Aswin Sivaraman, Minje Kim

To this end, we pose personalization as either a zero-shot task, in which no additional clean speech of the target speaker is used for training, or a few-shot learning task, in which the goal is to minimize the duration of the clean speech used for transfer learning.

Few-Shot Learning • Model Compression • +3

Personalized Speech Enhancement through Self-Supervised Data Augmentation and Purification

no code implementations • 5 Apr 2021 • Aswin Sivaraman, Sunwoo Kim, Minje Kim

Training personalized speech enhancement models is innately a no-shot learning problem due to privacy constraints and limited access to noise-free speech from the target user.

Data Augmentation • Denoising • +3
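The self-supervised data augmentation idea the abstract alludes to can be sketched as follows: with no clean speech available, training pairs are fabricated from the user's own noisy recordings by corrupting them further, and the original noisy recording serves as a pseudo-target. The signals, scales, and variable names below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

# Hedged sketch: build (input, target) pairs without any clean speech.
rng = np.random.default_rng(0)

pseudo_target = rng.normal(size=8000)            # stand-in for a noisy user recording
extra_noise = rng.normal(size=8000)              # augmentation noise
model_input = pseudo_target + 0.5 * extra_noise  # doubly-corrupted input

# An enhancement model would be trained to map model_input back to
# pseudo_target, learning to undo the *added* corruption.
```

Purification (per the title) would then down-weight pseudo-targets that are themselves too noisy to serve as useful regression targets.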

Detecting Extraneous Content in Podcasts

no code implementations • EACL 2021 • Sravana Reddy, Yongze Yu, Aasish Pappu, Aswin Sivaraman, Rezvaneh Rezapour, Rosie Jones

Podcast episodes often contain material extraneous to the main content, such as advertisements, interleaved within the audio and the written descriptions.

Music Information Retrieval

Self-Supervised Learning from Contrastive Mixtures for Personalized Speech Enhancement

1 code implementation • 6 Nov 2020 • Aswin Sivaraman, Minje Kim

This work explores how self-supervised learning can be used universally to discover speaker-specific features that enable personalized speech enhancement models.

Contrastive Learning • Few-Shot Learning • +3
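The "contrastive mixtures" idea can be sketched like this: two mixtures that share the same speaker's speech but different noises form a positive pair, while mixtures of different speakers are negatives, so an encoder is pushed to embed positives closer together. The toy encoder, signal sizes, and noise scale below are assumptions for illustration only.

```python
import numpy as np

# Hedged sketch of contrastive pairs built from mixtures.
rng = np.random.default_rng(0)

def embed(x: np.ndarray) -> np.ndarray:
    """Stand-in for a learned encoder: unit-normalize the signal."""
    return x / np.linalg.norm(x)

speech_a = rng.normal(size=64)      # utterance from speaker A
speech_b = rng.normal(size=64)      # utterance from a different speaker

pos_1 = embed(speech_a + 0.3 * rng.normal(size=64))  # speaker A + noise 1
pos_2 = embed(speech_a + 0.3 * rng.normal(size=64))  # speaker A + noise 2
neg = embed(speech_b + 0.3 * rng.normal(size=64))    # speaker B + noise

sim_pos = float(pos_1 @ pos_2)      # same-speaker similarity
sim_neg = float(pos_1 @ neg)        # cross-speaker similarity
```

A contrastive loss (e.g., an InfoNCE-style objective) would then maximize `sim_pos` relative to `sim_neg`, yielding speaker-specific features without any labels.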

Sparse Mixture of Local Experts for Efficient Speech Enhancement

1 code implementation • 16 May 2020 • Aswin Sivaraman, Minje Kim

In this paper, we investigate a deep learning approach for speech denoising through an efficient ensemble of specialist neural networks.

Speech Denoising • Speech Enhancement
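The efficiency claim in this abstract rests on sparse gating: a small gating network scores every specialist ("local expert"), but only the top-scoring expert actually processes the input, so inference cost stays close to that of a single small network. The sketch below uses top-1 routing with linear experts; all names, shapes, and the linear form are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of a sparse (top-1) mixture of local experts.
rng = np.random.default_rng(0)
N_EXPERTS, DIM = 4, 16

gate_w = rng.normal(size=(N_EXPERTS, DIM))                  # gating network weights
experts = [rng.normal(size=(DIM, DIM)) for _ in range(N_EXPERTS)]

def sparse_moe(x: np.ndarray) -> np.ndarray:
    """Route x to the single highest-scoring expert and run only that expert."""
    scores = gate_w @ x
    k = int(np.argmax(scores))      # top-1 selection: sparsity in compute
    return experts[k] @ x           # the other N_EXPERTS - 1 experts never run

y = sparse_moe(rng.normal(size=DIM))
```

Top-1 routing is one common choice; softer top-k gating trades some of the compute savings for smoother training.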

Deep Autotuner: A Data-Driven Approach to Natural-Sounding Pitch Correction for Singing Voice in Karaoke Performances

no code implementations • 3 Feb 2019 • Sanna Wager, George Tzanetakis, Cheng-i Wang, Lijiang Guo, Aswin Sivaraman, Minje Kim

This approach differs from commercially used automatic pitch correction systems, where notes in the vocal tracks are shifted to be centered around notes in a user-defined score or mapped to the closest pitch among the twelve equal-tempered scale degrees.
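The commercial baseline the abstract contrasts against, snapping to the nearest of the twelve equal-tempered scale degrees, is easy to state in code. This sketch shows only that baseline (with an assumed A4 = 440 Hz reference), not the paper's data-driven correction.

```python
import math

def snap_to_equal_temperament(freq_hz: float, ref_a4: float = 440.0) -> float:
    """Map a frequency to the nearest 12-TET pitch relative to ref_a4."""
    semitones = round(12 * math.log2(freq_hz / ref_a4))  # nearest scale degree
    return ref_a4 * 2 ** (semitones / 12)

print(round(snap_to_equal_temperament(450.0), 2))  # -> 440.0
```

The paper's approach instead predicts continuous, context-dependent pitch shifts, avoiding the robotic quality that hard quantization like this can introduce.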
