Search Results for author: Shrikant Venkataramani

Found 10 papers, 4 papers with code

Personalized PercepNet: Real-time, Low-complexity Target Voice Separation and Enhancement

no code implementations · 8 Jun 2021 · Ritwik Giri, Shrikant Venkataramani, Jean-Marc Valin, Umut Isik, Arvindh Krishnaswamy

The presence of multiple talkers in the surrounding environment poses a difficult challenge for real-time speech communication systems, given the constraints on network size and complexity.

Self-supervised Learning for Speech Enhancement

1 code implementation · 18 Jun 2020 · Yu-Che Wang, Shrikant Venkataramani, Paris Smaragdis

Supervised learning for single-channel speech enhancement requires carefully labeled training examples where the noisy mixture is input into the network and the network is trained to produce an output close to the ideal target.
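This supervised setup can be illustrated with a minimal numpy sketch (the toy signals and the identity "network" below are hypothetical stand-ins, not the paper's model): the network input is the noisy mixture, and the training objective drives the output toward the clean target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data standing in for a labeled training example.
clean = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s, 440 Hz tone
noise = 0.3 * rng.standard_normal(16000)
noisy = clean + noise  # the network input in the supervised setup

def mse(estimate, target):
    """Typical regression loss: mean squared error against the clean target."""
    return float(np.mean((estimate - target) ** 2))

# A "network" that merely copies its input scores the loss of the raw mixture
# (the noise power here); a perfect enhancer would drive this loss to zero.
loss_identity = mse(noisy, clean)
loss_oracle = mse(clean, clean)
```

This is the sense in which the examples must be "carefully labeled": the loss is only computable because a clean reference exists for every noisy input.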

Audio and Speech Processing Sound

Efficient Trainable Front-Ends for Neural Speech Enhancement

no code implementations · 20 Feb 2020 · Jonah Casebeer, Umut Isik, Shrikant Venkataramani, Arvindh Krishnaswamy

Many neural speech enhancement and source separation systems operate in the time-frequency domain.
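A minimal numpy sketch of what operating in the time-frequency domain typically means for such systems: analyze with an STFT, apply a per-bin mask, and resynthesize by overlap-add. The hand-rolled STFT and the all-pass mask below are placeholders for self-containment, not the paper's trainable front end.

```python
import numpy as np

def stft(x, n_fft=512, hop=256):
    """Minimal STFT: Hann-windowed frames at 50% overlap (a sketch, not librosa)."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i*hop : i*hop + n_fft] * win for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)  # shape: (frames, freq bins)

def istft(X, n_fft=512, hop=256):
    """Inverse via windowed overlap-add, normalized by the accumulated window energy."""
    win = np.hanning(n_fft)
    frames = np.fft.irfft(X, n=n_fft, axis=1) * win
    out = np.zeros(hop * (len(X) - 1) + n_fft)
    norm = np.zeros_like(out)
    for i, f in enumerate(frames):
        out[i*hop : i*hop + n_fft] += f
        norm[i*hop : i*hop + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-8)

x = np.random.default_rng(1).standard_normal(4096)
X = stft(x)
mask = np.ones_like(X.real)  # a real system predicts a mask per T-F bin
y = istft(X * mask)
```

With the trivial all-pass mask, interior samples round-trip through the transform; a trained network would instead predict a mask that suppresses noise bins.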

Speech Enhancement

Two-Step Sound Source Separation: Training on Learned Latent Targets

2 code implementations · 22 Oct 2019 · Efthymios Tzinis, Shrikant Venkataramani, Zhepei Wang, Cem Subakan, Paris Smaragdis

In the first step we learn a transform (and its inverse) to a latent space where masking-based separation performance using oracles is optimal.
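The oracle-masking idea can be sketched as follows, using random nonnegative codes as a stand-in for the learned latent representation (the paper learns that transform; this toy skips it): when ground truth is available, a ratio mask recovers a source exactly from an additive mixture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in "latent" codes for two sources; in the paper these come from a
# learned transform, here they are just nonnegative random vectors.
s1 = rng.random(1024)
s2 = rng.random(1024)
mix = s1 + s2  # additive mixture in the latent space

# Oracle ("ideal ratio") mask for source 1 -- computable only with ground
# truth, which is why it serves as an upper bound / training target.
oracle_mask = s1 / np.maximum(mix, 1e-12)
est1 = oracle_mask * mix  # masking-based separation with the oracle mask
```

Because the mixture is additive and the codes nonnegative, the oracle estimate matches the source; the point of learning the transform is to make real mixtures behave as closely as possible to this ideal case.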

Speech Separation · Vocal Bursts Valence Prediction

Class-conditional embeddings for music source separation

no code implementations · 7 Nov 2018 · Prem Seetharaman, Gordon Wichern, Shrikant Venkataramani, Jonathan Le Roux

Isolating individual instruments in a musical mixture has a myriad of potential applications, and seems imminently achievable given the levels of performance reached by recent deep learning methods.

Clustering · Deep Clustering · +1

Unsupervised Deep Clustering for Source Separation: Direct Learning from Mixtures using Spatial Information

1 code implementation · 5 Nov 2018 · Efthymios Tzinis, Shrikant Venkataramani, Paris Smaragdis

We present a monophonic source separation system that is trained by only observing mixtures with no ground truth separation information.

Clustering · Deep Clustering · +2

End-to-end Networks for Supervised Single-channel Speech Separation

no code implementations · 5 Oct 2018 · Shrikant Venkataramani, Paris Smaragdis

The performance of single channel source separation algorithms has improved greatly in recent times with the development and deployment of neural networks.

Speech Separation

End-to-end Source Separation with Adaptive Front-Ends

1 code implementation · 6 May 2017 · Shrikant Venkataramani, Jonah Casebeer, Paris Smaragdis

We present an auto-encoder neural network that can act as an equivalent to short-time front-end transforms.
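A rough sketch of the front-end idea, assuming a fixed DCT-II filter bank in place of the learned encoder (the paper trains these weights end to end; everything below is a hypothetical stand-in): analysis is a matrix applied to short frames, and synthesis inverts that matrix, just as an auto-encoder's decoder inverts its encoder.

```python
import numpy as np

N = 64  # frame length == number of analysis filters
n = np.arange(N)
# DCT-II matrix as a stand-in for learned analysis filters; an adaptive
# front end would make this matrix a trainable parameter instead.
basis = np.cos(np.pi * np.outer(n, 2 * n + 1) / (2 * N))

def encode(x):
    """Front end: split into non-overlapping frames, apply the filter bank."""
    frames = x[: len(x) // N * N].reshape(-1, N)
    return frames @ basis.T  # each row: one frame's transform coefficients

def decode(codes):
    """Back end: invert the (square, full-rank) analysis matrix per frame."""
    return np.linalg.solve(basis, codes.T).T.reshape(-1)

x = np.random.default_rng(3).standard_normal(256)
codes = encode(x)   # (4 frames, 64 coefficients) -- a "learned spectrogram"
x_hat = decode(codes)
```

Non-overlapping frames keep the sketch short; a practical front end would use overlapping windows and overlap-add synthesis, as in a standard STFT.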

Sound
