Search Results for author: Arvindh Krishnaswamy

Found 15 papers, 3 papers with code

Real-Time Packet Loss Concealment With Mixed Generative and Predictive Model

1 code implementation • 11 May 2022 • Jean-Marc Valin, Ahmed Mustafa, Christopher Montgomery, Timothy B. Terriberry, Michael Klingbeil, Paris Smaragdis, Arvindh Krishnaswamy

As deep speech enhancement algorithms have recently demonstrated capabilities that greatly surpass those of their traditional counterparts for suppressing noise, reverberation and echo, attention is turning to the problem of packet loss concealment (PLC).

Speech Enhancement • Speech Synthesis
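The traditional counterparts this abstract alludes to include simple waveform-repetition concealment. A minimal sketch of that baseline, for context only (the function name and frame size are illustrative, not from the paper):

```python
import numpy as np

def conceal(packets, frame_len=160):
    """Naive packet-repetition concealment: each lost packet (None) is
    filled with the last received packet, or silence if none has arrived.
    This is the kind of classical baseline that neural PLC improves on."""
    out = []
    last = np.zeros(frame_len)  # silence until the first packet arrives
    for p in packets:
        if p is None:
            p = last            # repeat the previous packet in the gap
        out.append(p)
        last = p
    return np.concatenate(out)
```

Neural PLC models instead generate plausible speech continuations for the gap, which is why the abstract frames PLC as the next target after denoising.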

End-to-end LPCNet: A Neural Vocoder With Fully-Differentiable LPC Estimation

1 code implementation • 23 Feb 2022 • Krishna Subramani, Jean-Marc Valin, Umut Isik, Paris Smaragdis, Arvindh Krishnaswamy

Neural vocoders have recently demonstrated high-quality speech synthesis, but typically require high computational complexity.

Speech Synthesis
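LPCNet's linear-prediction step is classically computed from frame autocorrelations with the Levinson-Durbin recursion; this paper's contribution is making that estimation fully differentiable and learned end-to-end. A sketch of the classical, non-neural version for reference:

```python
import numpy as np

def lpc_coefficients(signal, order):
    """Estimate LPC coefficients with the Levinson-Durbin recursion.
    Returns the prediction polynomial a = [1, a1, ..., a_order], so that
    x[n] is approximated by -sum(a[1:] * x[n-1 ... n-order])."""
    # Biased sample autocorrelation, lags 0..order
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:][:order + 1]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]                                    # prediction error power
    for i in range(1, order + 1):
        k = -np.dot(a[:i], r[i:0:-1]) / err       # reflection coefficient
        a[:i + 1] += k * a[:i + 1][::-1]          # order-update of the filter
        err *= 1.0 - k * k
    return a
```

The end-to-end model replaces this closed-form estimate with a differentiable computation so the LPC analysis can be trained jointly with the vocoder.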

Neural Speech Synthesis on a Shoestring: Improving the Efficiency of LPCNet

1 code implementation • 22 Feb 2022 • Jean-Marc Valin, Umut Isik, Paris Smaragdis, Arvindh Krishnaswamy

Neural speech synthesis models can synthesize high-quality speech, but typically require high computational complexity to do so.

Speech Synthesis

Robust Audio Anomaly Detection

no code implementations • 3 Feb 2022 • Wo Jae Lee, Karim Helwani, Arvindh Krishnaswamy, Srikanth Tenneti

The presented approach does not assume the presence of labeled anomalies in the training dataset; it uses a novel deep neural network architecture to learn the temporal dynamics of the multivariate time series at multiple resolutions while remaining robust to contamination of the training data.

Anomaly Detection • Time Series

Personalized PercepNet: Real-time, Low-complexity Target Voice Separation and Enhancement

no code implementations • 8 Jun 2021 • Ritwik Giri, Shrikant Venkataramani, Jean-Marc Valin, Umut Isik, Arvindh Krishnaswamy

The presence of multiple talkers in the surrounding environment poses a difficult challenge for real-time speech communication systems, given the constraints on network size and complexity.

Semi-Supervised Singing Voice Separation with Noisy Self-Training

no code implementations • 16 Feb 2021 • Zhepei Wang, Ritwik Giri, Umut Isik, Jean-Marc Valin, Arvindh Krishnaswamy

Given a limited set of labeled data, we present a method to leverage a large volume of unlabeled data to improve the model's performance.

Data Augmentation
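Noisy self-training follows the general pseudo-labeling recipe: a teacher trained on the labeled set labels the unlabeled pool, and a student is retrained on both. A toy sketch of that loop with a nearest-centroid classifier (the paper trains a full separation network, not this toy model, and additionally injects noise/augmentation):

```python
import numpy as np

def self_train(x_lab, y_lab, x_unlab, rounds=3):
    """Toy pseudo-labeling loop: fit on labeled data, pseudo-label the
    unlabeled pool, refit on the union, repeat."""
    def fit_centroids(x, y):
        return np.stack([x[y == c].mean(axis=0) for c in np.unique(y)])

    def predict(centroids, x):
        d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=-1)
        return d.argmin(axis=1)

    centroids = fit_centroids(x_lab, y_lab)        # teacher: labeled data only
    for _ in range(rounds):
        pseudo = predict(centroids, x_unlab)       # label the unlabeled pool
        x_all = np.concatenate([x_lab, x_unlab])
        y_all = np.concatenate([y_lab, pseudo])
        centroids = fit_centroids(x_all, y_all)    # student: labeled + pseudo
    return centroids, predict
```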

Enhancing into the codec: Noise Robust Speech Coding with Vector-Quantized Autoencoders

no code implementations • 12 Feb 2021 • Jonah Casebeer, Vinjai Vale, Umut Isik, Jean-Marc Valin, Ritwik Giri, Arvindh Krishnaswamy

Audio codecs based on discretized neural autoencoders have recently been developed and shown to provide significantly higher compression levels for comparable quality speech output.
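The discretization step in a vector-quantized autoencoder is a nearest-neighbour codebook lookup on the encoder's latents; only the code indices need to be transmitted. A minimal sketch (the codebook is fixed here, whereas such codecs learn it jointly with the encoder and decoder):

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each latent vector in z to its nearest codebook entry.
    Returns the code indices (what the codec transmits) and the
    quantized latents (what the decoder consumes)."""
    d = np.linalg.norm(z[:, None, :] - codebook[None, :, :], axis=-1)
    idx = d.argmin(axis=1)
    return idx, codebook[idx]
```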

Enhancing Audio Augmentation Methods with Consistency Learning

no code implementations • 9 Feb 2021 • Turab Iqbal, Karim Helwani, Arvindh Krishnaswamy, Wenwu Wang

For tasks such as classification, there is a good case for learning representations of the data that are invariant to such transformations, yet this is not explicitly enforced by classification losses such as the cross-entropy loss.

Audio Classification • Audio Tagging • +3
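One common way to enforce the invariance the abstract describes is to add a consistency term to the cross-entropy loss, penalizing disagreement between the model's predictions on a clean input and its augmented view. A schematic of that idea (the paper's exact loss and weighting may differ):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_clean, logits_aug, labels, lam=1.0):
    """Cross-entropy on the clean view plus an MSE consistency penalty
    between the class probabilities of the clean and augmented views."""
    p_clean = softmax(logits_clean)
    p_aug = softmax(logits_aug)
    n = len(labels)
    ce = -np.log(p_clean[np.arange(n), labels] + 1e-12).mean()
    consistency = ((p_clean - p_aug) ** 2).sum(axis=-1).mean()
    return ce + lam * consistency
```

With lam = 0 this reduces to the plain cross-entropy loss, which by itself does nothing to make the representations invariant to the augmentations.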

PoCoNet: Better Speech Enhancement with Frequency-Positional Embeddings, Semi-Supervised Conversational Data, and Biased Loss

no code implementations • 11 Aug 2020 • Umut Isik, Ritwik Giri, Neerad Phansalkar, Jean-Marc Valin, Karim Helwani, Arvindh Krishnaswamy

Neural network applications generally benefit from larger models, but for current speech enhancement models, larger-scale networks often suffer from decreased robustness to the variety of real-world use cases beyond what is encountered in training data.

Speech Enhancement

Efficient Trainable Front-Ends for Neural Speech Enhancement

no code implementations • 20 Feb 2020 • Jonah Casebeer, Umut Isik, Shrikant Venkataramani, Arvindh Krishnaswamy

Many neural speech enhancement and source separation systems operate in the time-frequency domain.

Speech Enhancement
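The fixed time-frequency front-end such systems typically use is a windowed short-time Fourier transform. A minimal sketch of that non-trainable baseline, which trainable front-ends replace or augment with learnable filters:

```python
import numpy as np

def stft(x, frame_len=512, hop=128):
    """Windowed STFT: slice the signal into overlapping frames, apply a
    Hann window, and take the real FFT of each frame. Output shape is
    (n_frames, frame_len // 2 + 1)."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=-1)
```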
