Search Results for author: Nima Mesgarani

Found 29 papers, 16 papers with code

Listen, Chat, and Edit: Text-Guided Soundscape Modification for Enhanced Auditory Experience

no code implementations • 6 Feb 2024 • Xilin Jiang, Cong Han, Yinghao Aaron Li, Nima Mesgarani

In daily life, we encounter a variety of sounds, both desirable and undesirable, with limited control over their presence and volume.

Language Modelling Large Language Model

Contextual Feature Extraction Hierarchies Converge in Large Language Models and the Brain

no code implementations • 31 Jan 2024 • Gavin Mischler, Yinghao Aaron Li, Stephan Bickel, Ashesh D. Mehta, Nima Mesgarani

We also compare the feature extraction pathways of the LLMs to each other and identify new ways in which high-performing models have converged toward similar hierarchical processing mechanisms.

Exploring Self-Supervised Contrastive Learning of Spatial Sound Event Representation

no code implementations • 27 Sep 2023 • Xilin Jiang, Cong Han, Yinghao Aaron Li, Nima Mesgarani

In this study, we present a simple multi-channel framework for contrastive learning (MC-SimCLR) to encode the 'what' and 'where' of spatial audio.

Contrastive Learning Data Augmentation
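The snippet above describes a SimCLR-style contrastive framework. As a rough illustration of the underlying objective, here is a generic NT-Xent (normalized temperature-scaled cross-entropy) loss in numpy; this is the standard SimCLR loss, not MC-SimCLR's exact multi-channel formulation, and all names are illustrative:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss for a batch of paired embeddings.

    z1, z2: (N, D) embeddings of two augmented views of the same N clips.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize
    sim = z @ z.T / temperature                        # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    # the positive for row i is the other view of the same clip
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
loss = nt_xent_loss(a, a + 0.01 * rng.normal(size=(4, 8)))
```

In the multi-channel setting, the two "views" would come from augmentations that preserve both spectral content and spatial cues.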

HiFTNet: A Fast High-Quality Neural Vocoder with Harmonic-plus-Noise Filter and Inverse Short Time Fourier Transform

no code implementations • 18 Sep 2023 • Yinghao Aaron Li, Cong Han, Xilin Jiang, Nima Mesgarani

Subjective evaluations on LJSpeech show that our model significantly outperforms both iSTFTNet and HiFi-GAN, achieving ground-truth-level performance.

Speech Synthesis

SLMGAN: Exploiting Speech Language Model Representations for Unsupervised Zero-Shot Voice Conversion in GANs

no code implementations • 18 Jul 2023 • Yinghao Aaron Li, Cong Han, Nima Mesgarani

In recent years, large-scale pre-trained speech language models (SLMs) have demonstrated remarkable advancements in various generative speech modeling applications, such as text-to-speech synthesis, voice conversion, and speech enhancement.

Generative Adversarial Network Language Modelling +4

StyleTTS 2: Towards Human-Level Text-to-Speech through Style Diffusion and Adversarial Training with Large Speech Language Models

1 code implementation • NeurIPS 2023 • Yinghao Aaron Li, Cong Han, Vinay S. Raghavan, Gavin Mischler, Nima Mesgarani

In this paper, we present StyleTTS 2, a text-to-speech (TTS) model that leverages style diffusion and adversarial training with large speech language models (SLMs) to achieve human-level TTS synthesis.

Speech Synthesis

DeCoR: Defy Knowledge Forgetting by Predicting Earlier Audio Codes

no code implementations • 29 May 2023 • Xilin Jiang, Yinghao Aaron Li, Nima Mesgarani

Lifelong audio feature extraction involves learning new sound classes incrementally, which is essential for adapting to new data distributions over time.

Acoustic Scene Classification Continual Learning +3

naplib-python: Neural Acoustic Data Processing and Analysis Tools in Python

1 code implementation • 4 Apr 2023 • Gavin Mischler, Vinay Raghavan, Menoua Keshishian, Nima Mesgarani

Recently, the computational neuroscience community has pushed for more transparent and reproducible methods across the field.

Improved Decoding of Attentional Selection in Multi-Talker Environments with Self-Supervised Learned Speech Representation

no code implementations • 11 Feb 2023 • Cong Han, Vishal Choudhari, Yinghao Aaron Li, Nima Mesgarani

Auditory attention decoding (AAD) is a technique used to identify and amplify the talker that a listener is focused on in a noisy environment.

Phoneme-Level BERT for Enhanced Prosody of Text-to-Speech with Grapheme Predictions

2 code implementations • 20 Jan 2023 • Yinghao Aaron Li, Cong Han, Xilin Jiang, Nima Mesgarani

Large-scale pre-trained language models have been shown to be helpful in improving the naturalness of text-to-speech (TTS) models by enabling them to produce more naturalistic prosodic patterns.

StyleTTS-VC: One-Shot Voice Conversion by Knowledge Transfer from Style-Based TTS Models

1 code implementation • 29 Dec 2022 • Yinghao Aaron Li, Cong Han, Nima Mesgarani

Here, we propose a novel approach to learning disentangled speech representation by transfer learning from style-based text-to-speech (TTS) models.

Data Augmentation Transfer Learning +1

StyleTTS: A Style-Based Generative Model for Natural and Diverse Text-to-Speech Synthesis

1 code implementation • 30 May 2022 • Yinghao Aaron Li, Cong Han, Nima Mesgarani

Text-to-Speech (TTS) has recently seen great progress in synthesizing high-quality speech owing to the rapid development of parallel TTS systems, but producing speech with naturalistic prosodic variations, speaking styles and emotional tones remains challenging.

Data Augmentation Self-Supervised Learning +2

Understanding Adaptive, Multiscale Temporal Integration In Deep Speech Recognition Systems

1 code implementation • NeurIPS 2021 • Menoua Keshishian, Samuel Norman-Haignere, Nima Mesgarani

We show that training causes these integration windows to shrink at early layers and expand at higher layers, creating a hierarchy of integration windows across the network.

Speech Recognition

StarGANv2-VC: A Diverse, Unsupervised, Non-parallel Framework for Natural-Sounding Voice Conversion

2 code implementations • 21 Jul 2021 • Yinghao Aaron Li, Ali Zare, Nima Mesgarani

We present an unsupervised non-parallel many-to-many voice conversion (VC) method using a generative adversarial network (GAN) called StarGAN v2.

Generative Adversarial Network Voice Conversion

Group Communication with Context Codec for Lightweight Source Separation

1 code implementation • 14 Dec 2020 • Yi Luo, Cong Han, Nima Mesgarani

A context codec module, containing a context encoder and a context decoder, is designed as a learnable downsampling and upsampling module to decrease the length of a sequential feature processed by the separation module.

Speech Enhancement Speech Separation
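The context codec described above compresses a long feature sequence before separation and restores its length afterwards. In the paper the encoder and decoder are learnable modules; the sketch below substitutes simple average pooling and frame repetition as non-learnable stand-ins, purely to show the shorten-process-restore pattern:

```python
import numpy as np

def context_encode(x, factor):
    """Stand-in context encoder: shorten a (T, D) feature sequence by
    averaging non-overlapping windows of `factor` frames."""
    T, D = x.shape
    T_pad = -(-T // factor) * factor                 # round up to a multiple
    x = np.concatenate([x, np.zeros((T_pad - T, D))], axis=0)
    return x.reshape(-1, factor, D).mean(axis=1), T

def context_decode(y, factor, T):
    """Stand-in context decoder: restore the original length by
    repeating each summarized frame `factor` times."""
    return np.repeat(y, factor, axis=0)[:T]

x = np.random.randn(100, 16)
y, T = context_encode(x, factor=4)    # separation module would process y (25 frames)
x_hat = context_decode(y, 4, T)       # back to the original 100-frame length
```

The separation module then runs on the 4x-shorter sequence, which is where the computational savings come from.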

Separating Varying Numbers of Sources with Auxiliary Autoencoding Loss

no code implementations • 27 Mar 2020 • Yi Luo, Nima Mesgarani

Many recent source separation systems are designed to separate a fixed number of sources out of a mixture.
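One reading of the title's auxiliary autoencoding loss, sketched below under assumptions not stated in the snippet: the model always emits a fixed maximum number of outputs, valid slots are matched to true sources, and surplus slots are trained to reconstruct the input mixture. The permutation matching used in practice (PIT) is omitted for brevity, and all names are illustrative:

```python
import numpy as np

def a2_loss(outputs, targets, mixture):
    """Sketch of an auxiliary-autoencoding objective (assumed form).

    outputs: (max_sources, T) model outputs
    targets: (n_valid, T) true sources present in the mixture
    mixture: (T,) input mixture
    """
    n_valid = len(targets)
    mse = lambda a, b: np.mean((a - b) ** 2)
    # valid slots regress to their matched sources (permutation search omitted)
    loss = sum(mse(outputs[i], targets[i]) for i in range(n_valid))
    # surplus slots fall back to autoencoding the mixture itself
    loss += sum(mse(outputs[i], mixture) for i in range(n_valid, len(outputs)))
    return loss / len(outputs)
```

This lets one network handle mixtures with fewer sources than its output count, since unused outputs have a well-defined target.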


End-to-end Microphone Permutation and Number Invariant Multi-channel Speech Separation

2 code implementations • 30 Oct 2019 • Yi Luo, Zhuo Chen, Nima Mesgarani, Takuya Yoshioka

An important problem in ad-hoc microphone speech separation is how to guarantee the robustness of a system with respect to the locations and numbers of microphones.

Speech Separation

Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation

16 code implementations • 20 Sep 2018 • Yi Luo, Nima Mesgarani

The majority of the previous methods have formulated the separation problem through the time-frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time-frequency representation for speech separation, and the long latency in calculating the spectrograms.

Multi-task Audio Source Separation Music Source Separation +3

Real-time Single-channel Dereverberation and Separation with Time-domain Audio Separation Network

1 code implementation • ISCA Interspeech 2018 • Yi Luo, Nima Mesgarani

We investigate the recently proposed Time-domain Audio Separation Network (TasNet) in the task of real-time single-channel speech dereverberation.

Denoising Speech Dereverberation +1

TasNet: time-domain audio separation network for real-time, single-channel speech separation

3 code implementations • 1 Nov 2017 • Yi Luo, Nima Mesgarani

We directly model the signal in the time-domain using an encoder-decoder framework and perform the source separation on nonnegative encoder outputs.

Speech Separation
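The encoder-decoder pattern this snippet describes can be sketched in numpy: a learned basis projects framed waveforms to nonnegative representations, a separator produces per-source masks, and a second basis reconstructs each source. The random matrices below stand in for learned weights and the separator; this is the generic structure, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, T = 16, 32, 10                   # frame length, basis size, frame count
B_enc = rng.normal(size=(L, N))        # stand-in learned encoder basis
B_dec = rng.normal(size=(N, L))        # stand-in learned decoder basis

frames = rng.normal(size=(T, L))       # framed mixture waveform
w = np.maximum(frames @ B_enc, 0.0)    # nonnegative encoder outputs

# per-source masks from a stand-in separator; softmax over the source
# axis so the masks of the two sources sum to one in every bin
logits = rng.normal(size=(2, T, N))
masks = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

sources = (masks * w) @ B_dec          # (2, T, L): per-source frame estimates
```

Operating directly on waveform frames avoids the STFT entirely, which is what enables the low latency mentioned in the title.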

Lip2AudSpec: Speech reconstruction from silent lip movements video

1 code implementation • 26 Oct 2017 • Hassan Akbari, Himani Arora, Liangliang Cao, Nima Mesgarani

In this study, we propose a deep neural network for reconstructing intelligible speech from silent lip movement videos.

Lip Reading

Visualizing and Understanding Multilayer Perceptron Models: A Case Study in Speech Processing

no code implementations • ICML 2017 • Tasha Nagamine, Nima Mesgarani

Despite the recent success of deep learning, the nature of the transformations they apply to the input features remains poorly understood.

Speaker-independent Speech Separation with Deep Attractor Network

no code implementations • 12 Jul 2017 • Yi Luo, Zhuo Chen, Nima Mesgarani

A reference point (attractor) is created in the embedding space to represent each speaker, defined as the centroid of that speaker's embedded time-frequency bins.

Speech Separation
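The attractor mechanism in the snippet above reduces to a simple computation once the embeddings exist: each attractor is the centroid of its speaker's embedded time-frequency bins, and soft masks come from the similarity between every bin and each attractor. A minimal sketch, assuming oracle one-hot speaker assignments (as available during training):

```python
import numpy as np

def danet_masks(emb, assign):
    """Attractor-style soft masking sketch.

    emb:    (F*T, D) embeddings of time-frequency bins
    assign: (F*T, S) one-hot speaker assignments for each bin
    """
    # attractor for each speaker = centroid of its assigned embeddings
    attractors = (assign.T @ emb) / assign.sum(axis=0)[:, None]   # (S, D)
    # soft masks from similarity between each bin and each attractor
    scores = emb @ attractors.T                                   # (F*T, S)
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

At inference time, where oracle assignments are unavailable, the original work estimates attractors instead (e.g. by clustering), which the single-microphone DANet entry below addresses.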

Deep attractor network for single-microphone speaker separation

1 code implementation • 27 Nov 2016 • Zhuo Chen, Yi Luo, Nima Mesgarani

We propose a novel deep learning framework for single channel speech separation by creating attractor points in high dimensional embedding space of the acoustic signals which pull together the time-frequency bins corresponding to each source.

Speaker Separation Speech Separation

Deep Clustering and Conventional Networks for Music Separation: Stronger Together

no code implementations • 18 Nov 2016 • Yi Luo, Zhuo Chen, John R. Hershey, Jonathan Le Roux, Nima Mesgarani

Deep clustering is the first method to handle general audio separation scenarios with multiple sources of the same type and an arbitrary number of sources, performing impressively in speaker-independent speech separation tasks.

Clustering Deep Clustering +3
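In deep clustering, the network's job is to embed time-frequency bins so that bins of the same source cluster together; at test time a plain clustering step over those embeddings yields binary masks. The sketch below shows only that final step, with a tiny deterministic k-means (farthest-point initialization) standing in for whatever clusterer is used in practice:

```python
import numpy as np

def kmeans_masks(emb, k=2, iters=20):
    """Cluster TF-bin embeddings (N, D) into k sources; return one-hot masks."""
    # deterministic farthest-point initialization
    centers = emb[:1].copy()
    for _ in range(k - 1):
        d = ((emb[:, None, :] - centers[None]) ** 2).sum(-1).min(axis=1)
        centers = np.vstack([centers, emb[d.argmax()]])
    # standard Lloyd iterations
    for _ in range(iters):
        d = ((emb[:, None, :] - centers[None]) ** 2).sum(-1)   # (N, k)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = emb[labels == j].mean(axis=0)
    return np.eye(k)[labels]    # (N, k) one-hot binary masks
```

Because the cluster count k is chosen at inference time, this is what gives deep clustering its flexibility with respect to the number of sources, as the snippet notes.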
