Search Results for author: Zhong-Qiu Wang

Found 24 papers, 5 papers with code

ctPuLSE: Close-Talk, and Pseudo-Label Based Far-Field, Speech Enhancement

no code implementations • 28 Jul 2024 • Zhong-Qiu Wang

In this context, assuming that a training set consisting of real-recorded pairs of close-talk and far-field mixtures is available, we propose to address this difficulty via close-talk speech enhancement: an enhancement model is first trained on simulated mixtures to enhance real-recorded close-talk mixtures, and the estimated close-talk speech can then be utilized as supervision (i.e., as pseudo-labels) for training far-field speech enhancement models directly on the paired real-recorded far-field mixtures.

Pseudo Label • Speech Enhancement
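
A minimal PyTorch sketch of the two-stage recipe described above, assuming a close-talk enhancement model already trained on simulated data and a loader of paired real recordings; the function names, model interfaces, and the waveform-domain L1 loss are illustrative assumptions, not the paper's implementation.

```python
import torch

def train_far_field(ct_model, ff_model, loader, optimizer):
    """Stage 2 of the pipeline: train a far-field model on paired
    real-recorded mixtures, supervised by pseudo-labels from a frozen
    close-talk model (itself trained on simulated data beforehand)."""
    ct_model.eval()
    for close_talk_mix, far_field_mix in loader:
        with torch.no_grad():
            # Enhanced close-talk speech acts as the pseudo-label.
            pseudo_label = ct_model(close_talk_mix)
        estimate = ff_model(far_field_mix)
        # Waveform-domain L1 loss is an illustrative choice.
        loss = torch.nn.functional.l1_loss(estimate, pseudo_label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Keeping the close-talk model frozen and computing pseudo-labels under torch.no_grad() fixes the supervision, so only the far-field model is updated.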

Evolutionary Prompt Design for LLM-Based Post-ASR Error Correction

1 code implementation • 23 Jul 2024 • Rithik Sachdev, Zhong-Qiu Wang, Chao-Han Huck Yang

One representative approach is to leverage in-context learning to prompt LLMs, so that a better hypothesis can be generated by the LLM based on a carefully designed prompt and an $N$-best list of hypotheses produced by ASR systems.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +2
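
To make the prompting setup concrete, here is a sketch of folding an ASR N-best list into a correction prompt; the instruction wording is hand-written for illustration, whereas the paper evolves the prompt text automatically.

```python
def build_correction_prompt(nbest: list[str]) -> str:
    """Fold an ASR N-best list into an instruction prompt for an LLM.

    The instruction text below is a plain hand-written example, not the
    evolutionary-optimized prompt from the paper.
    """
    hypotheses = "\n".join(f"{i + 1}. {hyp}" for i, hyp in enumerate(nbest))
    return (
        "The following are N-best hypotheses from a speech recognizer, "
        "ordered from most to least likely. Report the most plausible "
        "transcription, fixing any recognition errors:\n"
        f"{hypotheses}\nCorrected transcription:"
    )

print(build_correction_prompt([
    "i scream for ice cream",
    "eye scream for ice cream",
    "i scream four ice cream",
]))
```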

Cross-Talk Reduction

no code implementations • 30 May 2024 • Zhong-Qiu Wang, Anurag Kumar, Shinji Watanabe

While far-field multi-talker mixtures are recorded, each speaker can wear a close-talk microphone so that close-talk mixtures can be recorded at the same time.

Speech Separation

SuperM2M: Supervised and Mixture-to-Mixture Co-Learning for Speech Enhancement and Robust ASR

no code implementations • 15 Mar 2024 • Zhong-Qiu Wang

To address this, the paper investigates training enhancement models directly on real target-domain data.

Speaker Separation • Speech Enhancement

Mixture to Mixture: Leveraging Close-talk Mixtures as Weak-supervision for Speech Separation

no code implementations • 14 Feb 2024 • Zhong-Qiu Wang

We propose mixture to mixture (M2M) training, a weakly-supervised neural speech separation algorithm that leverages close-talk mixtures as weak supervision for training discriminative models to separate far-field mixtures.

Speaker Separation • Speech Separation
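
The following sketch conveys the flavor of mixture-to-mixture supervision under simplifying assumptions: each estimated source is projected onto a mixture with a single-tap per-frequency least-squares filter (the paper uses multi-tap linear filters), and the filtered estimates must add back up to both the close-talk and the far-field mixtures.

```python
import torch

def project(source_stft, mix_stft):
    """Least-squares single-tap filter per frequency that maps an estimated
    source to a given mixture (close-talk or far-field). One tap keeps the
    sketch short. Shapes: complex tensors (freq, time)."""
    num = (mix_stft * source_stft.conj()).sum(dim=-1, keepdim=True)
    den = (source_stft.abs() ** 2).sum(dim=-1, keepdim=True) + 1e-8
    return (num / den) * source_stft

def m2m_loss(est_sources, close_talk_mix, far_field_mix):
    # Weak supervision: filtered estimates must reconstruct both mixtures.
    ct_recon = sum(project(s, close_talk_mix) for s in est_sources)
    ff_recon = sum(project(s, far_field_mix) for s in est_sources)
    return (ct_recon - close_talk_mix).abs().mean() \
         + (ff_recon - far_field_mix).abs().mean()
```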

USDnet: Unsupervised Speech Dereverberation via Neural Forward Filtering

no code implementations • 1 Feb 2024 • Zhong-Qiu Wang

We show that this novel methodology can promote unsupervised dereverberation of single-source reverberant speech.

Speech Dereverberation
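
A rough NumPy illustration of the forward-filtering idea: a linear filter is fit per frequency to map the network's dry-speech estimate to the observed reverberant mixture, and the residual the filtered estimate cannot explain serves as the training signal. The tap count and the plain least-squares solver are arbitrary choices for this sketch.

```python
import numpy as np

def forward_filter_loss(est, mix, taps=20):
    """Fit, per frequency, a linear filter mapping the DNN's dry-speech
    estimate to the observed reverberant mixture; penalize the residual.
    est, mix: complex STFTs of shape (freq, time)."""
    f, t = est.shape
    loss = 0.0
    for k in range(f):
        # Matrix of delayed copies of the estimate (taps 0..taps-1).
        A = np.stack([np.concatenate([np.zeros(d, complex), est[k, :t - d]])
                      for d in range(taps)], axis=1)        # (time, taps)
        g, *_ = np.linalg.lstsq(A, mix[k], rcond=None)      # filter taps
        loss += np.mean(np.abs(mix[k] - A @ g) ** 2)
    return loss / f
```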

A Single Speech Enhancement Model Unifying Dereverberation, Denoising, Speaker Counting, Separation, and Extraction

no code implementations • 12 Oct 2023 • Kohei Saijo, Wangyou Zhang, Zhong-Qiu Wang, Shinji Watanabe, Tetsunori Kobayashi, Tetsuji Ogawa

We propose a multi-task universal speech enhancement (MUSE) model that can perform five speech enhancement (SE) tasks: dereverberation, denoising, speech separation (SS), target speaker extraction (TSE), and speaker counting.

Denoising • Speech Enhancement • +2

Toward Universal Speech Enhancement for Diverse Input Conditions

no code implementations • 29 Sep 2023 • Wangyou Zhang, Kohei Saijo, Zhong-Qiu Wang, Shinji Watanabe, Yanmin Qian

Currently, there is no universal SE approach that can effectively handle diverse input conditions with a single model.

Denoising • Speech Enhancement

The Multimodal Information Based Speech Processing (MISP) 2023 Challenge: Audio-Visual Target Speaker Extraction

no code implementations • 15 Sep 2023 • Shilong Wu, Chenxi Wang, Hang Chen, Yusheng Dai, Chenyue Zhang, Ruoyu Wang, Hongbo Lan, Jun Du, Chin-Hui Lee, Jingdong Chen, Shinji Watanabe, Sabato Marco Siniscalchi, Odette Scharenborg, Zhong-Qiu Wang, Jia Pan, Jianqing Gao

This pioneering effort aims to set the first benchmark for the AVTSE task, offering fresh insights into enhancing the accuracy of back-end speech recognition systems through AVTSE in challenging and real acoustic environments.

Audio-Visual Speech Recognition • speech-recognition • +2

Neural Speech Enhancement with Very Low Algorithmic Latency and Complexity via Integrated Full- and Sub-Band Modeling

no code implementations • 18 Apr 2023 • Zhong-Qiu Wang, Samuele Cornell, Shukjae Choi, Younglo Lee, Byeong-Yeol Kim, Shinji Watanabe

We propose FSB-LSTM, a novel long short-term memory (LSTM) based architecture that integrates full- and sub-band (FSB) modeling, for single- and multi-channel speech enhancement in the short-time Fourier transform (STFT) domain.

Speech Enhancement
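
A toy block showing what integrated full- and sub-band modeling can look like: one LSTM sees all frequencies jointly per frame, while a second, shared LSTM models each frequency bin as its own time sequence. The layer sizes and residual wiring are invented here and do not reproduce the FSB-LSTM architecture.

```python
import torch
import torch.nn as nn

class FullSubBlock(nn.Module):
    """One LSTM models all frequencies jointly per frame (full-band);
    a second, shared LSTM models each frequency across time (sub-band)."""
    def __init__(self, n_freq):
        super().__init__()
        self.full = nn.LSTM(n_freq, n_freq, batch_first=True)
        self.sub = nn.LSTM(1, 1, batch_first=True)   # shared across bins

    def forward(self, x):                            # x: (batch, time, freq)
        b, t, f = x.shape
        x = x + self.full(x)[0]                      # full-band pass, residual
        s = x.permute(0, 2, 1).reshape(b * f, t, 1)  # one sequence per bin
        s = self.sub(s)[0].reshape(b, f, t).permute(0, 2, 1)
        return x + s                                 # sub-band pass, residual

out = FullSubBlock(257)(torch.randn(2, 100, 257))    # -> (2, 100, 257)
```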

Multi-Channel Target Speaker Extraction with Refinement: The WavLab Submission to the Second Clarity Enhancement Challenge

no code implementations • 15 Feb 2023 • Samuele Cornell, Zhong-Qiu Wang, Yoshiki Masuyama, Shinji Watanabe, Manuel Pariente, Nobutaka Ono

To address the challenges encountered in the CEC2 setting, we introduce four major novelties: (1) we extend the state-of-the-art TF-GridNet model, originally designed for monaural speaker separation, to multi-channel, causal speech enhancement, and observe large improvements when the TCNDenseNet used in iNeuBe is replaced with this new architecture; (2) we leverage a recent dual-window-size approach with future-frame prediction to ensure that iNeuBe-X satisfies the 5 ms constraint on algorithmic latency required by CEC2; (3) we introduce a novel speaker-conditioning branch for TF-GridNet to achieve target speaker extraction; (4) we propose a fine-tuning step in which an additional loss is computed with respect to the target speaker signal compensated with the listener audiogram.

Speaker Separation • Speech Enhancement • +1

Tackling the Cocktail Fork Problem for Separation and Transcription of Real-World Soundtracks

no code implementations • 14 Dec 2022 • Darius Petermann, Gordon Wichern, Aswin Shanmugam Subramanian, Zhong-Qiu Wang, Jonathan Le Roux

In this paper, we focus on the cocktail fork problem, which takes a three-pronged approach to source separation by separating an audio mixture such as a movie soundtrack or podcast into the three broad categories of speech, music, and sound effects (SFX, understood to include ambient noise and natural sound events).

Action Detection • Activity Detection • +4

ESPnet-SE++: Speech Enhancement for Robust Speech Recognition, Translation, and Understanding

1 code implementation • 19 Jul 2022 • Yen-Ju Lu, Xuankai Chang, Chenda Li, Wangyou Zhang, Samuele Cornell, Zhaoheng Ni, Yoshiki Masuyama, Brian Yan, Robin Scheibler, Zhong-Qiu Wang, Yu Tsao, Yanmin Qian, Shinji Watanabe

To showcase such integration, we performed experiments on carefully designed synthetic datasets for noisy-reverberant multi-channel ST and SLU tasks, which can be used as benchmark corpora for future research.

Automatic Speech Recognition • Automatic Speech Recognition (ASR) • +5

Locate This, Not That: Class-Conditioned Sound Event DOA Estimation

no code implementations • 8 Mar 2022 • Olga Slizovskaia, Gordon Wichern, Zhong-Qiu Wang, Jonathan Le Roux

Existing systems for sound event localization and detection (SELD) typically operate by estimating a source location for all classes at every time instant.

Sound Event Localization and Detection

Conditional Diffusion Probabilistic Model for Speech Enhancement

2 code implementations • 10 Feb 2022 • Yen-Ju Lu, Zhong-Qiu Wang, Shinji Watanabe, Alexander Richard, Cheng Yu, Yu Tsao

Speech enhancement is a critical component of many user-oriented audio applications, yet current systems still suffer from distorted and unnatural outputs.

Speech Enhancement • Speech Synthesis

The Cocktail Fork Problem: Three-Stem Audio Separation for Real-World Soundtracks

3 code implementations • 19 Oct 2021 • Darius Petermann, Gordon Wichern, Zhong-Qiu Wang, Jonathan Le Roux

The cocktail party problem aims at isolating any source of interest within a complex acoustic scene, and has long inspired audio source separation research.

Audio Source Separation

Multi-microphone Complex Spectral Mapping for Utterance-wise and Continuous Speech Separation

2 code implementations • 4 Oct 2020 • Zhong-Qiu Wang, Peidong Wang, DeLiang Wang

Although our system is trained on simulated room impulse responses (RIRs) based on a fixed number of microphones arranged in a given geometry, it generalizes well to a real array with the same geometry.

Speaker Separation • Speech Separation
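
For context, this is how training RIRs for a fixed microphone geometry might be simulated with the image method, here via the pyroomacoustics toolkit (an assumed choice; the paper does not prescribe a toolkit). Room size, absorption, and the 4-mic circular geometry are placeholder values.

```python
import numpy as np
import pyroomacoustics as pra  # assumed simulation toolkit for this sketch

# Shoebox room with a fixed 4-mic circular array; all values are placeholders.
room = pra.ShoeBox([6.0, 5.0, 3.0], fs=16000,
                   materials=pra.Material(0.3), max_order=17)

radius, center = 0.05, np.array([3.0, 2.5, 1.2])
angles = 2 * np.pi * np.arange(4) / 4
mics = np.stack([center[0] + radius * np.cos(angles),
                 center[1] + radius * np.sin(angles),
                 np.full(4, center[2])])             # (3, 4): fixed geometry
room.add_microphone_array(pra.MicrophoneArray(mics, room.fs))
room.add_source([1.5, 3.5, 1.6])

room.compute_rir()            # room.rir[m][s]: RIR from source s to mic m
print(len(room.rir), len(room.rir[0][0]))
```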

Sequential Multi-Frame Neural Beamforming for Speech Separation and Enhancement

no code implementations • 18 Nov 2019 • Zhong-Qiu Wang, Hakan Erdogan, Scott Wisdom, Kevin Wilson, Desh Raj, Shinji Watanabe, Zhuo Chen, John R. Hershey

This work introduces sequential neural beamforming, which alternates between neural-network-based spectral separation and beamforming-based spatial separation.

Speaker Separation • Speech Enhancement • +3
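
The spatial-separation half of such an alternation is commonly a mask-driven MVDR beamformer. Below is a standard NumPy formulation (Souden-style), not the paper's exact beamformer, with spatial covariances accumulated from a speech mask.

```python
import numpy as np

def mvdr_from_masks(mix_stft, speech_mask, ref_mic=0):
    """Mask-driven MVDR beamforming.
    mix_stft: (mics, freq, time) complex; speech_mask: (freq, time) in [0, 1]."""
    m, f, t = mix_stft.shape
    x = mix_stft.transpose(1, 0, 2)                      # (freq, mics, time)
    # Speech and noise spatial covariance matrices, weighted by the mask.
    phi_s = np.einsum('fmt,fnt->fmn', x * speech_mask[:, None], x.conj())
    phi_n = np.einsum('fmt,fnt->fmn', x * (1 - speech_mask)[:, None], x.conj())
    phi_n += 1e-6 * np.eye(m)                            # diagonal loading
    num = np.linalg.solve(phi_n, phi_s)                  # (freq, mics, mics)
    trace = np.trace(num, axis1=1, axis2=2)[:, None]
    w = num[..., ref_mic] / (trace + 1e-8)               # (freq, mics)
    return np.einsum('fm,fmt->ft', w.conj(), x)          # beamformed STFT
```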

Deep Learning Based Phase Reconstruction for Speaker Separation: A Trigonometric Perspective

no code implementations • 22 Nov 2018 • Zhong-Qiu Wang, Ke Tan, DeLiang Wang

This study investigates phase reconstruction for deep learning based monaural talker-independent speaker separation in the short-time Fourier transform (STFT) domain.

Speaker Separation

End-to-End Speech Separation with Unfolded Iterative Phase Reconstruction

no code implementations • 26 Apr 2018 • Zhong-Qiu Wang, Jonathan Le Roux, DeLiang Wang, John R. Hershey

In addition, we train through unfolded iterations of a phase reconstruction algorithm, represented as a series of STFT and inverse STFT layers.

Speech Separation
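
A compact PyTorch sketch of unfolding a Griffin-Lim-style phase reconstruction as alternating iSTFT/STFT passes; because every step is differentiable, a separation network can be trained through the unfolded iterations. The zero-phase initialization, hop size, and iteration count are arbitrary choices here.

```python
import torch

def unfolded_phase_reconstruction(mag, n_fft=512, hop=128, iters=3):
    """Alternate iSTFT/STFT a fixed number of times, keeping the estimated
    magnitudes and updating only the phase.
    mag: (freq, time) nonnegative magnitudes with freq = n_fft // 2 + 1."""
    window = torch.hann_window(n_fft)
    spec = mag.to(torch.complex64)                     # zero-phase init
    for _ in range(iters):
        wav = torch.istft(spec, n_fft, hop, window=window)
        rec = torch.stft(wav, n_fft, hop, window=window, return_complex=True)
        phase = rec / (rec.abs() + 1e-8)               # keep phase only
        spec = mag * phase                             # snap back to magnitudes
    return torch.istft(spec, n_fft, hop, window=window)
```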
