Search Results for author: Kohei Saijo

Found 9 papers, 0 papers with code

A Single Speech Enhancement Model Unifying Dereverberation, Denoising, Speaker Counting, Separation, and Extraction

no code implementations · 12 Oct 2023 · Kohei Saijo, Wangyou Zhang, Zhong-Qiu Wang, Shinji Watanabe, Tetsunori Kobayashi, Tetsuji Ogawa

We propose a multi-task universal speech enhancement (MUSE) model that can perform five speech enhancement (SE) tasks: dereverberation, denoising, speech separation (SS), target speaker extraction (TSE), and speaker counting.

Denoising · Speech Enhancement · +2

Toward Universal Speech Enhancement for Diverse Input Conditions

no code implementations · 29 Sep 2023 · Wangyou Zhang, Kohei Saijo, Zhong-Qiu Wang, Shinji Watanabe, Yanmin Qian

Currently, there is no universal SE approach that can effectively handle diverse input conditions with a single model.

Denoising · Speech Enhancement

Exploring Speech Enhancement for Low-resource Speech Synthesis

no code implementations · 19 Sep 2023 · Zhaoheng Ni, Sravya Popuri, Ning Dong, Kohei Saijo, Xiaohui Zhang, Gael Le Lan, Yangyang Shi, Vikas Chandra, Changhan Wang

High-quality and intelligible speech is essential for text-to-speech (TTS) model training; however, obtaining high-quality data for low-resource languages is challenging and expensive.

Automatic Speech Recognition (ASR) · +3

Remixing-based Unsupervised Source Separation from Scratch

no code implementations · 1 Sep 2023 · Kohei Saijo, Tetsuji Ogawa

A student model is then trained to separate the pseudo-mixtures using either the teacher's outputs or the initial mixtures as supervision.

Self-Supervised Learning

Self-Remixing: Unsupervised Speech Separation via Separation and Remixing

no code implementations · 18 Nov 2022 · Kohei Saijo, Tetsuji Ogawa

Specifically, the shuffler first separates observed mixtures and makes pseudo-mixtures by shuffling and remixing the separated signals.
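The shuffle-and-remix step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function name `shuffle_and_remix` and the batched array layout are assumptions for the example, and the real method additionally trains a solver network on these pseudo-mixtures.

```python
import numpy as np

def shuffle_and_remix(separated, rng):
    """Make pseudo-mixtures from separated sources (illustrative sketch).

    separated: (batch, n_src, samples) source estimates from the shuffler.
    Each source slot is independently permuted across the batch, then the
    shuffled sources are summed back into pseudo-mixtures.
    """
    batch, n_src, _ = separated.shape
    remixed = np.zeros_like(separated[:, 0])
    for s in range(n_src):
        perm = rng.permutation(batch)   # shuffle source s across utterances
        remixed += separated[perm, s]   # remix into new pseudo-mixtures
    return remixed                      # shape: (batch, samples)

rng = np.random.default_rng(0)
estimates = rng.normal(size=(4, 2, 100))        # 4 utterances, 2 sources
pseudo_mixtures = shuffle_and_remix(estimates, rng)
```

Because each permutation only reorders sources within the batch, the pseudo-mixtures contain exactly the same source signals as the originals, just paired differently, which is what lets a second network be supervised to re-separate them.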

Domain Adaptation · Semi-supervised Domain Adaptation · +1

Spatial Loss for Unsupervised Multi-channel Source Separation

no code implementations · 1 Apr 2022 · Kohei Saijo, Robin Scheibler

With the proposed loss, we train the neural separators based on minimum variance distortionless response (MVDR) beamforming and independent vector analysis (IVA).
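For context on the MVDR component mentioned above, the textbook minimum variance distortionless response beamformer has the closed form w = R⁻¹d / (dᴴR⁻¹d), where R is the noise spatial covariance and d the steering vector toward the target. A minimal NumPy sketch of that standard formula (not the paper's proposed spatial loss, and `mvdr_weights` is a hypothetical helper name):

```python
import numpy as np

def mvdr_weights(noise_cov, steering):
    """Classic MVDR beamformer weights: w = R^{-1} d / (d^H R^{-1} d).

    noise_cov: (M, M) Hermitian noise spatial covariance matrix
    steering:  (M,) complex steering vector toward the target source
    """
    numer = np.linalg.solve(noise_cov, steering)  # R^{-1} d
    denom = steering.conj() @ numer               # d^H R^{-1} d (scalar)
    return numer / denom

# Toy 2-mic example: with identity noise covariance, MVDR reduces to a
# matched filter normalized so the target passes with unit gain.
d = np.array([1.0, np.exp(1j * 0.3)])
w = mvdr_weights(np.eye(2, dtype=complex), d)
# Distortionless constraint: w^H d = 1
assert np.isclose(w.conj() @ d, 1.0)
```

The distortionless constraint wᴴd = 1 is what keeps the target undistorted while the quadratic term minimizes residual noise power.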

Blind Source Separation

Remix-cycle-consistent Learning on Adversarially Learned Separator for Accurate and Stable Unsupervised Speech Separation

no code implementations · 26 Mar 2022 · Kohei Saijo, Tetsuji Ogawa

A new learning algorithm for speech separation networks is designed to explicitly reduce residual noise and artifacts in the separated signal in an unsupervised manner.

Speech Separation

Independence-based Joint Dereverberation and Separation with Neural Source Model

no code implementations · 13 Oct 2021 · Kohei Saijo, Robin Scheibler

We introduce a neural network in the framework of time-decorrelation iterative source steering, which is an extension of independent vector analysis to joint dereverberation and separation.

Speech Dereverberation
