no code implementations • 20 Feb 2024 • Yanan Chen, Zihao Cui, Yingying Gao, Junlan Feng, Chao Deng, Shilei Zhang
In this study, we present a novel weighting-prediction approach that explicitly learns task relationships from downstream training information, addressing the core challenge of universal speech enhancement.
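No code accompanies this entry, so the following is only an illustrative sketch of learned task weighting in general, not the authors' method; the module name, dimensions, and the softmax parameterization are all assumptions.

```python
import torch
import torch.nn as nn

class TaskWeightPredictor(nn.Module):
    """Hypothetical sketch: predict per-task loss weights from a shared
    utterance embedding. NOT the paper's implementation."""

    def __init__(self, embed_dim: int, num_tasks: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(embed_dim, embed_dim // 2),
            nn.ReLU(),
            nn.Linear(embed_dim // 2, num_tasks),
        )

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        # Softmax keeps the predicted weights positive and summing to one.
        return torch.softmax(self.proj(embedding), dim=-1)

# Usage: weight per-task losses (e.g. denoising vs. dereverberation).
predictor = TaskWeightPredictor(embed_dim=256, num_tasks=3)
embedding = torch.randn(8, 256)    # batch of utterance embeddings
task_losses = torch.rand(8, 3)     # placeholder per-task losses
weights = predictor(embedding)     # shape (8, 3)
total_loss = (weights * task_losses).sum(dim=-1).mean()
```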
no code implementations • 23 Oct 2023 • Yingying Gao, Shilei Zhang, Zihao Cui, Chao Deng, Junlan Feng
Cascading multiple pre-trained models is an effective way to compose an end-to-end system.
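As a minimal sketch of such a cascade (not this paper's architecture; the stage names and the linear bridge are assumptions), two pre-trained stages can be composed into a single end-to-end module:

```python
import torch
import torch.nn as nn

class CascadedSystem(nn.Module):
    """Illustrative only: chain two pre-trained models end to end.
    `encoder` and `decoder` stand in for any pre-trained stages; the
    bridge adapts the representation dimension between them."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module,
                 enc_dim: int, dec_dim: int):
        super().__init__()
        self.encoder = encoder
        self.bridge = nn.Linear(enc_dim, dec_dim)  # dimension adapter
        self.decoder = decoder

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.bridge(self.encoder(x)))
```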
no code implementations • 20 Oct 2023 • Yingying Gao, Shilei Zhang, Zihao Cui, Yanhan Xu, Chao Deng, Junlan Feng
Self-supervised pre-trained models such as HuBERT and WavLM leverage unlabeled speech data for representation learning and offer significant improvements for numerous downstream tasks.
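For context, extracting such representations from a public WavLM checkpoint might look like the sketch below; the "microsoft/wavlm-base" checkpoint and the synthetic waveform are assumptions for illustration, unrelated to this paper's setup.

```python
import torch
from transformers import AutoFeatureExtractor, WavLMModel

# Load a pre-trained WavLM checkpoint (assumed for illustration).
extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base")
model = WavLMModel.from_pretrained("microsoft/wavlm-base")
model.eval()

waveform = torch.randn(16000)  # 1 s of stand-in 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000,
                   return_tensors="pt")
with torch.no_grad():
    # Frame-level representations usable by downstream tasks.
    hidden = model(**inputs).last_hidden_state  # (1, frames, hidden_dim)
print(hidden.shape)
```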