no code implementations • 20 Mar 2024 • Huali Zhou, Yuke Lin, Dong Liu, Ming Li
This work aims to promote Chinese opera research in both the music and speech domains, with a primary focus on overcoming data limitations.
no code implementations • 7 Oct 2023 • Ze Li, Yuke Lin, Ning Jiang, Xiaoyi Qin, Guoqing Zhao, Haiying Wu, Ming Li
Pseudo-labeling with large-scale unlabeled data is crucial for semi-supervised domain adaptation in speaker verification.
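The abstract does not specify the exact pseudo-labeling procedure; a common variant assigns each unlabeled utterance to its nearest speaker centroid in embedding space and keeps only high-confidence assignments. The function below is a minimal sketch of that idea, assuming precomputed embeddings and centroids (the names and the cosine-similarity threshold are illustrative, not the paper's method):

```python
import numpy as np

def generate_pseudo_labels(embeddings, centroids, threshold=0.7):
    """Assign each unlabeled embedding to its nearest speaker centroid;
    keep only assignments whose cosine similarity exceeds the threshold.
    (Hypothetical sketch -- not the authors' actual pipeline.)"""
    # L2-normalize so the dot product equals cosine similarity
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    cen = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    sims = emb @ cen.T                   # shape: (n_utterances, n_speakers)
    labels = sims.argmax(axis=1)         # nearest centroid per utterance
    confidences = sims.max(axis=1)
    keep = confidences >= threshold      # discard low-confidence pseudo-labels
    return labels[keep], np.nonzero(keep)[0]
```

Filtering by confidence is what makes the loop usable for domain adaptation: low-similarity utterances, which are the most likely to be mislabeled, are simply excluded from the next training round.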
1 code implementation • 25 Sep 2023 • Yuke Lin, Xiaoyi Qin, Ning Jiang, Guoqing Zhao, Ming Li
It is widely acknowledged that discriminative representations for speaker verification can be extracted from verbal speech.
no code implementations • 17 Aug 2023 • Ze Li, Yuke Lin, Xiaoyi Qin, Ning Jiang, Guoqing Zhao, Ming Li
For Track 1, we train a ResNet-based network.
no code implementations • 15 Aug 2023 • Ming Cheng, Weiqing Wang, Xiaoyi Qin, Yuke Lin, Ning Jiang, Guoqing Zhao, Ming Li
This paper describes the DKU-MSXF submission to track 4 of the VoxCeleb Speaker Recognition Challenge 2023 (VoxSRC-23).
no code implementations • 14 Aug 2023 • Yuke Lin, Xiaoyi Qin, Guoqing Zhao, Ming Cheng, Ning Jiang, Haiyang Wu, Ming Li
In this paper, we introduce a large-scale and high-quality audio-visual speaker verification dataset, named VoxBlink.
no code implementations • 28 Oct 2022 • Yuke Lin, Xiaoyi Qin, Huahua Cui, Zhenyi Zhu, Ming Li
We collect a set of clips containing laughter by running a laughter detection script on VoxCeleb and part of the CN-Celeb dataset.
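The snippet does not describe the detection script itself; one plausible filtering rule keeps a clip only if its frame-level laughter probabilities contain a sustained high-confidence run. The sketch below illustrates that rule under stated assumptions (the detector outputs, thresholds, and function name are all hypothetical):

```python
def select_laughter_clips(clip_scores, min_prob=0.5, min_frames=10):
    """Keep clip IDs whose per-frame laughter probabilities contain a
    sustained run above min_prob (a stand-in for the detection script).
    clip_scores: dict mapping clip_id -> list of frame probabilities."""
    selected = []
    for clip_id, frame_probs in clip_scores.items():
        # find the longest consecutive run of frames above the threshold
        run = best = 0
        for p in frame_probs:
            run = run + 1 if p >= min_prob else 0
            best = max(best, run)
        if best >= min_frames:   # sustained laughter, not a one-frame spike
            selected.append(clip_id)
    return selected
```

Requiring a sustained run rather than a single high-probability frame suppresses spurious detections from short noise bursts, which matters when filtering corpora as large as VoxCeleb.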