no code implementations • 18 Jan 2024 • Hui Jiao, Bei Peng, Lu Zong, Xiaojun Zhang, Xinwei Li
ChatGPT, a language model built on large-scale pre-training, has profoundly influenced the field of machine translation.
no code implementations • 14 Nov 2023 • Xinwei Li, Li Lin, Shuai Wang, Chen Qian
The first stage pre-trains the student model on a large amount of filtered multi-modal data.
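As a rough illustration of such a first stage, here is a minimal PyTorch sketch of feature-matching distillation, in which a student is pre-trained to imitate a frozen teacher on batches standing in for the filtered multi-modal data; the toy encoder architecture, the MSE objective, and all hyperparameters below are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy encoder standing in for both teacher and student;
# the paper's actual architectures are not specified in this excerpt.
class Encoder(nn.Module):
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_out)
        )

    def forward(self, x):
        return self.net(x)

teacher = Encoder(512, 128).eval()   # frozen teacher
student = Encoder(512, 128)          # student being pre-trained
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

def distill_step(batch):
    """One stage-one step: match student features to teacher features
    on a batch standing in for the filtered multi-modal data."""
    with torch.no_grad():
        t = teacher(batch)           # teacher targets, no gradients
    s = student(batch)
    loss = F.mse_loss(s, t)          # simple feature-matching distillation loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

print(distill_step(torch.randn(32, 512)))  # toy batch of features
```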
no code implementations • 7 Dec 2020 • Xinwei Li, Yuanyuan Zhang, Xiaodan Zhuang, Daben Liu
We demonstrate that f-SpecAugment is more effective than utterance-level SpecAugment for deep CNN-based hybrid models.
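For context, the sketch below shows standard utterance-level SpecAugment-style time and frequency masking on a log-mel spectrogram; the abstract indicates f-SpecAugment applies masking below the utterance level, but its exact scheme is not given here, so only the baseline is illustrated, and the mask counts and widths are placeholder values.

```python
import numpy as np

def spec_augment(spec, num_freq_masks=2, max_f=8, num_time_masks=2, max_t=20, rng=None):
    """Utterance-level SpecAugment-style masking on a (frames, mel_bins)
    log-mel spectrogram. Mask counts and widths are assumed values."""
    rng = rng or np.random.default_rng()
    spec = spec.copy()
    n_frames, n_bins = spec.shape
    for _ in range(num_freq_masks):                 # frequency masks
        f = rng.integers(0, max_f + 1)
        f0 = rng.integers(0, max(1, n_bins - f))
        spec[:, f0:f0 + f] = 0.0
    for _ in range(num_time_masks):                 # time masks
        t = rng.integers(0, max_t + 1)
        t0 = rng.integers(0, max(1, n_frames - t))
        spec[t0:t0 + t, :] = 0.0
    return spec

masked = spec_augment(np.random.randn(300, 80))     # 300 frames, 80 mel bins
```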
no code implementations • 10 Jul 2020 • Rongqing Huang, Ossama Abdel-hamid, Xinwei Li, Gunnar Evermann
In ASR, many utterances contain rich named entities.
no code implementations • 30 Jan 2020 • Xiuxian Xu, Pei Wang, Xiaozheng Gan, Ya-Xin Li, Li Zhang, Qing Zhang, Mei Zhou, Yinghui Zhao, Xinwei Li
In coarse registration, point clouds produced by each scan are projected onto a spherical surface to generate a series of two-dimensional (2D) images, which are used to estimate the initial positions of multiple scans.
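A minimal sketch of such a spherical projection follows, mapping each point to azimuth-elevation pixel coordinates and storing its range as intensity; the image resolution and the use of range as the pixel value are assumptions, since the abstract does not specify them.

```python
import numpy as np

def spherical_projection(points, width=360, height=180):
    """Project a 3D point cloud (N, 3), centered on the scanner origin,
    onto a 2D panoramic image for coarse registration. Bin sizes and the
    choice of range as pixel intensity are assumptions."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    az = np.arctan2(y, x)                                      # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1))    # elevation
    u = ((az + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = ((el + np.pi / 2) / np.pi * (height - 1)).astype(int)
    img = np.zeros((height, width), dtype=np.float32)
    img[v, u] = r        # last write wins where points share a pixel
    return img

img = spherical_projection(np.random.randn(10000, 3))
```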
no code implementations • 8 Jul 2019 • Felix Weninger, Jesús Andrés-Ferrer, Xinwei Li, Puming Zhan
Sequence-to-sequence (seq2seq) based ASR systems have shown state-of-the-art performance while offering clear advantages in simplicity.
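For readers unfamiliar with the architecture, below is a minimal, self-contained PyTorch sketch of a seq2seq ASR model with attention; it is a generic illustration, not the system described in the paper, and all dimensions and layer choices are placeholders.

```python
import torch
import torch.nn as nn

class Seq2SeqASR(nn.Module):
    """Generic encoder-decoder ASR sketch: an LSTM encoder over acoustic
    frames, an LSTM decoder over output tokens, and cross-attention."""
    def __init__(self, feat_dim=80, vocab=1000, hidden=256):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, 4, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, feats, tokens):
        enc, _ = self.encoder(feats)                # encode acoustic frames
        dec, _ = self.decoder(self.embed(tokens))   # decoder states per token
        ctx, _ = self.attn(dec, enc, enc)           # attend to encoder output
        return self.out(ctx + dec)                  # token logits per step

model = Seq2SeqASR()
# Batch of 2 utterances: 100 frames of 80-dim features, 20 target tokens.
logits = model(torch.randn(2, 100, 80), torch.randint(0, 1000, (2, 20)))
```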