no code implementations • 1 Jul 2024 • Tzu-Han Lin, Chen-An Li, Hung-Yi Lee, Yun-Nung Chen
Reinforcement learning from human feedback (RLHF) is a popular strategy for aligning large language models (LLMs) with desired behaviors.
1 code implementation • 6 Mar 2024 • Chao-Wei Huang, Chen-An Li, Tsu-Yuan Hsu, Chen-Yu Hsu, Yun-Nung Chen
Dense retrieval methods have demonstrated promising performance in multilingual information retrieval, where queries and documents can be in different languages.
no code implementations • 6 Jan 2024 • Chen-An Li, Hung-Yi Lee
Recent advances in Large Language Models (LLMs) have exhibited remarkable proficiency across various tasks.
1 code implementation • 13 Sep 2023 • Chao-Wei Huang, Chen-Yu Hsu, Tsu-Yuan Hsu, Chen-An Li, Yun-Nung Chen
Conversational search provides a natural interface for information retrieval (IR).
no code implementations • 12 May 2023 • Yu-Kuan Fu, Liang-Hsuan Tseng, Jiatong Shi, Chen-An Li, Tsu-Yuan Hsu, Shinji Watanabe, Hung-Yi Lee
We use fully unpaired data to train our unsupervised systems and evaluate our results on CoVoST 2 and CVSS.
no code implementations • 29 Nov 2022 • Tsu-Yuan Hsu, Chen-An Li, Tung-Yu Wu, Hung-Yi Lee
In the first stage, SSL is conducted on the large-scale unlabeled corpus to pre-train a small speech model.
1 code implementation • 13 Oct 2022 • Guan-Ting Lin, Chi-Luen Feng, Wei-Ping Huang, Yuan Tseng, Tzu-Han Lin, Chen-An Li, Hung-Yi Lee, Nigel G. Ward
We find that 13 of the 15 SSL models outperformed the baseline on all the prosody-related tasks.
1 code implementation • 26 Sep 2022 • Tung-Yu Wu, Chen-An Li, Tzu-Han Lin, Tsu-Yuan Hsu, Hung-Yi Lee
Extensive experiments on speech and non-speech audio datasets are conducted to investigate the representation abilities of our ensemble method and its individual constituent models.