1 code implementation • 17 Mar 2024 • Feifan Song, Bowen Yu, Hao Lang, Haiyang Yu, Fei Huang, Houfeng Wang, Yongbin Li
Additionally, the notion of diversity for prompts can be more complex than for responses, whose diversity is typically quantified by a single scalar.
1 code implementation • 14 Feb 2024 • Feifan Song, Yuxuan Fan, Xin Zhang, Peiyi Wang, Houfeng Wang
Large Language Models (LLMs) rely on Human Preference Alignment (HPA) to ensure the generation of safe content.
no code implementations • 5 Sep 2023 • Peiyi Wang, Lei LI, Liang Chen, Feifan Song, Binghuai Lin, Yunbo Cao, Tianyu Liu, Zhifang Sui
To address this problem, we introduce an Alignment Fine-Tuning (AFT) paradigm, which involves three steps: 1) fine-tuning LLMs with COT training data; 2) generating multiple COT responses for each question, and categorizing them into positive and negative ones based on whether they achieve the correct answer; 3) calibrating the scores of positive and negative responses given by LLMs with a novel constraint alignment loss.
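The calibration in step 3 can be illustrated with a simple pairwise sketch: push the model's score for every positive (correct-answer) CoT above every negative one. The function name, hinge form, and margin below are assumptions for illustration; the paper's actual constraint alignment loss may differ.

```python
def alignment_loss(pos_scores, neg_scores, margin=1.0):
    """Illustrative margin-based calibration in the spirit of AFT step 3.

    pos_scores: model scores (e.g. sequence log-probs) of correct CoT responses.
    neg_scores: scores of incorrect CoT responses.
    Penalizes every positive/negative pair whose score gap is below `margin`.
    """
    loss, pairs = 0.0, 0
    for p in pos_scores:
        for n in neg_scores:
            loss += max(0.0, margin - (p - n))  # hinge on the score gap
            pairs += 1
    return loss / pairs
```

With well-separated scores the loss is zero; overlapping positive and negative scores produce a positive penalty that a fine-tuning step would reduce.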
1 code implementation • 30 Jun 2023 • Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang
In this manner, PRO effectively transforms human alignment into aligning the probability ranking of n responses generated by LLM with the preference ranking of humans towards these responses.
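A list-wise objective in this spirit can be sketched as follows: given n responses ordered from most to least preferred by humans, each response's model score should dominate a softmax over itself and everything ranked below it. This is a minimal sketch under that assumption; the function name and the exact normalization are illustrative, not the paper's verbatim formulation.

```python
import math

def ranking_loss(scores):
    """Illustrative list-wise ranking objective in the spirit of PRO.

    scores: model scores (e.g. length-normalized log-probs) of n responses,
    already ordered from most to least human-preferred.
    For each rank k, the k-th response should win a softmax over the
    responses ranked at k or below.
    """
    n = len(scores)
    loss = 0.0
    for k in range(n - 1):
        denom = sum(math.exp(s) for s in scores[k:])
        loss -= math.log(math.exp(scores[k]) / denom)
    return loss / (n - 1)
```

When the model's score ordering matches the human preference ordering, the loss is lower than when the orderings disagree, so minimizing it pulls the probability ranking toward the preference ranking.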
no code implementations • 14 Apr 2023 • Minghao Li, Yingxiu Zhao, Bowen Yu, Feifan Song, Hangyu Li, Haiyang Yu, Zhoujun Li, Fei Huang, Yongbin Li
(2) How can we enhance LLMs' ability to utilize tools?
1 code implementation • 7 Oct 2022 • Feifan Song, Lianzhe Huang, Houfeng Wang
Multi-intent Spoken Language Understanding has great potential for widespread implementation.
no code implementations • 7 Apr 2022 • Wenqiang Lei, Yao Zhang, Feifan Song, Hongru Liang, Jiaxin Mao, Jiancheng Lv, Zhenglu Yang, Tat-Seng Chua
To this end, we advance the study of proactive dialogue policies to a more natural and challenging setting, i.e., interacting dynamically with users.