1 code implementation • 10 Oct 2024 • Bofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei LI, Chenghao Ma, Liang Chen, Runxin Xu, Zhengyang Tang, Benyou Wang, Daoguang Zan, Shanghaoran Quan, Ge Zhang, Lei Sha, Yichang Zhang, Xuancheng Ren, Tianyu Liu, Baobao Chang
However, existing benchmarks like GSM8K or MATH are now being solved with high accuracy (e.g., OpenAI o1 achieves 94.8% on the MATH dataset), indicating their inadequacy for truly challenging these models.
1 code implementation • 4 Sep 2024 • Bofei Gao, Feifan Song, Yibo Miao, Zefan Cai, Zhe Yang, Liang Chen, Helan Hu, Runxin Xu, Qingxiu Dong, Ce Zheng, Shanghaoran Quan, Wen Xiao, Ge Zhang, Daoguang Zan, Keming Lu, Bowen Yu, Dayiheng Liu, Zeyu Cui, Jian Yang, Lei Sha, Houfeng Wang, Zhifang Sui, Peiyi Wang, Tianyu Liu, Baobao Chang
Finally, based on our unified perspective, we explore the challenges and future research directions for aligning large language models with human preferences.
1 code implementation • 20 May 2024 • Yuanwu Xu, Feifan Song, Haofeng Zhang
To address this issue, we propose a network that learns a Spatial Similarity Distribution (SSD) for few-shot object counting. It preserves the spatial structure of exemplar features and computes a 4D similarity pyramid point-to-point between the query features and exemplar features, capturing the complete distribution information for each point in the 4D similarity space.
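The point-to-point similarity underlying SSD can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and feature shapes are assumptions, and the actual model builds this volume over a multi-scale feature pyramid rather than a single level.

```python
import numpy as np

def similarity_4d(query_feat, exemplar_feat):
    """Point-to-point cosine similarity between every query location
    and every exemplar location, yielding a 4D similarity volume.

    query_feat:    (H, W, C) feature map of the query image
    exemplar_feat: (h, w, C) feature map of one exemplar crop
    returns:       (H, W, h, w) similarity volume
    """
    # L2-normalize channel vectors so dot products become cosine similarities
    q = query_feat / (np.linalg.norm(query_feat, axis=-1, keepdims=True) + 1e-8)
    e = exemplar_feat / (np.linalg.norm(exemplar_feat, axis=-1, keepdims=True) + 1e-8)
    # Contract over the channel dimension: (H, W, C) x (h, w, C) -> (H, W, h, w)
    return np.einsum("ijc,klc->ijkl", q, e)

# Toy example: 8x8 query map, 3x3 exemplar map, 16 channels
rng = np.random.default_rng(0)
sim = similarity_4d(rng.standard_normal((8, 8, 16)),
                    rng.standard_normal((3, 3, 16)))
print(sim.shape)  # (8, 8, 3, 3)
```

Because the spatial axes of both the query and the exemplar are kept, each query location retains a full similarity map over the exemplar instead of a single pooled score.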
Ranked #5 on Object Counting on FSC147
1 code implementation • 17 Mar 2024 • Feifan Song, Bowen Yu, Hao Lang, Haiyang Yu, Fei Huang, Houfeng Wang, Yongbin Li
Additionally, the concept of diversity for prompts can be more complex than that of responses, which is typically quantified by a single number.
1 code implementation • 14 Feb 2024 • Feifan Song, Yuxuan Fan, Xin Zhang, Peiyi Wang, Houfeng Wang
Large Language Models (LLMs) rely on Human Preference Alignment (HPA) to ensure the generation of safe content.
no code implementations • 5 Sep 2023 • Peiyi Wang, Lei LI, Liang Chen, Feifan Song, Binghuai Lin, Yunbo Cao, Tianyu Liu, Zhifang Sui
To address this problem, we introduce an Alignment Fine-Tuning (AFT) paradigm, which involves three steps: 1) fine-tuning LLMs with CoT training data; 2) generating multiple CoT responses for each question and categorizing them into positive and negative ones based on whether they reach the correct answer; 3) calibrating the scores of positive and negative responses given by LLMs with a novel constraint alignment loss.
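Steps 2 and 3 of AFT can be sketched as below. This is an illustrative skeleton under stated assumptions: the function names are invented, the answer-matching rule is a placeholder, and the hinge-style loss only stands in for the paper's actual constraint alignment loss.

```python
def label_cot_responses(responses, extract_answer, gold_answer):
    """Step 2 (sketch): split sampled chain-of-thought responses into
    positives/negatives by whether their final answer matches the gold one."""
    positives, negatives = [], []
    for r in responses:
        (positives if extract_answer(r) == gold_answer else negatives).append(r)
    return positives, negatives

def constraint_alignment_loss(pos_scores, neg_scores, margin=1.0):
    """Step 3 (sketch): penalize any positive response whose model score
    does not exceed a negative response's score by at least `margin`."""
    losses = [max(0.0, margin - p + n)
              for p in pos_scores for n in neg_scores]
    return sum(losses) / len(losses)

pos, neg = label_cot_responses(["... answer: 7", "... answer: 9"],
                               lambda r: r.split("answer: ")[-1], "7")
print(pos, neg)  # ['... answer: 7'] ['... answer: 9']
```

The key idea the sketch preserves is that calibration operates on score *gaps* between positive and negative chains rather than on each chain in isolation.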
1 code implementation • 30 Jun 2023 • Feifan Song, Bowen Yu, Minghao Li, Haiyang Yu, Fei Huang, Yongbin Li, Houfeng Wang
In this manner, PRO effectively transforms human alignment into aligning the probability ranking of n responses generated by the LLM with the preference ranking of humans towards these responses.
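A list-wise ranking objective of the kind PRO describes can be sketched as follows. This is a simplified stand-in, assuming each response is summarized by one model score; the real method operates on token-level likelihoods from the LLM.

```python
import math

def listwise_ranking_loss(scores):
    """Sketch of a list-wise preference-ranking loss. `scores` holds the
    model's (e.g. length-normalized) log-likelihood of each response,
    ordered from most to least human-preferred. At each step, the current
    best response is contrasted against all remaining ones via a softmax."""
    loss = 0.0
    for k in range(len(scores) - 1):
        denom = sum(math.exp(s) for s in scores[k:])
        loss -= math.log(math.exp(scores[k]) / denom)
    return loss

# Scores that agree with the human ranking incur a lower loss
# than the same scores in reversed order.
print(listwise_ranking_loss([2.0, 0.0, -2.0]) <
      listwise_ranking_loss([-2.0, 0.0, 2.0]))  # True
```

Minimizing this objective pushes the model to assign higher likelihood to responses that humans rank higher, over the whole list of n responses rather than a single preferred/rejected pair.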
1 code implementation • 14 Apr 2023 • Minghao Li, Yingxiu Zhao, Bowen Yu, Feifan Song, Hangyu Li, Haiyang Yu, Zhoujun Li, Fei Huang, Yongbin Li
(2) How can we enhance LLMs' ability to utilize tools?
1 code implementation • 7 Oct 2022 • Feifan Song, Lianzhe Huang, Houfeng Wang
Multi-intent Spoken Language Understanding has great potential for widespread application.
no code implementations • 7 Apr 2022 • Wenqiang Lei, Yao Zhang, Feifan Song, Hongru Liang, Jiaxin Mao, Jiancheng Lv, Zhenglu Yang, Tat-Seng Chua
To this end, we advance the study of proactive dialogue policy toward a more natural and challenging setting, i.e., interacting dynamically with users.