1 code implementation • EMNLP 2021 • Yuanmeng Yan, Rumei Li, Sirui Wang, Hongzhi Zhang, Zan Daoguang, Fuzheng Zhang, Wei Wu, Weiran Xu
The key challenge of question answering over knowledge bases (KBQA) is the inconsistency between the natural language questions and the reasoning paths in the knowledge base (KB).
1 code implementation • 30 Dec 2024 • Jianfei Zhang, Jun Bai, Bei Li, Yanmeng Wang, Rumei Li, Chenghua Lin, Wenge Rong
Aligning Large Language Models (LLMs) with general human preferences has proven crucial to improving the quality of interaction between LLMs and humans.
1 code implementation • 5 Aug 2024 • Muxi Diao, Rumei Li, Shiyang Liu, Guogang Liao, Jingang Wang, Xunliang Cai, Weiran Xu
As large language models (LLMs) continue to advance in capability and influence, ensuring their security and preventing harmful outputs has become crucial.
1 code implementation • 28 Aug 2023 • Guanting Dong, Rumei Li, Sirui Wang, Yupeng Zhang, Yunsen Xian, Weiran Xu
Knowledge Base Question Answering (KBQA) aims to answer natural language questions with factual information such as entities and relations in KBs.
Ranked #3 on Knowledge Base Question Answering on WebQuestionsSP
no code implementations • 16 Oct 2022 • Jian Song, Di Liang, Rumei Li, Yuntao Li, Sirui Wang, Minlong Peng, Wei Wu, Yongxin Yu
Transformer-based pre-trained models such as BERT have driven great progress in Semantic Sentence Matching.
1 code implementation • 8 Mar 2022 • LiWen Wang, Rumei Li, Yang Yan, Yuanmeng Yan, Sirui Wang, Wei Wu, Weiran Xu
Recently, prompt-based methods have achieved strong performance in few-shot learning scenarios by bridging the gap between language model pre-training and fine-tuning for downstream tasks.
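As a rough illustration of the general idea behind prompt-based methods (a minimal sketch, not this paper's specific technique), a downstream classification task can be recast as a cloze prompt filled in by a masked language model, so inference reuses the pre-training objective directly; the model name and label words below are illustrative assumptions.

```python
# Minimal sketch of cloze-style prompting with a masked language model.
# "bert-base-uncased" and the label words "good"/"bad" are illustrative choices,
# not taken from the paper listed above.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Sentiment classification recast as a cloze question: the token predicted at
# [MASK] serves as the class label.
prompt = "The movie was absolutely wonderful. Overall it was [MASK]."
for candidate in fill_mask(prompt, targets=["good", "bad"]):
    print(candidate["token_str"], round(candidate["score"], 4))
```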
1 code implementation • ACL 2021 • Yuanmeng Yan, Rumei Li, Sirui Wang, Fuzheng Zhang, Wei Wu, Weiran Xu
Learning high-quality sentence representations benefits a wide range of natural language processing tasks.