no code implementations • 21 Mar 2024 • Yukun Zhao, Lingyong Yan, Weiwei Sun, Guoliang Xing, Shuaiqiang Wang, Chong Meng, Zhicong Cheng, Zhaochun Ren, Dawei Yin
The training process is driven by self-rewards inferred from the model trained in the first stage, without relying on external human preference resources.
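The self-reward idea above can be sketched as follows. This is a toy illustration under assumptions, not the paper's procedure: the `self_reward` heuristic stands in for the stage-one model scoring its own candidate responses to build preference pairs, with no human preference data involved.

```python
# Toy sketch: build preference pairs from a model's own scores
# (self-rewards). The scoring function is an illustrative stand-in
# for the trained model's own judgment.

def self_reward(response: str) -> float:
    # Hypothetical heuristic reward: lexical diversity of the response.
    words = response.split()
    return len(set(words)) / max(len(words), 1)

def build_preference_pair(prompt: str, candidates: list) -> dict:
    """Rank candidates by self-reward and pair the best against the worst."""
    ranked = sorted(candidates, key=self_reward, reverse=True)
    return {"prompt": prompt, "chosen": ranked[0], "rejected": ranked[-1]}

pair = build_preference_pair(
    "Explain photosynthesis.",
    ["Plants make food using light water and carbon dioxide.",
     "Plants plants plants plants plants."],
)
print(pair["chosen"])
```

A pair like this could then feed a standard preference-optimization objective.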
no code implementations • 5 Mar 2024 • Zhengliang Shi, Shen Gao, Xiuyi Chen, Lingyong Yan, Haibo Shi, Dawei Yin, Zhumin Chen, Pengjie Ren, Suzan Verberne, Zhaochun Ren
Tool learning empowers large language models (LLMs) as agents to use external tools to extend their capability.
1 code implementation • 17 Feb 2024 • Yougang Lyu, Lingyong Yan, Shuaiqiang Wang, Haibo Shi, Dawei Yin, Pengjie Ren, Zhumin Chen, Maarten de Rijke, Zhaochun Ren
To address these problems, we propose a knowledge-aware fine-tuning (KnowTuning) method to improve fine-grained and coarse-grained knowledge awareness of LLMs.
1 code implementation • 2 Nov 2023 • Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren
Furthermore, our approach surpasses the performance of existing supervised methods like monoT5 and is on par with the state-of-the-art zero-shot methods.
no code implementations • 27 Oct 2023 • Yukun Zhao, Lingyong Yan, Weiwei Sun, Guoliang Xing, Chong Meng, Shuaiqiang Wang, Zhicong Cheng, Zhaochun Ren, Dawei Yin
In this paper, we propose a novel self-detection method to identify the questions that an LLM does not know, which are prone to yield nonfactual answers.
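One plausible instantiation of self-detection (a hedged sketch, not necessarily the paper's exact procedure) is to rephrase a question several times and flag it when the model's answers disagree; `toy_llm`, `detect_unknown`, and the threshold are all illustrative.

```python
from collections import Counter

def detect_unknown(question, rephrasings, answer_fn, threshold=0.5):
    """Flag a question as 'unknown to the model' when answers to its
    rephrasings disagree. answer_fn stands in for the LLM."""
    answers = [answer_fn(q) for q in [question, *rephrasings]]
    top_count = Counter(answers).most_common(1)[0][1]
    agreement = top_count / len(answers)
    return agreement < threshold  # True => likely nonfactual territory

# Toy stand-in model: consistent on one topic, inconsistent elsewhere.
def toy_llm(q):
    return "Paris" if "France" in q else q[-5:]

known = detect_unknown("Capital of France?",
                       ["France's capital city?",
                        "What is the capital of France?"], toy_llm)
unknown = detect_unknown("Who invented zork?",
                         ["The inventor of zork was?",
                          "Name the person who made zork."], toy_llm)
print(known, unknown)
```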
no code implementations • 25 Oct 2023 • Yukun Zhao, Lingyong Yan, Weiwei Sun, Chong Meng, Shuaiqiang Wang, Zhicong Cheng, Zhaochun Ren, Dawei Yin
Dialogue assessment plays a critical role in the development of open-domain dialogue systems.
1 code implementation • 19 Apr 2023 • Weiwei Sun, Lingyong Yan, Xinyu Ma, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren
In this paper, we first investigate generative LLMs such as ChatGPT and GPT-4 for relevance ranking in IR.
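Listwise relevance ranking with an LLM works by numbering candidate passages, asking the model for a permutation, and parsing its answer. A minimal sketch (the prompt wording is illustrative, not the paper's exact template):

```python
import re

def build_ranking_prompt(query, passages):
    """Listwise prompt asking an LLM to order passages by relevance."""
    lines = [f"[{i + 1}] {p}" for i, p in enumerate(passages)]
    return (f"Rank the passages below by relevance to the query "
            f"'{query}'. Answer with identifiers only, e.g. [2] > [1].\n"
            + "\n".join(lines))

def parse_permutation(llm_output, n):
    """Turn '[2] > [3] > [1]' into 0-based indices, dropping duplicates
    and appending any identifiers the model omitted."""
    order = [int(m) - 1 for m in re.findall(r"\[(\d+)\]", llm_output)]
    seen = set()
    perm = []
    for i in order:
        if 0 <= i < n and i not in seen:
            seen.add(i)
            perm.append(i)
    return perm + [i for i in range(n) if i not in seen]

print(parse_permutation("[2] > [3] > [1]", 4))  # -> [1, 2, 0, 3]
```

Parsing defensively matters in practice, since the model may repeat or omit identifiers.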
1 code implementation • EMNLP 2021 • Lingyong Yan, Xianpei Han, Le Sun
Bootstrapping has become the mainstream method for entity set expansion.
no code implementations • ACL 2021 • Fangchao Liu, Lingyong Yan, Hongyu Lin, Xianpei Han, Le Sun
Open relation extraction aims to cluster relation instances referring to the same underlying relation, which is a critical step for general relation extraction.
1 code implementation • ACL 2021 • Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun, Lingyong Yan, Meng Liao, Tong Xue, Jin Xu
Previous literature shows that pre-trained masked language models (MLMs) such as BERT can achieve competitive factual knowledge extraction performance on some datasets, indicating that MLMs can potentially be a reliable knowledge source.
1 code implementation • 8 Dec 2020 • Lingyong Yan, Xianpei Han, Le Sun, Fangchao Liu, Ning Bian
By re-organizing all sentences about an entity as a document and extracting relations via querying the document with relation-specific questions, the document-based DS paradigm can simultaneously encode and exploit all sentence-level, inter-sentence-level, and entity-level evidence.
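The re-organization step described above can be sketched as follows; the corpus, entities, and question templates are hypothetical examples, and a real system would run a reading-comprehension model over each pseudo-document rather than just printing the questions.

```python
from collections import defaultdict

# Sketch: gather every sentence mentioning an entity into one
# pseudo-document, then pose relation-specific questions against it.

SENTENCES = [
    ("Ada Lovelace", "Ada Lovelace was born in London."),
    ("Ada Lovelace", "Ada Lovelace worked with Charles Babbage."),
    ("Alan Turing", "Alan Turing was born in Maida Vale."),
]

QUESTION_TEMPLATES = {
    "place_of_birth": "Where was {entity} born?",
    "collaborator": "Who did {entity} work with?",
}

def build_entity_documents(sentences):
    """Group sentence-level evidence into one document per entity."""
    docs = defaultdict(list)
    for entity, sent in sentences:
        docs[entity].append(sent)
    return {e: " ".join(s) for e, s in docs.items()}

docs = build_entity_documents(SENTENCES)
for relation, template in QUESTION_TEMPLATES.items():
    question = template.format(entity="Ada Lovelace")
    # A QA model would answer `question` against docs["Ada Lovelace"] here.
    print(relation, "->", question)
```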
Ranked #1 on Relationship Extraction (Distant Supervised) on NYT
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Lingyong Yan, Xianpei Han, Ben He, Le Sun
Bootstrapping for entity set expansion (ESE), which expands the entity set using only a few seed entities as supervision, has been studied for a long time.
no code implementations • IJCNLP 2019 • Lingyong Yan, Xianpei Han, Le Sun, Ben He
Bootstrapping for Entity Set Expansion (ESE) aims at iteratively acquiring new instances of a specific target category.
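The iterative acquisition loop can be sketched in a few lines; the corpus, the context-matching rule, and the lack of any scoring or filtering are toy simplifications (real bootstrapping must also guard against semantic drift).

```python
# Minimal bootstrapping loop for entity set expansion: start from seed
# entities, mine the contexts (patterns) they occur in, then promote
# new entities that share those contexts. Corpus is a toy example.

CORPUS = [
    "cities such as Paris attract tourists",
    "cities such as Berlin attract tourists",
    "cities such as Tokyo attract tourists",
    "fruits such as apples are healthy",
]

def contexts_of(entity, corpus):
    """Patterns in which the entity appears, with the slot blanked out."""
    return {s.replace(entity, "_") for s in corpus if entity in s}

def bootstrap(seeds, corpus, iterations=2):
    expanded = set(seeds)
    for _ in range(iterations):
        patterns = set().union(*(contexts_of(e, corpus) for e in expanded))
        for s in corpus:
            for word in s.split():
                if word not in expanded and s.replace(word, "_") in patterns:
                    expanded.add(word)
    return expanded

print(sorted(bootstrap({"Paris"}, CORPUS)))  # -> ['Berlin', 'Paris', 'Tokyo']
```

Note that "apples" is never promoted because its pattern ("fruits such as _ ...") never matches a seed context.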