Search Results for author: Lingyong Yan

Found 13 papers, 7 papers with code

Improving the Robustness of Large Language Models via Consistency Alignment

no code implementations21 Mar 2024 Yukun Zhao, Lingyong Yan, Weiwei Sun, Guoliang Xing, Shuaiqiang Wang, Chong Meng, Zhicong Cheng, Zhaochun Ren, Dawei Yin

The training process is accomplished via self-rewards inferred from the model trained in the first stage, without reference to external human preference resources.

Instruction Following · Response Generation

Learning to Use Tools via Cooperative and Interactive Agents

no code implementations5 Mar 2024 Zhengliang Shi, Shen Gao, Xiuyi Chen, Lingyong Yan, Haibo Shi, Dawei Yin, Zhumin Chen, Pengjie Ren, Suzan Verberne, Zhaochun Ren

Tool learning empowers large language models (LLMs) as agents to use external tools to extend their capability.

KnowTuning: Knowledge-aware Fine-tuning for Large Language Models

1 code implementation17 Feb 2024 Yougang Lyu, Lingyong Yan, Shuaiqiang Wang, Haibo Shi, Dawei Yin, Pengjie Ren, Zhumin Chen, Maarten de Rijke, Zhaochun Ren

To address these problems, we propose a knowledge-aware fine-tuning (KnowTuning) method to improve fine-grained and coarse-grained knowledge awareness of LLMs.

Question Answering

Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers

1 code implementation2 Nov 2023 Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yin, Zhaochun Ren

Furthermore, our approach surpasses the performance of existing supervised methods like monoT5 and is on par with the state-of-the-art zero-shot methods.

Prompt Engineering

Knowing What LLMs DO NOT Know: A Simple Yet Effective Self-Detection Method

no code implementations27 Oct 2023 Yukun Zhao, Lingyong Yan, Weiwei Sun, Guoliang Xing, Chong Meng, Shuaiqiang Wang, Zhicong Cheng, Zhaochun Ren, Dawei Yin

In this paper, we propose a novel self-detection method to identify which questions an LLM does not know, i.e., those prone to producing nonfactual results.

Element Intervention for Open Relation Extraction

no code implementations ACL 2021 Fangchao Liu, Lingyong Yan, Hongyu Lin, Xianpei Han, Le Sun

Open relation extraction aims to cluster relation instances referring to the same underlying relation, which is a critical step for general relation extraction.

Relation Extraction

Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases

1 code implementation ACL 2021 Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun, Lingyong Yan, Meng Liao, Tong Xue, Jin Xu

Previous literature shows that pre-trained masked language models (MLMs) such as BERT can achieve competitive factual knowledge extraction performance on some datasets, indicating that MLMs can potentially serve as a reliable knowledge source.

From Bag of Sentences to Document: Distantly Supervised Relation Extraction via Machine Reading Comprehension

1 code implementation8 Dec 2020 Lingyong Yan, Xianpei Han, Le Sun, Fangchao Liu, Ning Bian

By re-organizing all sentences about an entity as a document and extracting relations via querying the document with relation-specific questions, the document-based DS paradigm can simultaneously encode and exploit all sentence-level, inter-sentence-level, and entity-level evidence.

Denoising · Machine Reading Comprehension +3

Global Bootstrapping Neural Network for Entity Set Expansion

1 code implementation Findings of ACL 2020 Lingyong Yan, Xianpei Han, Ben He, Le Sun

Bootstrapping for entity set expansion (ESE), which expands new entities using only a few seed entities as supervision, has been studied for a long time.

Learning to Bootstrap for Entity Set Expansion

no code implementations IJCNLP 2019 Lingyong Yan, Xianpei Han, Le Sun, Ben He

Bootstrapping for Entity Set Expansion (ESE) aims at iteratively acquiring new instances of a specific target category.
