Search Results for author: Hongyu Lin

Found 73 papers, 43 papers with code

CATAMARAN: A Cross-lingual Long Text Abstractive Summarization Dataset

no code implementations LREC 2022 Zheng Chen, Hongyu Lin

Cross-lingual summarization, which produces the summary in one language from a given source document in another language, could be extremely helpful for humans to obtain information across the world.

Abstractive Text Summarization Cross-Lingual Abstractive Summarization

StructRAG: Boosting Knowledge Intensive Reasoning of LLMs via Inference-time Hybrid Information Structurization

no code implementations11 Oct 2024 Zhuoqun Li, Xuanang Chen, Haiyang Yu, Hongyu Lin, Yaojie Lu, Qiaoyu Tang, Fei Huang, Xianpei Han, Le Sun, Yongbin Li

Retrieval-augmented generation (RAG) is a key means to effectively enhance large language models (LLMs) in many knowledge-based tasks.
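
For readers unfamiliar with the retrieve-then-read pattern this abstract refers to, below is a minimal, generic RAG sketch; it is not the StructRAG method, and the toy corpus, the bag-of-words "embedding", and the `generate` stub are illustrative assumptions.

```python
# Minimal retrieve-then-read sketch (illustrative only, not the StructRAG method).
from collections import Counter
import math

CORPUS = [
    "RoPE uses rotary position embeddings with a tunable base.",
    "Entity resolution links records that refer to the same real-world entity.",
    "Retrieval-augmented generation grounds LLM answers in retrieved documents.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a dense encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(CORPUS, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Stand-in for an LLM call; replace with any chat/completion API."""
    return f"[LLM answer conditioned on a prompt of {len(prompt)} chars]"

question = "What does retrieval-augmented generation do?"
context = "\n".join(retrieve(question))
print(generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"))
```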

Multi-Facet Counterfactual Learning for Content Quality Evaluation

no code implementations10 Oct 2024 Jiasheng Zheng, Hongyu Lin, Boxi Cao, Meng Liao, Yaojie Lu, Xianpei Han, Le Sun

Evaluating the quality of documents is essential for filtering valuable content from the current massive amount of information.

Contrastive Learning counterfactual

Seg2Act: Global Context-aware Action Generation for Document Logical Structuring

1 code implementation9 Oct 2024 Zichao Li, Shaojie He, Meng Liao, Xuanang Chen, Yaojie Lu, Hongyu Lin, Yanxiong Lu, Xianpei Han, Le Sun

Document logical structuring aims to extract the underlying hierarchical structure of documents, which is crucial for document intelligence.

Action Generation Transfer Learning

Rethinking Reward Model Evaluation: Are We Barking up the Wrong Tree?

no code implementations8 Oct 2024 Xueru Wen, Jie Lou, Yaojie Lu, Hongyu Lin, Xing Yu, Xinyu Lu, Ben He, Xianpei Han, Debing Zhang, Le Sun

Although this method is straightforward and widely adopted, the relationship between RM accuracy and downstream policy performance remains under-explored.

CRUXEval-X: A Benchmark for Multilingual Code Reasoning, Understanding and Execution

no code implementations23 Aug 2024 Ruiyang Xu, Jialun Cao, Yaojie Lu, Hongyu Lin, Xianpei Han, Ben He, Shing-Chi Cheung, Le Sun

However, there is an unignorable programming language bias in existing code benchmarks -- over 95% code generation benchmarks are dominated by Python, leaving the LLMs' capabilities in other programming languages such as Java and C/C++ unknown.

Code Generation HumanEval

DOMAINEVAL: An Auto-Constructed Benchmark for Multi-Domain Code Generation

no code implementations23 Aug 2024 Qiming Zhu, Jialun Cao, Yaojie Lu, Hongyu Lin, Xianpei Han, Le Sun, Shing-Chi Cheung

We notice that LLMs are generally good at computation tasks while falling short on cryptography and system coding tasks.

Code Generation HumanEval

REInstruct: Building Instruction Data from Unlabeled Corpus

1 code implementation20 Aug 2024 Shu Chen, Xinyan Guan, Yaojie Lu, Hongyu Lin, Xianpei Han, Le Sun

Manually annotating instruction data for large language models is difficult, costly, and hard to scale.

Beyond Correctness: Benchmarking Multi-dimensional Code Generation for Large Language Models

1 code implementation16 Jul 2024 Jiasheng Zheng, Boxi Cao, Zhengzhao Ma, Ruotong Pan, Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun

In recent years, researchers have proposed numerous benchmarks to evaluate the impressive coding capabilities of large language models (LLMs).

Benchmarking Code Generation

On-Policy Fine-grained Knowledge Feedback for Hallucination Mitigation

no code implementations18 Jun 2024 Xueru Wen, Xinyu Lu, Xinyan Guan, Yaojie Lu, Hongyu Lin, Ben He, Xianpei Han, Le Sun

Previous learning-based methods focus on detecting knowledge boundaries and finetuning models with instance-level feedback, but they suffer from inaccurate signals due to off-policy data sampling and coarse-grained feedback.

Hallucination Response Generation

Towards Scalable Automated Alignment of LLMs: A Survey

1 code implementation3 Jun 2024 Boxi Cao, Keming Lu, Xinyu Lu, Jiawei Chen, Mengjie Ren, Hao Xiang, Peilin Liu, Yaojie Lu, Ben He, Xianpei Han, Le Sun, Hongyu Lin, Bowen Yu

Alignment is the most critical step in building large language models (LLMs) that meet human needs.

Survey

Match, Compare, or Select? An Investigation of Large Language Models for Entity Matching

1 code implementation27 May 2024 Tianshu Wang, Xiaoyang Chen, Hongyu Lin, Xuanang Chen, Xianpei Han, Hao Wang, Zhenyu Zeng, Le Sun

Based on our findings, we further design a compound entity matching framework (ComEM) that leverages the composition of multiple strategies and LLMs.

Entity Resolution

Base of RoPE Bounds Context Length

no code implementations23 May 2024 Xin Men, Mingyu Xu, Bingning Wang, Qingyu Zhang, Hongyu Lin, Xianpei Han, WeiPeng Chen

We revisit the role of RoPE in LLMs and propose a novel property of long-term decay; we derive that the base of RoPE bounds context length: there is an absolute lower bound on the base value required to obtain a certain context-length capability.

Position
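
As a rough illustration of why the RoPE base matters for context length, the sketch below computes the standard rotary frequencies and the longest resulting wavelength; it shows only the textbook relationship between base and wavelength, not the paper's actual bound or derivation.

```python
# Standard RoPE frequencies: theta_i = base ** (-2i / d).
# The longest wavelength (2*pi / smallest frequency) grows with the base, one
# intuition for why larger bases accompany longer contexts. Illustrative only.
import math

def rope_wavelengths(base: float, dim: int) -> list[float]:
    freqs = [base ** (-2 * i / dim) for i in range(dim // 2)]
    return [2 * math.pi / f for f in freqs]

for base in (10_000, 500_000, 1_000_000):
    longest = max(rope_wavelengths(base, dim=128))
    print(f"base={base:>9,}  longest wavelength ~ {longest:,.0f} positions")
```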

Towards Universal Dense Blocking for Entity Resolution

2 code implementations23 Apr 2024 Tianshu Wang, Hongyu Lin, Xianpei Han, Xiaoyang Chen, Boxi Cao, Le Sun

Blocking is a critical step in entity resolution, and the emergence of neural network-based representation models has led to the development of dense blocking as a promising approach for exploring deep semantics in blocking.

Blocking Contrastive Learning
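
A stripped-down sketch of the dense-blocking idea described above: embed each record, then propose candidate pairs from nearest neighbours in embedding space. The character n-gram "encoder" here is a toy stand-in for a learned dense encoder, not the paper's model.

```python
# Dense blocking sketch: embed records, then take nearest neighbours as candidates.
import numpy as np

RECORDS = [
    "apple iphone 13 128gb black",
    "iphone 13 apple 128 gb, black",
    "samsung galaxy s22 256gb",
    "galaxy s22 samsung 256 gb",
]

def encode(text: str, dim: int = 256) -> np.ndarray:
    """Hash character trigrams into a fixed-size vector (stand-in for a dense encoder)."""
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

emb = np.stack([encode(r) for r in RECORDS])
sims = emb @ emb.T                      # cosine similarities (vectors are normalised)
np.fill_diagonal(sims, -1.0)            # ignore self-matches

k = 1                                   # candidate neighbours per record
for i, row in enumerate(sims):
    for j in np.argsort(row)[::-1][:k]:
        print(f"candidate pair: {RECORDS[i]!r} <-> {RECORDS[j]!r}  (sim={row[j]:.2f})")
```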

Spiral of Silence: How is Large Language Model Killing Information Retrieval? -- A Case Study on Open Domain Question Answering

1 code implementation16 Apr 2024 Xiaoyang Chen, Ben He, Hongyu Lin, Xianpei Han, Tianshu Wang, Boxi Cao, Le Sun, Yingfei Sun

The practice of Retrieval-Augmented Generation (RAG), which integrates Large Language Models (LLMs) with retrieval systems, has become increasingly prevalent.

Information Retrieval Language Modelling +3

Not All Contexts Are Equal: Teaching LLMs Credibility-aware Generation

1 code implementation10 Apr 2024 Ruotong Pan, Boxi Cao, Hongyu Lin, Xianpei Han, Jia Zheng, Sirui Wang, Xunliang Cai, Le Sun

In this paper, we propose Credibility-aware Generation (CAG), a universally applicable framework designed to mitigate the impact of flawed information in RAG.

RAG Retrieval

Few-shot Named Entity Recognition via Superposition Concept Discrimination

1 code implementation25 Mar 2024 Jiawei Chen, Hongyu Lin, Xianpei Han, Yaojie Lu, Shanshan Jiang, Bin Dong, Le Sun

Then a superposition instance retriever is applied to retrieve corresponding instances of these superposition concepts from a large-scale text corpus.

Active Learning few-shot-ner +4

Meta-Cognitive Analysis: Evaluating Declarative and Procedural Knowledge in Datasets and Large Language Models

1 code implementation14 Mar 2024 Zhuoqun Li, Hongyu Lin, Yaojie Lu, Hao Xiang, Xianpei Han, Le Sun

Declarative knowledge and procedural knowledge are two key parts in meta-cognitive theory, and these two hold significant importance in pre-training and inference of LLMs.

Academically intelligent LLMs are not necessarily socially intelligent

1 code implementation11 Mar 2024 Ruoxi Xu, Hongyu Lin, Xianpei Han, Le Sun, Yingfei Sun

The academic intelligence of large language models (LLMs) has made remarkable progress in recent times, but their social intelligence performance remains unclear.

ShortGPT: Layers in Large Language Models are More Redundant Than You Expect

no code implementations6 Mar 2024 Xin Men, Mingyu Xu, Qingyu Zhang, Bingning Wang, Hongyu Lin, Yaojie Lu, Xianpei Han, WeiPeng Chen

As Large Language Models (LLMs) continue to advance in performance, their size has escalated significantly, with current LLMs containing billions or even trillions of parameters.

Quantization

SoFA: Shielded On-the-fly Alignment via Priority Rule Following

1 code implementation27 Feb 2024 Xinyu Lu, Bowen Yu, Yaojie Lu, Hongyu Lin, Haiyang Yu, Le Sun, Xianpei Han, Yongbin Li

The alignment problem in Large Language Models (LLMs) involves adapting them to the broad spectrum of human values.

Diversity

Executing Natural Language-Described Algorithms with Large Language Models: An Investigation

1 code implementation23 Feb 2024 Xin Zheng, Qiming Zhu, Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun

In this paper, we seek to examine the capacity of present-day LLMs to comprehend and execute algorithms outlined in natural language.

Natural Language Understanding

Self-Retrieval: Building an Information Retrieval System with One Large Language Model

no code implementations23 Feb 2024 Qiaoyu Tang, Jiawei Chen, Bowen Yu, Yaojie Lu, Cheng Fu, Haiyang Yu, Hongyu Lin, Fei Huang, Ben He, Xianpei Han, Le Sun, Yongbin Li

The rise of large language models (LLMs) has transformed the role of information retrieval (IR) systems in the way humans access information.

Information Retrieval Language Modelling +2

Rule or Story, Which is a Better Commonsense Expression for Talking with Large Language Models?

no code implementations22 Feb 2024 Ning Bian, Xianpei Han, Hongyu Lin, Yaojie Lu, Ben He, Le Sun

Building machines with commonsense has been a longstanding challenge in NLP due to the reporting bias of commonsense rules and the exposure bias of rule-based commonsense reasoning.

AI for social science and social science of AI: A Survey

no code implementations22 Jan 2024 Ruoxi Xu, Yingfei Sun, Mengjie Ren, Shiguang Guo, Ruotong Pan, Hongyu Lin, Le Sun, Xianpei Han

Recent advancements in artificial intelligence, particularly with the emergence of large language models (LLMs), have sparked a rethinking of artificial general intelligence possibilities.

DBCopilot: Scaling Natural Language Querying to Massive Databases

1 code implementation6 Dec 2023 Tianshu Wang, Hongyu Lin, Xianpei Han, Le Sun, Xiaoyang Chen, Hao Wang, Zhenyu Zeng

Text-to-SQL simplifies database interactions by enabling non-experts to convert their natural language (NL) questions into Structured Query Language (SQL) queries.

Navigate Question Generation +2
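
A minimal prompt-construction sketch for the generic schema-in-prompt text-to-SQL setting the abstract describes; the schema, question, and `generate_sql` stub are illustrative assumptions, and this is not DBCopilot's schema-routing approach.

```python
# Generic schema-in-prompt text-to-SQL sketch (illustrative, not DBCopilot's method).
SCHEMA = """\
TABLE orders(id INTEGER, customer TEXT, total REAL, created_at DATE)
TABLE customers(name TEXT, country TEXT)"""

def build_prompt(question: str) -> str:
    return (
        "Translate the question into a single SQL query.\n"
        f"Schema:\n{SCHEMA}\n\n"
        f"Question: {question}\nSQL:"
    )

def generate_sql(prompt: str) -> str:
    """Stand-in for an LLM call; replace with any completion API."""
    return "SELECT customer, SUM(total) FROM orders GROUP BY customer;"

print(generate_sql(build_prompt("What is the total spend per customer?")))
```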

Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-based Retrofitting

no code implementations22 Nov 2023 Xinyan Guan, Yanjiang Liu, Hongyu Lin, Yaojie Lu, Ben He, Xianpei Han, Le Sun

Incorporating factual knowledge from knowledge graphs is regarded as a promising approach for mitigating the hallucination of large language models (LLMs).

Hallucination Language Modelling +1

Toward Unified Controllable Text Generation via Regular Expression Instruction

1 code implementation19 Sep 2023 Xin Zheng, Hongyu Lin, Xianpei Han, Le Sun

Controllable text generation is a fundamental aspect of natural language generation, with numerous methods proposed for different constraint types.

In-Context Learning Text Generation
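
As a hedged illustration of using a regular expression as a constraint specification, the snippet below simply validates candidate outputs against a regex; the paper's method instead conditions generation on the expression, so this is a simplified stand-in rather than the proposed approach.

```python
# Validate candidate generations against a regex constraint (illustrative only).
import re

constraint = re.compile(r"^The capital of \w+ is \w+\.$")
candidates = [
    "The capital of France is Paris.",
    "Paris is the capital of France.",
]
for text in candidates:
    status = "satisfies" if constraint.fullmatch(text) else "violates"
    print(f"{status} constraint: {text}")
```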

Benchmarking Large Language Models in Retrieval-Augmented Generation

1 code implementation4 Sep 2023 Jiawei Chen, Hongyu Lin, Xianpei Han, Le Sun

In this paper, we systematically investigate the impact of Retrieval-Augmented Generation on large language models.

Benchmarking counterfactual +3

ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases

2 code implementations8 Jun 2023 Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Boxi Cao, Le Sun

Existing approaches to tool learning have either primarily relied on extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or utilized supervised learning to train limited scopes of tools on compact models.

Learning In-context Learning for Named Entity Recognition

2 code implementations18 May 2023 Jiawei Chen, Yaojie Lu, Hongyu Lin, Jie Lou, Wei Jia, Dai Dai, Hua Wu, Boxi Cao, Xianpei Han, Le Sun

Specifically, we model PLMs as a meta-function $\mathcal{\lambda_{\text{instruction, demonstrations, text}}.M}$, and a new entity extractor can be implicitly constructed by applying new instructions and demonstrations to the PLM, i.e., $(\lambda.\mathcal{M})(\text{instruction, demonstrations}) \to \mathcal{F}$, where $\mathcal{F}$ is the new entity extractor.

Diversity few-shot-ner +5

Retentive or Forgetful? Diving into the Knowledge Memorizing Mechanism of Language Models

no code implementations16 May 2023 Boxi Cao, Qiaoyu Tang, Hongyu Lin, Shanshan Jiang, Bin Dong, Xianpei Han, Jiawei Chen, Tianshu Wang, Le Sun

Memory is one of the most essential cognitive functions serving as a repository of world knowledge and episodes of activities.

World Knowledge

Harvesting Event Schemas from Large Language Models

1 code implementation12 May 2023 Jialong Tang, Hongyu Lin, Zhuoqun Li, Yaojie Lu, Xianpei Han, Le Sun

Event schema provides a conceptual, structural and formal language to represent events and model the world event knowledge.

Diversity

The Life Cycle of Knowledge in Big Language Models: A Survey

1 code implementation14 Mar 2023 Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun

Knowledge plays a critical role in artificial intelligence.

Universal Information Extraction as Unified Semantic Matching

no code implementations9 Jan 2023 Jie Lou, Yaojie Lu, Dai Dai, Wei Jia, Hongyu Lin, Xianpei Han, Le Sun, Hua Wu

Based on this paradigm, we propose to universally model various IE tasks with the Unified Semantic Matching (USM) framework, which introduces three unified token linking operations to model the abilities of structuring and conceptualizing.

Diversity

Bridging the Gap between Reality and Ideality of Entity Matching: A Revisiting and Benchmark Re-Construction

no code implementations12 May 2022 Tianshu Wang, Hongyu Lin, Cheng Fu, Xianpei Han, Le Sun, Feiyu Xiong, Hui Chen, Minlong Lu, Xiuwen Zhu

Experimental results demonstrate that the assumptions made in the previous benchmark construction process do not hold in the open environment; they conceal the main challenges of the task and therefore significantly overestimate the current progress of entity matching.

Entity Resolution

Few-shot Named Entity Recognition with Self-describing Networks

1 code implementation ACL 2022 Jiawei Chen, Qing Liu, Hongyu Lin, Xianpei Han, Le Sun

In this paper, we propose a self-describing mechanism for few-shot NER, which can effectively leverage illustrative instances and precisely transfer knowledge from external resources by describing both entity types and mentions using a universal concept set.

Few-shot NER Named Entity Recognition

Pre-training to Match for Unified Low-shot Relation Extraction

1 code implementation ACL 2022 Fangchao Liu, Hongyu Lin, Xianpei Han, Boxi Cao, Le Sun

Low-shot relation extraction (RE) aims to recognize novel relations with very few or even no samples, which is critical in real-world applications.

Meta-Learning Relation +2

Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View

1 code implementation ACL 2022 Boxi Cao, Hongyu Lin, Xianpei Han, Fangchao Liu, Le Sun

Prompt-based probing has been widely used in evaluating the abilities of pretrained language models (PLMs).

Procedural Text Understanding via Scene-Wise Evolution

no code implementations15 Mar 2022 Jialong Tang, Hongyu Lin, Meng Liao, Yaojie Lu, Xianpei Han, Le Sun, Weijian Xie, Jin Xu

In this paper, we propose a new scene-wise paradigm for procedural text understanding, which jointly tracks states of all entities in a scene-by-scene manner.

Procedural Text Understanding

Fine-grained Entity Typing via Label Reasoning

no code implementations EMNLP 2021 Qing Liu, Hongyu Lin, Xinyan Xiao, Xianpei Han, Le Sun, Hua Wu

Conventional entity typing approaches are based on independent classification paradigms, which make them difficult to recognize inter-dependent, long-tailed and fine-grained entity types.

Attribute Entity Typing

Honey or Poison? Solving the Trigger Curse in Few-shot Event Detection via Causal Intervention

1 code implementation EMNLP 2021 Jiawei Chen, Hongyu Lin, Xianpei Han, Le Sun

In this paper, we identify and solve the trigger curse problem in few-shot event detection (FSED) from a causal view.

Event Detection

Denoising Distantly Supervised Named Entity Recognition via a Hypergeometric Probabilistic Model

1 code implementation17 Jun 2021 Wenkai Zhang, Hongyu Lin, Xianpei Han, Le Sun, Huidan Liu, Zhicheng Wei, Nicholas Jing Yuan

Specifically, during neural network training, we naturally model the noise samples in each batch as following a hypergeometric distribution parameterized by the noise rate.

Denoising named-entity-recognition +2
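
For intuition about the hypergeometric modelling mentioned above: if a corpus of N distantly supervised instances contains K noisy ones and a batch of n is drawn without replacement, the number of noisy instances in the batch is hypergeometric. The corpus size and noise rate below are assumed values, and the snippet only illustrates the distribution, not the paper's estimator.

```python
# Noisy-instance count per batch drawn without replacement follows a
# hypergeometric distribution. Illustrative only.
from scipy.stats import hypergeom

N = 100_000               # corpus size (assumed)
noise_rate = 0.3          # assumed noise rate of the distant supervision
K = int(noise_rate * N)   # noisy instances in the corpus
n = 64                    # batch size

dist = hypergeom(N, K, n)   # (population size, noisy instances, draws per batch)
print(f"expected noisy instances per batch: {dist.mean():.1f}")
print(f"P(at most 10 noisy in a batch)    : {dist.cdf(10):.4f}")
```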

Element Intervention for Open Relation Extraction

no code implementations ACL 2021 Fangchao Liu, Lingyong Yan, Hongyu Lin, Xianpei Han, Le Sun

Open relation extraction aims to cluster relation instances referring to the same underlying relation, which is a critical step for general relation extraction.

Relation Relation Extraction

Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases

1 code implementation ACL 2021 Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun, Lingyong Yan, Meng Liao, Tong Xue, Jin Xu

Previous literature shows that pre-trained masked language models (MLMs) such as BERT can achieve competitive factual knowledge extraction performance on some datasets, indicating that MLMs can potentially be a reliable knowledge source.

End-to-End Neural Event Coreference Resolution

1 code implementation17 Sep 2020 Yaojie Lu, Hongyu Lin, Jialong Tang, Xianpei Han, Le Sun

Traditional event coreference systems usually rely on a pipeline framework and hand-crafted features, which often face the error propagation problem and have poor generalization ability.

coreference-resolution Event Coreference Resolution +1

ISCAS at SemEval-2020 Task 5: Pre-trained Transformers for Counterfactual Statement Modeling

1 code implementation SEMEVAL 2020 Yaojie Lu, Annan Li, Hongyu Lin, Xianpei Han, Le Sun

ISCAS participated in two subtasks of SemEval 2020 Task 5: detecting counterfactual statements and detecting antecedent and consequence.

counterfactual Question Answering

A Rigorous Study on Named Entity Recognition: Can Fine-tuning Pretrained Model Lead to the Promised Land?

no code implementations EMNLP 2020 Hongyu Lin, Yaojie Lu, Jialong Tang, Xianpei Han, Le Sun, Zhicheng Wei, Nicholas Jing Yuan

Specifically, we erase name regularity, mention coverage and context diversity respectively from the benchmarks, in order to explore their impact on the generalization ability of models.

Diversity named-entity-recognition +2

Gazetteer-Enhanced Attentive Neural Networks for Named Entity Recognition

no code implementations IJCNLP 2019 Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun, Bin Dong, Shanshan Jiang

Current region-based NER models rely only on fully-annotated training data to learn an effective region encoder, and thus often face the training data bottleneck.

named-entity-recognition Named Entity Recognition +1

Distilling Discrimination and Generalization Knowledge for Event Detection via Delta-Representation Learning

1 code implementation ACL 2019 Yaojie Lu, Hongyu Lin, Xianpei Han, Le Sun

Event detection systems rely on discrimination knowledge to distinguish ambiguous trigger words and generalization knowledge to detect unseen/sparse trigger words.

Event Detection Representation Learning

Cost-sensitive Regularization for Label Confusion-aware Event Detection

1 code implementation ACL 2019 Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun

In supervised event detection, most of the mislabeling occurs between a small number of confusing type pairs, including trigger-NIL pairs and sibling sub-types of the same coarse type.

Event Detection Vocal Bursts Type Prediction
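
A rough sketch of the general cost-sensitive idea (penalising confusions between specified type pairs more heavily via a cost matrix added to cross-entropy); the label set, cost values, and weighting are assumptions for illustration, not the paper's regularizer.

```python
# Cost-sensitive cross-entropy sketch: confusing type pairs (e.g. trigger vs NIL,
# sibling sub-types) receive a higher penalty via a cost matrix. Illustrative only.
import numpy as np

labels = ["NIL", "Attack", "Injure"]          # toy label set
cost = np.ones((3, 3)) - np.eye(3)            # base cost 1 for any confusion
cost[0, 1] = cost[1, 0] = 3.0                 # trigger-NIL confusions cost more
cost[1, 2] = cost[2, 1] = 2.0                 # sibling sub-type confusions cost more

def cost_sensitive_loss(probs: np.ndarray, gold: int) -> float:
    ce = -np.log(probs[gold] + 1e-12)          # standard cross-entropy
    expected_cost = float(cost[gold] @ probs)  # expected misclassification cost
    return ce + 0.5 * expected_cost            # 0.5 is an assumed trade-off weight

probs = np.array([0.6, 0.3, 0.1])   # model puts most mass on NIL
print(f"loss for gold=Attack: {cost_sensitive_loss(probs, gold=1):.3f}")
```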

Sequence-to-Nuggets: Nested Entity Mention Detection via Anchor-Region Networks

1 code implementation ACL 2019 Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun

In this paper, we propose to resolve this problem by modeling and leveraging the head-driven phrase structures of entity mentions, i.e., although a mention can nest other mentions, they will not share the same head word.

NER Nested Mention Recognition +1

Nugget Proposal Networks for Chinese Event Detection

1 code implementation ACL 2018 Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun

Neural network based models commonly regard event detection as a word-wise classification task, which suffer from the mismatch problem between words and event triggers, especially in languages without natural word delimiters such as Chinese.

Event Detection General Classification

Adaptive Scaling for Sparse Detection in Information Extraction

1 code implementation ACL 2018 Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun

This paper focuses on detection tasks in information extraction, where positive instances are sparsely distributed and models are usually evaluated using F-measure on positive classes.
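
For clarity on the evaluation setup mentioned above, here is a minimal computation of F-measure on the positive class (the standard definition, unrelated to the adaptive-scaling method itself); the toy gold/prediction vectors are assumptions.

```python
# F-measure on the positive class: precision and recall are computed only over
# predicted/true positives, so abundant negatives do not inflate the score.
def positive_f1(gold: list[int], pred: list[int]) -> float:
    tp = sum(g == 1 and p == 1 for g, p in zip(gold, pred))
    fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
    fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Sparse positives: only two true positives among many negatives.
gold = [0, 0, 0, 1, 0, 0, 1, 0, 0, 0]
pred = [0, 1, 0, 1, 0, 0, 0, 0, 0, 0]
print(f"positive-class F1: {positive_f1(gold, pred):.2f}")   # 0.50
```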

Reasoning with Heterogeneous Knowledge for Commonsense Machine Comprehension

no code implementations EMNLP 2017 Hongyu Lin, Le Sun, Xianpei Han

Then we propose a multi-knowledge reasoning model, which selects inference rules for a specific reasoning context using an attention mechanism, and reasons by summarizing all valid inference rules.

Natural Language Understanding Reading Comprehension +1
