1 code implementation • SemEval (NAACL) 2022 • Xinyu Lu, Mengjie Ren, Yaojie Lu, Hongyu Lin
ISCAS participated in both sub-tasks in SemEval-2022 Task 10: Structured Sentiment competition.
no code implementations • LREC 2022 • Zheng Chen, Hongyu Lin
Cross-lingual summarization, which produces the summary in one language from a given source document in another language, could be extremely helpful for humans to obtain information across the world.
Tasks: Abstractive Text Summarization • Cross-Lingual Abstractive Summarization
no code implementations • 11 Oct 2024 • Zhuoqun Li, Xuanang Chen, Haiyang Yu, Hongyu Lin, Yaojie Lu, Qiaoyu Tang, Fei Huang, Xianpei Han, Le Sun, Yongbin Li
Retrieval-augmented generation (RAG) is a key means to effectively enhance large language models (LLMs) in many knowledge-based tasks.
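The RAG setup described above can be illustrated with a minimal, generic sketch (this is not the paper's method; the token-overlap retriever and the prompt template are illustrative stand-ins for a real dense retriever and an LLM call):

```python
# Generic retrieval-augmented generation (RAG) sketch -- illustrative only.
# Retrieval here is simple token-overlap scoring; a real system would use a
# dense retriever and pass the prompt to an actual LLM.

def retrieve(query, corpus, k=1):
    """Rank documents by token overlap with the query and return the top-k."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, passages):
    """Prepend retrieved passages so the model can ground its answer."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "RAG augments a language model with retrieved passages.",
    "Entity matching links records that refer to the same entity.",
]
query = "What does RAG augment?"
prompt = build_prompt(query, retrieve(query, corpus))
```

The assembled `prompt` would then be sent to the generator; the knowledge lives in the corpus rather than in the model's parameters.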
no code implementations • 10 Oct 2024 • Jiasheng Zheng, Hongyu Lin, Boxi Cao, Meng Liao, Yaojie Lu, Xianpei Han, Le Sun
Evaluating the quality of documents is essential for filtering valuable content from the current massive amount of information.
1 code implementation • 9 Oct 2024 • Zichao Li, Shaojie He, Meng Liao, Xuanang Chen, Yaojie Lu, Hongyu Lin, Yanxiong Lu, Xianpei Han, Le Sun
Document logical structuring aims to extract the underlying hierarchical structure of documents, which is crucial for document intelligence.
no code implementations • 8 Oct 2024 • Xueru Wen, Jie Lou, Yaojie Lu, Hongyu Lin, Xing Yu, Xinyu Lu, Ben He, Xianpei Han, Debing Zhang, Le Sun
Although this method is straightforward and widely adopted, the relationship between RM accuracy and downstream policy performance remains under-explored.
no code implementations • 8 Sep 2024 • Zichao Li, Aizier Abulaiti, Yaojie Lu, Xuanang Chen, Jia Zheng, Hongyu Lin, Xianpei Han, Le Sun
Document Structured Extraction (DSE) aims to extract structured content from raw documents.
no code implementations • 29 Aug 2024 • Xin Zheng, Jie Lou, Boxi Cao, Xueru Wen, Yuqiu Ji, Hongyu Lin, Yaojie Lu, Xianpei Han, Debing Zhang, Le Sun
Self-critic has become a crucial mechanism for enhancing the reasoning performance of LLMs.
no code implementations • 23 Aug 2024 • Ruiyang Xu, Jialun Cao, Yaojie Lu, Hongyu Lin, Xianpei Han, Ben He, Shing-Chi Cheung, Le Sun
However, there is a non-negligible programming language bias in existing code benchmarks: over 95% of code generation benchmarks are dominated by Python, leaving the LLMs' capabilities in other programming languages such as Java and C/C++ unknown.
no code implementations • 23 Aug 2024 • Qiming Zhu, Jialun Cao, Yaojie Lu, Hongyu Lin, Xianpei Han, Le Sun, Shing-Chi Cheung
We notice that LLMs are generally good at computation tasks while falling short on cryptography and system coding tasks.
1 code implementation • 20 Aug 2024 • Shu Chen, Xinyan Guan, Yaojie Lu, Hongyu Lin, Xianpei Han, Le Sun
Manually annotating instruction data for large language models is difficult, costly, and hard to scale.
1 code implementation • 6 Aug 2024 • Boxi Cao, Mengjie Ren, Hongyu Lin, Xianpei Han, Feng Zhang, Junfeng Zhan, Le Sun
Evaluation is the baton for the development of large language models.
1 code implementation • 16 Jul 2024 • Jiasheng Zheng, Boxi Cao, Zhengzhao Ma, Ruotong Pan, Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun
In recent years, researchers have proposed numerous benchmarks to evaluate the impressive coding capabilities of large language models (LLMs).
no code implementations • 18 Jun 2024 • Xueru Wen, Xinyu Lu, Xinyan Guan, Yaojie Lu, Hongyu Lin, Ben He, Xianpei Han, Le Sun
Previous learning-based methods focus on detecting knowledge boundaries and finetuning models with instance-level feedback, but they suffer from inaccurate signals due to off-policy data sampling and coarse-grained feedback.
1 code implementation • 5 Jun 2024 • Shiguang Guo, Ziliang Deng, Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun
In this paper, we propose a new planning task: open grounded planning.
1 code implementation • 3 Jun 2024 • Boxi Cao, Keming Lu, Xinyu Lu, Jiawei Chen, Mengjie Ren, Hao Xiang, Peilin Liu, Yaojie Lu, Ben He, Xianpei Han, Le Sun, Hongyu Lin, Bowen Yu
Alignment is the most critical step in building large language models (LLMs) that meet human needs.
1 code implementation • 27 May 2024 • Tianshu Wang, Xiaoyang Chen, Hongyu Lin, Xuanang Chen, Xianpei Han, Hao Wang, Zhenyu Zeng, Le Sun
Based on our findings, we further design a compound entity matching framework (ComEM) that leverages the composition of multiple strategies and LLMs.
no code implementations • 23 May 2024 • Xin Men, Mingyu Xu, Bingning Wang, Qingyu Zhang, Hongyu Lin, Xianpei Han, WeiPeng Chen
We revisit the role of RoPE in LLMs and identify a novel long-term decay property; from it we derive that the base of RoPE bounds the context length: there is an absolute lower bound on the base value required to achieve a given context length capability.
1 code implementation • 24 Apr 2024 • Zhuoqun Li, Hongyu Lin, Tianshu Wang, Boxi Cao, Yaojie Lu, Weixiang Zhou, Hao Wang, Zhenyu Zeng, Le Sun, Xianpei Han
Linking a claim to grounded references is a critical ability to fulfill human demands for authentic and reliable information.
2 code implementations • 23 Apr 2024 • Tianshu Wang, Hongyu Lin, Xianpei Han, Xiaoyang Chen, Boxi Cao, Le Sun
Blocking is a critical step in entity resolution, and the emergence of neural network-based representation models has led to the development of dense blocking as a promising approach for exploring deep semantics in blocking.
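The idea behind dense blocking can be sketched in a few lines (an illustration, not the paper's model: embeddings here are bag-of-words vectors, whereas real dense blockers use learned neural encoders):

```python
# Dense-blocking sketch: embed each record, then propose candidate pairs by
# nearest-neighbour search in embedding space. Bag-of-words vectors stand in
# for a learned encoder purely for illustration.
import numpy as np

def embed(texts, vocab):
    """L2-normalized bag-of-words embeddings, so dot product = cosine similarity."""
    vecs = np.zeros((len(texts), len(vocab)))
    for i, t in enumerate(texts):
        for w in t.lower().split():
            if w in vocab:
                vecs[i, vocab[w]] += 1.0
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / np.maximum(norms, 1e-9)

records = ["apple iphone 13 128gb", "iphone 13 apple 128 gb", "samsung galaxy s22"]
vocab = {w: i for i, w in enumerate(sorted({w for r in records for w in r.split()}))}
emb = embed(records, vocab)
sims = emb @ emb.T                     # pairwise cosine similarities
np.fill_diagonal(sims, -1.0)           # exclude self-matches
candidate = int(np.argmax(sims[0]))    # nearest neighbour of record 0
```

Record 0 and record 1 describe the same product with different surface forms, so nearest-neighbour search blocks them together while the unrelated record is left out.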
1 code implementation • 16 Apr 2024 • Xiaoyang Chen, Ben He, Hongyu Lin, Xianpei Han, Tianshu Wang, Boxi Cao, Le Sun, Yingfei Sun
The practice of Retrieval-Augmented Generation (RAG), which integrates Large Language Models (LLMs) with retrieval systems, has become increasingly prevalent.
1 code implementation • 10 Apr 2024 • Ruotong Pan, Boxi Cao, Hongyu Lin, Xianpei Han, Jia Zheng, Sirui Wang, Xunliang Cai, Le Sun
In this paper, we propose Credibility-aware Generation (CAG), a universally applicable framework designed to mitigate the impact of flawed information in RAG.
1 code implementation • 25 Mar 2024 • Jiawei Chen, Hongyu Lin, Xianpei Han, Yaojie Lu, Shanshan Jiang, Bin Dong, Le Sun
Then a superposition instance retriever is applied to retrieve corresponding instances of these superposition concepts from a large-scale text corpus.
1 code implementation • 14 Mar 2024 • Zhuoqun Li, Hongyu Lin, Yaojie Lu, Hao Xiang, Xianpei Han, Le Sun
Declarative knowledge and procedural knowledge are two key components of meta-cognitive theory, and both are of significant importance in the pre-training and inference of LLMs.
1 code implementation • 11 Mar 2024 • Ruoxi Xu, Hongyu Lin, Xianpei Han, Le Sun, Yingfei Sun
The academic intelligence of large language models (LLMs) has made remarkable progress recently, but their performance in social intelligence remains unclear.
no code implementations • 6 Mar 2024 • Xin Men, Mingyu Xu, Qingyu Zhang, Bingning Wang, Hongyu Lin, Yaojie Lu, Xianpei Han, WeiPeng Chen
As Large Language Models (LLMs) continue to advance in performance, their size has escalated significantly, with current LLMs containing billions or even trillions of parameters.
1 code implementation • 28 Feb 2024 • Mengjie Ren, Boxi Cao, Hongyu Lin, Cao Liu, Xianpei Han, Ke Zeng, Guanglu Wan, Xunliang Cai, Le Sun
Instruction Fine-tuning (IFT) is a critical phase in building large language models (LLMs).
1 code implementation • 27 Feb 2024 • Xinyu Lu, Bowen Yu, Yaojie Lu, Hongyu Lin, Haiyang Yu, Le Sun, Xianpei Han, Yongbin Li
The alignment problem in Large Language Models (LLMs) involves adapting them to the broad spectrum of human values.
1 code implementation • 23 Feb 2024 • Xin Zheng, Qiming Zhu, Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun
In this paper, we seek to examine the capacity of present-day LLMs to comprehend and execute algorithms outlined in natural language.
no code implementations • 23 Feb 2024 • Qiaoyu Tang, Jiawei Chen, Bowen Yu, Yaojie Lu, Cheng Fu, Haiyang Yu, Hongyu Lin, Fei Huang, Ben He, Xianpei Han, Le Sun, Yongbin Li
The rise of large language models (LLMs) has transformed the role of information retrieval (IR) systems in the way humans access information.
no code implementations • 22 Feb 2024 • Ning Bian, Xianpei Han, Hongyu Lin, Yaojie Lu, Ben He, Le Sun
Building machines with commonsense has been a longstanding challenge in NLP due to the reporting bias of commonsense rules and the exposure bias of rule-based commonsense reasoning.
no code implementations • 22 Jan 2024 • Ruoxi Xu, Yingfei Sun, Mengjie Ren, Shiguang Guo, Ruotong Pan, Hongyu Lin, Le Sun, Xianpei Han
Recent advancements in artificial intelligence, particularly the emergence of large language models (LLMs), have sparked a rethinking of the possibilities of artificial general intelligence.
1 code implementation • 6 Dec 2023 • Tianshu Wang, Hongyu Lin, Xianpei Han, Le Sun, Xiaoyang Chen, Hao Wang, Zhenyu Zeng
Text-to-SQL simplifies database interactions by enabling non-experts to convert their natural language (NL) questions into Structured Query Language (SQL) queries.
no code implementations • 22 Nov 2023 • Xinyan Guan, Yanjiang Liu, Hongyu Lin, Yaojie Lu, Ben He, Xianpei Han, Le Sun
Incorporating factual knowledge from knowledge graphs is regarded as a promising approach for mitigating the hallucination of large language models (LLMs).
1 code implementation • 19 Sep 2023 • Xin Zheng, Hongyu Lin, Xianpei Han, Le Sun
Controllable text generation is a fundamental aspect of natural language generation, with numerous methods proposed for different constraint types.
1 code implementation • 4 Sep 2023 • Jiawei Chen, Hongyu Lin, Xianpei Han, Le Sun
In this paper, we systematically investigate the impact of Retrieval-Augmented Generation on large language models.
2 code implementations • 8 Jun 2023 • Qiaoyu Tang, Ziliang Deng, Hongyu Lin, Xianpei Han, Qiao Liang, Boxi Cao, Le Sun
Existing approaches to tool learning have either relied primarily on extremely large language models, such as GPT-4, to attain generalized tool-use abilities in a zero-shot manner, or utilized supervised learning to train compact models on limited scopes of tools.
2 code implementations • 18 May 2023 • Jiawei Chen, Yaojie Lu, Hongyu Lin, Jie Lou, Wei Jia, Dai Dai, Hua Wu, Boxi Cao, Xianpei Han, Le Sun
A new entity extractor can be implicitly constructed by applying new instructions and demonstrations to PLMs.
no code implementations • 16 May 2023 • Ruoxi Xu, Hongyu Lin, Xinyan Guan, Xianpei Han, Yingfei Sun, Le Sun
Understanding documents is central to many real-world tasks but remains a challenging topic.
no code implementations • 16 May 2023 • Boxi Cao, Qiaoyu Tang, Hongyu Lin, Shanshan Jiang, Bin Dong, Xianpei Han, Jiawei Chen, Tianshu Wang, Le Sun
Memory is one of the most essential cognitive functions serving as a repository of world knowledge and episodes of activities.
1 code implementation • 12 May 2023 • Jialong Tang, Hongyu Lin, Zhuoqun Li, Yaojie Lu, Xianpei Han, Le Sun
Event schema provides a conceptual, structural and formal language to represent events and model the world event knowledge.
no code implementations • 8 May 2023 • Ning Bian, Hongyu Lin, Peilin Liu, Yaojie Lu, Chunkang Zhang, Ben He, Xianpei Han, Le Sun
LLMs, as AI agents, can observe external information, which shapes their cognition and behaviors.
no code implementations • 29 Mar 2023 • Ning Bian, Xianpei Han, Le Sun, Hongyu Lin, Yaojie Lu, Ben He, Shanshan Jiang, Bin Dong
(4) Can ChatGPT effectively leverage commonsense for answering questions?
1 code implementation • 14 Mar 2023 • Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun
Knowledge plays a critical role in artificial intelligence.
no code implementations • 9 Jan 2023 • Jie Lou, Yaojie Lu, Dai Dai, Wei Jia, Hongyu Lin, Xianpei Han, Le Sun, Hua Wu
Based on this paradigm, we propose to universally model various IE tasks with Unified Semantic Matching (USM) framework, which introduces three unified token linking operations to model the abilities of structuring and conceptualizing.
no code implementations • 12 May 2022 • Tianshu Wang, Hongyu Lin, Cheng Fu, Xianpei Han, Le Sun, Feiyu Xiong, Hui Chen, Minlong Lu, Xiuwen Zhu
Experimental results demonstrate that the assumptions made in the previous benchmark construction process do not hold in the open environment; this conceals the main challenges of the task and therefore significantly overestimates the current progress of entity matching.
1 code implementation • ACL 2022 • Jiawei Chen, Qing Liu, Hongyu Lin, Xianpei Han, Le Sun
In this paper, we propose a self-describing mechanism for few-shot NER, which can effectively leverage illustrative instances and precisely transfer knowledge from external resources by describing both entity types and mentions using a universal concept set.
2 code implementations • ACL 2022 • Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, Hua Wu
Information extraction suffers from its varying targets, heterogeneous structures, and demand-specific schemas.
Ranked #4 on Aspect-Based Sentiment Analysis (ABSA) on ASTE (using extra training data)
1 code implementation • ACL 2022 • Fangchao Liu, Hongyu Lin, Xianpei Han, Boxi Cao, Le Sun
Low-shot relation extraction (RE) aims to recognize novel relations with very few or even no samples, which is critical in real-world applications.
no code implementations • Findings (ACL) 2022 • Ruoxi Xu, Hongyu Lin, Meng Liao, Xianpei Han, Jin Xu, Wei Tan, Yingfei Sun, Le Sun
Events are considered the fundamental building blocks of the world.
1 code implementation • ACL 2022 • Boxi Cao, Hongyu Lin, Xianpei Han, Fangchao Liu, Le Sun
Prompt-based probing has been widely used in evaluating the abilities of pretrained language models (PLMs).
no code implementations • 15 Mar 2022 • Jialong Tang, Hongyu Lin, Meng Liao, Yaojie Lu, Xianpei Han, Le Sun, Weijian Xie, Jin Xu
In this paper, we propose a new scene-wise paradigm for procedural text understanding, which jointly tracks the states of all entities in a scene-by-scene manner.
no code implementations • EMNLP 2021 • Qing Liu, Hongyu Lin, Xinyan Xiao, Xianpei Han, Le Sun, Hua Wu
Conventional entity typing approaches are based on independent classification paradigms, which makes it difficult for them to recognize inter-dependent, long-tailed, and fine-grained entity types.
Ranked #8 on Entity Typing on Open Entity
1 code implementation • EMNLP 2021 • Jiawei Chen, Hongyu Lin, Xianpei Han, Le Sun
In this paper, we identify and solve the trigger curse problem in few-shot event detection (FSED) from a causal view.
no code implementations • 19 Jul 2021 • Ning Bian, Xianpei Han, Bo Chen, Hongyu Lin, Ben He, Le Sun
In this paper, we propose a new framework for unsupervised MRC.
1 code implementation • ACL 2021 • Yaojie Lu, Hongyu Lin, Jin Xu, Xianpei Han, Jialong Tang, Annan Li, Le Sun, Meng Liao, Shaoyi Chen
Event extraction is challenging due to the complex structure of event records and the semantic gap between text and event.
Ranked #3 on Event Extraction on ACE2005
1 code implementation • ACL 2021 • Wenkai Zhang, Hongyu Lin, Xianpei Han, Le Sun
Distant supervision tackles the data bottleneck in NER by automatically generating training instances via dictionary matching.
1 code implementation • 17 Jun 2021 • Wenkai Zhang, Hongyu Lin, Xianpei Han, Le Sun, Huidan Liu, Zhicheng Wei, Nicholas Jing Yuan
Specifically, during neural network training, we naturally model the noise samples in each batch as following a hypergeometric distribution parameterized by the noise rate.
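The hypergeometric view of batch noise can be checked numerically (a sketch of the distributional claim only, not the paper's training code; the pool size, noise rate, and batch size below are made-up illustration values):

```python
# With N training instances of which K are noisy, the number of noisy samples
# drawn into a batch of size n (sampling without replacement) follows
# Hypergeometric(N, K, n), with mean n * K / N.
import numpy as np

rng = np.random.default_rng(0)
N, K, n = 10_000, 3_000, 64            # pool size, noisy count (30% noise rate), batch size
draws = rng.hypergeometric(K, N - K, n, size=100_000)
mean_noisy = draws.mean()              # should be close to n * K / N = 19.2
```

This is the quantity a noise-robust training scheme can condition on: the expected per-batch noise count is fixed by the noise rate even though the realized count varies batch to batch.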
no code implementations • ACL 2021 • Fangchao Liu, Lingyong Yan, Hongyu Lin, Xianpei Han, Le Sun
Open relation extraction aims to cluster relation instances referring to the same underlying relation, which is a critical step for general relation extraction.
1 code implementation • ACL 2021 • Boxi Cao, Hongyu Lin, Xianpei Han, Le Sun, Lingyong Yan, Meng Liao, Tong Xue, Jin Xu
Previous literature shows that pre-trained masked language models (MLMs) such as BERT can achieve competitive factual knowledge extraction performance on some datasets, indicating that MLMs can potentially serve as a reliable knowledge source.
1 code implementation • ACL 2021 • Jialong Tang, Hongyu Lin, Meng Liao, Yaojie Lu, Xianpei Han, Le Sun, Weijian Xie, Jin Xu
Current event-centric knowledge graphs highly rely on explicit connectives to mine relations between events.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Jialong Tang, Yaojie Lu, Hongyu Lin, Xianpei Han, Le Sun, Xinyan Xiao, Hua Wu
One of the biggest bottlenecks in building accurate, high coverage neural open IE systems is the need for large labelled corpora.
1 code implementation • 17 Sep 2020 • Yaojie Lu, Hongyu Lin, Jialong Tang, Xianpei Han, Le Sun
Traditional event coreference systems usually rely on pipeline frameworks and hand-crafted features, which often face the error propagation problem and have poor generalization ability.
1 code implementation • SEMEVAL 2020 • Yaojie Lu, Annan Li, Hongyu Lin, Xianpei Han, Le Sun
ISCAS participated in two subtasks of SemEval 2020 Task 5: detecting counterfactual statements and detecting antecedent and consequence.
no code implementations • EMNLP 2020 • Hongyu Lin, Yaojie Lu, Jialong Tang, Xianpei Han, Le Sun, Zhicheng Wei, Nicholas Jing Yuan
Specifically, we erase name regularity, mention coverage and context diversity respectively from the benchmarks, in order to explore their impact on the generalization ability of models.
no code implementations • IJCNLP 2019 • Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun, Bin Dong, Shanshan Jiang
Current region-based NER models rely only on fully-annotated training data to learn an effective region encoder, and therefore often face the training data bottleneck.
1 code implementation • ACL 2019 • Yaojie Lu, Hongyu Lin, Xianpei Han, Le Sun
Event detection systems rely on discrimination knowledge to distinguish ambiguous trigger words and generalization knowledge to detect unseen/sparse trigger words.
1 code implementation • ACL 2019 • Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun
In supervised event detection, most of the mislabeling occurs between a small number of confusing type pairs, including trigger-NIL pairs and sibling sub-types of the same coarse type.
1 code implementation • ACL 2019 • Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun
In this paper, we propose to resolve this problem by modeling and leveraging the head-driven phrase structures of entity mentions, i.e., although a mention can nest other mentions, they will not share the same head word.
Ranked #7 on Nested Mention Recognition on ACE 2005
1 code implementation • ACL 2018 • Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun
Neural network based models commonly regard event detection as a word-wise classification task, and therefore suffer from the mismatch problem between words and event triggers, especially in languages without natural word delimiters such as Chinese.
1 code implementation • ACL 2018 • Hongyu Lin, Yaojie Lu, Xianpei Han, Le Sun
This paper focuses on detection tasks in information extraction, where positive instances are sparsely distributed and models are usually evaluated using F-measure on positive classes.
no code implementations • EMNLP 2017 • Hongyu Lin, Le Sun, Xianpei Han
Then we propose a multi-knowledge reasoning model, which selects inference rules for a specific reasoning context using an attention mechanism, and reasons by summarizing all valid inference rules.