no code implementations • EMNLP 2021 • Bowen Yu, Yucheng Wang, Tingwen Liu, Hongsong Zhu, Limin Sun, Bin Wang
However, popular OpenIE systems usually output facts sequentially, predicting the next fact conditioned on the previously decoded ones, which enforces an unnecessary order on the facts and introduces error accumulation between autoregressive steps.
3 code implementations • ACL 2022 • Yanzeng Li, Jiangxia Cao, Xin Cong, Zhenyu Zhang, Bowen Yu, Hongsong Zhu, Tingwen Liu
Chinese pre-trained language models usually exploit contextual character information to learn representations, while ignoring linguistic knowledge, e.g., word and sentence information.
no code implementations • 7 Dec 2024 • Yilong Chen, Junyuan Shang, Zhengyu Zhang, Jiawei Sheng, Tingwen Liu, Shuohuan Wang, Yu Sun, Hua Wu, Haifeng Wang
MOHD offers a new perspective on scaling the model, showcasing the potential of hidden-dimension sparsity to boost efficiency.
no code implementations • 17 Oct 2024 • Chuanyu Tang, Yilong Chen, Zhenyu Zhang, Junyuan Shang, Wenyuan Zhang, Yong Huang, Tingwen Liu
Low-Rank Adaptation (LoRA) drives research to align its performance with full fine-tuning.
1 code implementation • 18 Sep 2024 • Wenyuan Zhang, Jiawei Sheng, Shuaiyi Nie, Zefeng Zhang, Xinghua Zhang, Yongquan He, Tingwen Liu
Large language model (LLM) role-playing has gained widespread attention, where the authentic character knowledge is crucial for constructing realistic LLM role-playing agents.
2 code implementations • 7 Aug 2024 • Yilong Chen, Guoxia Wang, Junyuan Shang, Shiyao Cui, Zhenyu Zhang, Tingwen Liu, Shuohuan Wang, Yu Sun, Dianhai Yu, Hua Wu
Large Language Models (LLMs) have ignited an innovative surge of AI applications, marking a new era of exciting possibilities equipped with extended context windows.
1 code implementation • 29 Jul 2024 • Taoyu Su, Xinghua Zhang, Jiawei Sheng, Zhenyu Zhang, Tingwen Liu
Other studies refine each uni-modal information with graph structures, but may introduce unnecessary relations in specific modalities.
1 code implementation • 27 Jul 2024 • Taoyu Su, Jiawei Sheng, Shicheng Wang, Xinghua Zhang, Hongbo Xu, Tingwen Liu
To this end, we explore variational information bottleneck for multi-modal entity alignment (IBMEA), which emphasizes the alignment-relevant information and suppresses the alignment-irrelevant information in generating entity representations.
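The bottleneck idea above can be sketched as a loss with two pulls: an alignment term that preserves alignment-relevant information and a KL term toward a simple prior that suppresses the rest. This is a minimal illustrative sketch, not IBMEA's actual interface; the function name, argument shapes, and margin formulation are assumptions.

```python
import math

def vib_alignment_loss(mu, log_var, pos_dist, neg_dist, beta=0.01, margin=1.0):
    """Sketch of an information-bottleneck-style objective for entity alignment.

    A margin-based alignment term keeps alignment-relevant information, while
    a KL term toward a standard normal prior suppresses the rest. `mu` and
    `log_var` parameterize a diagonal-Gaussian entity representation; names
    and shapes here are illustrative, not the paper's actual interface.
    """
    # Margin loss: pull aligned pairs (pos_dist) together, push non-aligned
    # pairs (neg_dist) at least `margin` apart.
    align = sum(max(0.0, p - n + margin)
                for p, n in zip(pos_dist, neg_dist)) / len(pos_dist)
    # Closed-form KL(N(mu, sigma^2) || N(0, 1)) summed over dimensions.
    kl = sum(0.5 * (m * m + math.exp(lv) - lv - 1.0)
             for m, lv in zip(mu, log_var))
    return align + beta * kl

# A zero-mean, unit-variance representation with well-separated pairs
# incurs zero loss from both terms.
loss = vib_alignment_loss([0.0, 0.0], [0.0, 0.0], [0.0], [2.0])
```

The `beta` weight trades compression against alignment accuracy, which is the knob the bottleneck principle turns.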
1 code implementation • 4 Jun 2024 • Zefeng Zhang, Jiawei Sheng, Chuang Zhang, Yunzhi Liang, Wenyuan Zhang, Siqi Wang, Tingwen Liu
Thereby, we exploit the correlation between multimodal features to enhance multimodal fusion, and the correlation between mentions and entities to enhance fine-grained matching.
no code implementations • 3 Jun 2024 • Yilong Chen, Linhao Zhang, Junyuan Shang, Zhenyu Zhang, Tingwen Liu, Shuohuan Wang, Yu Sun
Large language models (LLMs) with billions of parameters demonstrate impressive performance.
no code implementations • 23 Jan 2024 • XiaoDong Li, Jiawei Sheng, Jiangxia Cao, Wenyuan Zhang, Quangang Li, Tingwen Liu
Cross-domain recommendation (CDR) has been proven a promising way to tackle the user cold-start problem, aiming to make recommendations for users in the target domain by transferring the user preference derived from the source domain.
1 code implementation • 12 Jan 2024 • Wenyuan Zhang, Xinghua Zhang, Shiyao Cui, Kun Huang, Xuebin Wang, Tingwen Liu
Aspect sentiment quad prediction (ASQP) aims to predict the quad sentiment elements for a given sentence, which is a critical task in the field of aspect-based sentiment analysis.
1 code implementation • 30 Nov 2023 • Shiyao Cui, Zhenyu Zhang, Yilong Chen, Wenyuan Zhang, Tianyun Liu, Siqi Wang, Tingwen Liu
The widespread use of generative artificial intelligence has heightened concerns about the potential harms posed by AI-generated texts, primarily stemming from factually incorrect, unfair, and toxic content.
no code implementations • 4 Aug 2023 • Shiyao Cui, Xin Cong, Jiawei Sheng, Xuebin Wang, Tingwen Liu, Jinqiao Shi
In this paper, we regard public pre-trained language models as knowledge bases and automatically mine the script-related knowledge via prompt-learning.
1 code implementation • 3 Aug 2023 • Xinghua Zhang, Bowen Yu, Haiyang Yu, Yangyu Lv, Tingwen Liu, Fei Huang, Hongbo Xu, Yongbin Li
Each perspective corresponds to the role of a specific LLM neuron in the first layer.
no code implementations • 18 Jun 2023 • Xin Cong, Bowen Yu, Mengcheng Fang, Tingwen Liu, Haiyang Yu, Zhongkai Hu, Fei Huang, Yongbin Li, Bin Wang
Inspired by the fact that a large amount of knowledge is stored in pretrained language models (PLMs) and can be retrieved explicitly, in this paper we propose MetaRetriever to retrieve task-specific knowledge from PLMs to enhance universal IE.
no code implementations • 20 Apr 2023 • Gehang Zhang, Bowen Yu, Jiangxia Cao, Xinghua Zhang, Jiawei Sheng, Chuan Zhou, Tingwen Liu
Graph contrastive learning (GCL) has recently achieved substantial advancements.
1 code implementation • 8 Apr 2023 • Jiangxia Cao, Xin Cong, Jiawei Sheng, Tingwen Liu, Bin Wang
Cross-Domain Sequential Recommendation (CDSR) aims to predict future interactions based on user's historical sequential interactions from multiple domains.
no code implementations • 5 Apr 2023 • Shiyao Cui, Jiangxia Cao, Xin Cong, Jiawei Sheng, Quangang Li, Tingwen Liu, Jinqiao Shi
For the first issue, a refinement-regularizer probes the information-bottleneck principle to balance the predictive evidence and noisy information, yielding expressive representations for prediction.
1 code implementation • COLING 2022 • Shiyao Cui, Jiawei Sheng, Xin Cong, Quangang Li, Tingwen Liu, Jinqiao Shi
Event Causality Identification (ECI), which aims to detect whether a causality relation exists between two given textual events, is an important task for event causality understanding.
1 code implementation • EMNLP 2022 • Mengxiao Song, Bowen Yu, Quangang Li, Yubin Wang, Tingwen Liu, Hongbo Xu
To be specific, an intent-slot co-occurrence graph is constructed based on the entire training corpus to globally discover correlation between intents and slots.
Ranked #7 on Slot Filling on MixATIS
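The global co-occurrence graph described above can be sketched as a simple counting pass over the training corpus: every intent observed in an utterance is connected to every slot observed in the same utterance. The function name and the `(intents, slots)` corpus format are illustrative assumptions, not the paper's API.

```python
from collections import Counter
from itertools import product

def build_cooccurrence_graph(corpus):
    """Sketch: build a global intent-slot co-occurrence graph from a training
    corpus. `corpus` is a list of (intents, slots) pairs, one per utterance;
    edge weights count how often an intent and slot appear together.
    """
    edges = Counter()
    for intents, slots in corpus:
        for intent, slot in product(intents, slots):
            edges[(intent, slot)] += 1
    return edges

# Toy ATIS-style corpus: two utterances, one with two intents.
corpus = [
    (["atis_flight"], ["fromloc.city", "toloc.city"]),
    (["atis_flight", "atis_airfare"], ["toloc.city"]),
]
graph = build_cooccurrence_graph(corpus)
# ("atis_flight", "toloc.city") co-occurs in both utterances.
```

Edge weights like these can then be normalized into adjacency strengths for a graph neural network that propagates intent-slot correlations globally rather than per-utterance.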
no code implementations • 29 Nov 2022 • Bowen Yu, Zhenyu Zhang, Jingyang Li, Haiyang Yu, Tingwen Liu, Jian Sun, Yongbin Li, Bin Wang
Open Information Extraction (OpenIE) facilitates the open-domain discovery of textual facts.
no code implementations • 14 Jul 2022 • Zhenyu Zhang, Bowen Yu, Haiyang Yu, Tingwen Liu, Cheng Fu, Jingyang Li, Chengguang Tang, Jian Sun, Yongbin Li
In this paper, we propose a Layout-aware document-level Information Extraction dataset, LIE, to facilitate the study of extracting both structural and semantic knowledge from visually rich documents (VRDs), so as to generate accurate responses in dialogue systems.
1 code implementation • SIGIR 2022 • Xin Cong, Jiawei Sheng, Shiyao Cui, Bowen Yu, Tingwen Liu, Bin Wang
To instantiate this strategy, we further propose a model, RelATE, which builds a dual-level attention to aggregate relation-relevant information to detect the relation occurrence and utilizes the annotated samples of the detected relations to extract the corresponding head/tail entities.
1 code implementation • 31 Mar 2022 • Jiangxia Cao, Jiawei Sheng, Xin Cong, Tingwen Liu, Bin Wang
As a promising way, Cross-Domain Recommendation (CDR) has attracted a surge of interest, which aims to transfer the user preferences observed in the source domain to make recommendations in the target domain.
no code implementations • 7 Feb 2022 • Shiyao Cui, Xin Cong, Bowen Yu, Tingwen Liu, Yucheng Wang, Jinqiao Shi
Meanwhile, rough reading is explored in a multi-round manner to discover undetected events, thus handling the multi-event problem.
1 code implementation • EMNLP 2021 • Xinghua Zhang, Bowen Yu, Tingwen Liu, Zhenyu Zhang, Jiawei Sheng, Mengge Xue, Hongbo Xu
Distantly supervised named entity recognition (DS-NER) efficiently reduces labor costs but meanwhile intrinsically suffers from label noise due to the strong assumption of distant supervision.
1 code implementation • 8 Jul 2021 • Jiangxia Cao, Xixun Lin, Xin Cong, Shu Guo, Hengzhu Tang, Tingwen Liu, Bin Wang
A temporal interaction network consists of a series of chronological interactions between users and items.
1 code implementation • Findings (ACL) 2021 • Jiawei Sheng, Shu Guo, Bowen Yu, Qian Li, Yiming Hei, Lihong Wang, Tingwen Liu, Hongbo Xu
Event extraction (EE) is a crucial information extraction task that aims to extract event information in texts.
1 code implementation • ACL 2021 • Yucheng Wang, Bowen Yu, Hongsong Zhu, Tingwen Liu, Nan Yu, Limin Sun
Named entity recognition (NER) remains challenging when entity mentions can be discontinuous.
no code implementations • NAACL 2021 • Yanzeng Li, Bowen Yu, Quangang Li, Tingwen Liu
In this paper, we introduce FITAnnotator, a generic web-based tool for efficient text annotation.
1 code implementation • 10 Dec 2020 • Jiangxia Cao, Xixun Lin, Shu Guo, Luchen Liu, Tingwen Liu, Bin Wang
However, the global properties of bipartite graph, including community structures of homogeneous nodes and long-range dependencies of heterogeneous nodes, are not well preserved.
1 code implementation • Findings (ACL) 2021 • Xin Cong, Shiyao Cui, Bowen Yu, Tingwen Liu, Yubin Wang, Bin Wang
Event detection tends to struggle when it needs to recognize novel event types with a few samples.
no code implementations • 3 Dec 2020 • Shiyao Cui, Bowen Yu, Xin Cong, Tingwen Liu, Quangang Li, Jinqiao Shi
A heterogeneous graph attention network is then introduced to propagate relational messages and enrich information interaction.
no code implementations • COLING 2020 • Bowen Yu, Mengge Xue, Zhenyu Zhang, Tingwen Liu, Yubin Wang, Bin Wang
Dependency trees have been shown to be effective in capturing long-range relations between target entities.
no code implementations • COLING 2020 • Mengge Xue, Bowen Yu, Tingwen Liu, Yue Zhang, Erli Meng, Bin Wang
Incorporating lexicons into character-level Chinese NER by lattices is proven effective to exploit rich word boundary information.
no code implementations • COLING 2020 • Zhenyu Zhang, Bowen Yu, Xiaobo Shu, Tingwen Liu, Hengzhu Tang, Yubin Wang, Li Guo
Document-level relation extraction (RE) poses new challenges over its sentence-level counterpart since it requires an adequate comprehension of the whole document and the multi-hop reasoning ability across multiple sentences to reach the final result.
1 code implementation • COLING 2020 • Yucheng Wang, Bowen Yu, Yueyang Zhang, Tingwen Liu, Hongsong Zhu, Limin Sun
To mitigate the issue, we propose in this paper a one-stage joint extraction model, namely, TPLinker, which is capable of discovering overlapping relations sharing one or both entities while immune from the exposure bias.
Ranked #2 on Relation Extraction on NYT11-HRL
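The one-stage, order-free idea behind TPLinker can be sketched as token-pair tagging: each relation gets its own matrix over token pairs, so triples that share an entity simply occupy different cells instead of competing in a decoding sequence. This is a simplified sketch covering only head-token links; TPLinker's full handshaking scheme also tags entity head-to-tail pairs, and the function name here is illustrative.

```python
def tag_token_pairs(triples, seq_len):
    """Sketch of one-stage token-pair linking in the spirit of TPLinker.

    Each (relation, subject_head, object_head) triple becomes a cell in a
    per-relation seq_len x seq_len tagging matrix, so overlapping relations
    that share one or both entities coexist without any decoding order
    (and hence without exposure bias between steps).
    """
    matrices = {}
    for rel, subj_head, obj_head in triples:
        m = matrices.setdefault(rel, [[0] * seq_len for _ in range(seq_len)])
        m[subj_head][obj_head] = 1
    return matrices

# Two triples sharing the subject token at index 0 coexist in the tags.
tags = tag_token_pairs([("born_in", 0, 3), ("works_for", 0, 6)], seq_len=8)
```

Decoding is then a single pass that reads every set cell back out, which is what makes the extraction one-stage.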
1 code implementation • EMNLP 2020 • Jiawei Sheng, Shu Guo, Zhenyu Chen, Juwei Yue, Lihong Wang, Tingwen Liu, Hongbo Xu
Recent attempts solve this problem by learning static representations of entities and references, ignoring their dynamic properties, i.e., entities may exhibit diverse roles within task relations, and references may make different contributions to queries.
1 code implementation • EMNLP 2020 • Mengge Xue, Bowen Yu, Zhenyu Zhang, Tingwen Liu, Yue Zhang, Bin Wang
More recently, Named Entity Recognition has achieved great advances aided by pre-training approaches such as BERT.
1 code implementation • 23 Jun 2020 • Xin Cong, Bowen Yu, Tingwen Liu, Shiyao Cui, Hengzhu Tang, Bin Wang
We first build a representation extractor to derive features for unlabeled data from the target domain (no test data is necessary) and then group them with a cluster miner.
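The "cluster miner" step described above can be sketched as grouping unlabeled target-domain features and treating cluster assignments as pseudo-labels. Plain nearest-centroid assignment stands in here for the actual clustering algorithm, which the excerpt does not specify; the function name and data layout are assumptions.

```python
def mine_clusters(features, centroids):
    """Sketch of a cluster miner: assign each unlabeled target-domain
    feature vector to its nearest centroid, yielding pseudo-labels that
    downstream training can consume. Squared Euclidean distance is used.
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centroids)), key=lambda k: sq_dist(f, centroids[k]))
            for f in features]

# Features near (0, 0) fall into cluster 0, features near (1, 1) into 1.
pseudo_labels = mine_clusters(
    features=[(0.1, 0.0), (0.9, 1.0), (0.0, 0.2)],
    centroids=[(0.0, 0.0), (1.0, 1.0)],
)
# → [0, 1, 0]
```

Because only the extractor and the unlabeled target features are needed, no test-time data or target-domain annotation enters the loop.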
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Shiyao Cui, Bowen Yu, Tingwen Liu, Zhen-Yu Zhang, Xuebin Wang, Jinqiao Shi
Previous studies on the task have verified the effectiveness of integrating syntactic dependency into graph convolutional networks.
1 code implementation • ACL 2020 • Yanzeng Li, Bowen Yu, Mengge Xue, Tingwen Liu
Most Chinese pre-trained models take the character as the basic unit and learn representations according to the character's external contexts, ignoring the semantics expressed in the word, which is the smallest meaningful utterance in Chinese.
1 code implementation • 10 Sep 2019 • Bowen Yu, Zhen-Yu Zhang, Xiaobo Shu, Yubin Wang, Tingwen Liu, Bin Wang, Sujian Li
Joint extraction of entities and relations aims to detect entity pairs along with their relations using a single model.
Ranked #1 on Relation Extraction on NYT-single
1 code implementation • IJCAI 2019 • Bowen Yu, Zhen-Yu Zhang, Tingwen Liu, Bin Wang, Sujian Li, Quangang Li
Relation extraction studies the issue of predicting semantic relations between pairs of entities in sentences.
Ranked #30 on Relation Extraction on TACRED