1 code implementation • Findings (NAACL) 2022 • Xiang Chen, Ningyu Zhang, Lei LI, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
Multimodal named entity recognition and relation extraction (MNER and MRE) form a fundamental and crucial branch of information extraction.
1 code implementation • 1 Mar 2023 • Zheng Yuan, Qiao Jin, Chuanqi Tan, Zhengyun Zhao, Hongyi Yuan, Fei Huang, Songfang Huang
We propose to retrieve similar image-text pairs based on ITC from pretraining datasets and introduce a novel retrieval-attention module to fuse the representation of the image and the question with the retrieved images and texts.
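A minimal sketch of what such a retrieval-attention fusion step could look like, assuming standard multi-head cross-attention between the query representation and the retrieved pair embeddings; the module name and tensor shapes are illustrative, not the paper's implementation.

```python
# Illustrative sketch, not the paper's code: fuse a query representation with
# embeddings of retrieved image-text pairs via multi-head cross-attention.
import torch
import torch.nn as nn

class RetrievalAttention(nn.Module):
    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_repr, retrieved_reprs):
        # query_repr:      (batch, 1, dim)  joint image+question embedding
        # retrieved_reprs: (batch, k, dim)  embeddings of k retrieved pairs
        fused, _ = self.cross_attn(query_repr, retrieved_reprs, retrieved_reprs)
        return self.norm(query_repr + fused)  # residual connection + layer norm

# Toy usage with random tensors.
module = RetrievalAttention()
q, r = torch.randn(2, 1, 768), torch.randn(2, 5, 768)
print(module(q, r).shape)  # torch.Size([2, 1, 768])
```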
no code implementations • 2 Feb 2023 • Zheng Yuan, Yaoyun Zhang, Chuanqi Tan, Wei Wang, Fei Huang, Songfang Huang
To alleviate this limitation, we propose Moleformer, a novel Transformer architecture that takes nodes (atoms) and edges (bonds and nonbonding atom pairs) as inputs and models the interactions among them using rotational and translational invariant geometry-aware spatial encoding.
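As an illustration of what a rotation- and translation-invariant spatial encoding can look like (an assumption about the general idea, not Moleformer's actual encoding), pairwise interatomic distances can be expanded with Gaussian basis functions and used, for example, as attention biases.

```python
# Hedged sketch: pairwise distances are invariant to rotation and translation,
# so smooth features built from them inherit that invariance.
import torch

def gaussian_distance_encoding(coords, num_basis=16, cutoff=5.0):
    # coords: (n_atoms, 3) Cartesian positions
    dist = torch.cdist(coords, coords)                # (n, n) pairwise distances
    centers = torch.linspace(0.0, cutoff, num_basis)  # Gaussian centers
    width = cutoff / num_basis
    # (n, n, num_basis) distance features for every atom pair
    return torch.exp(-((dist.unsqueeze(-1) - centers) ** 2) / (2 * width ** 2))

coords = torch.randn(4, 3)
print(gaussian_distance_encoding(coords).shape)  # torch.Size([4, 4, 16])
```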
2 code implementations • 25 Jan 2023 • Xiang Chen, Lei LI, Shuofei Qiao, Ningyu Zhang, Chuanqi Tan, Yong Jiang, Fei Huang, Huajun Chen
Typical previous solutions obtain a NER model by training pre-trained language models (PLMs) on data from a rich-resource domain and then adapt it to the target domain.
1 code implementation • 20 Dec 2022 • Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Fei Huang, Songfang Huang
We propose SeqDiffuSeq, a text diffusion model for sequence-to-sequence generation.
2 code implementations • 19 Dec 2022 • Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen
Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis and negotiation.
no code implementations • 17 Dec 2022 • Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Fei Huang, Songfang Huang
Unlike previous works that only add noise to inputs or parameters, we argue that the hidden representations of Transformer layers convey more diverse and meaningful language information.
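A minimal sketch of this idea, assuming the perturbation is simply Gaussian noise added to each layer's output via forward hooks; the hook targets and noise scale below are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch: perturb the hidden states of each Transformer layer during training.
import torch
import torch.nn as nn

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2
)

def add_hidden_noise(module, inputs, output, sigma=1e-5):
    # Add small Gaussian noise to the layer output, only in training mode.
    if module.training:
        return output + sigma * torch.randn_like(output)
    return output

for layer in encoder.layers:
    layer.register_forward_hook(add_hidden_noise)

x = torch.randn(2, 10, 64)
encoder.train()
print(encoder(x).shape)  # torch.Size([2, 10, 64])
```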
no code implementations • 17 Oct 2022 • Jianing Wang, Chengcheng Han, Chengyu Wang, Chuanqi Tan, Minghui Qiu, Songfang Huang, Jun Huang, Ming Gao
Few-shot Named Entity Recognition (NER) aims to identify named entities with very little annotated data.
2 code implementations • 29 May 2022 • Xiang Chen, Lei LI, Ningyu Zhang, Xiaozhuan Liang, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
Specifically, vanilla prompt learning may struggle to utilize atypical instances by rote memorization during fully-supervised training, or may overfit shallow patterns with low-shot data.
2 code implementations • 23 May 2022 • Yuchao Li, Fuli Luo, Chuanqi Tan, Mengdi Wang, Songfang Huang, Shen Li, Junjie Bai
With the dramatically increased number of parameters in language models, sparsity methods have received ever-increasing research attention for compressing and accelerating these models.
1 code implementation • 11 May 2022 • Jianing Wang, Chengyu Wang, Fuli Luo, Chuanqi Tan, Minghui Qiu, Fei Yang, Qiuhui Shi, Songfang Huang, Ming Gao
Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts.
1 code implementation • 7 May 2022 • Xiang Chen, Ningyu Zhang, Lei LI, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
To deal with these issues, we propose a novel Hierarchical Visual Prefix fusion NeTwork (HVPNeT) for visual-enhanced entity and relation extraction, aiming to achieve more effective and robust performance.
1 code implementation • 4 May 2022 • Xiang Chen, Lei LI, Ningyu Zhang, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
Note that the previous parametric learning paradigm can be viewed as memorization, which regards the training data as a book and inference as a closed-book test.
1 code implementation • 4 May 2022 • Xiang Chen, Ningyu Zhang, Lei LI, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen
Since most MKGs are far from complete, extensive knowledge graph completion studies have been proposed, focusing on multimodal entity extraction, relation extraction, and link prediction.
1 code implementation • 9 Apr 2022 • Xiaozhuan Liang, Ningyu Zhang, Siyuan Cheng, Zhenru Zhang, Chuanqi Tan, Huajun Chen
Pretrained language models can be effectively stimulated by textual prompts or demonstrations, especially in low-data scenarios.
1 code implementation • ACL 2022 • Zheng Yuan, Chuanqi Tan, Songfang Huang
Automatic ICD coding is defined as assigning disease codes to electronic medical records (EMRs).
Ranked #5 on Medical Code Prediction on MIMIC-III
1 code implementation • 10 Jan 2022 • Ningyu Zhang, Xin Xu, Liankuan Tao, Haiyang Yu, Hongbin Ye, Shuofei Qiao, Xin Xie, Xiang Chen, Zhoubo Li, Lei LI, Xiaozhuan Liang, Yunzhi Yao, Shumin Deng, Peng Wang, Wen Zhang, Zhenru Zhang, Chuanqi Tan, Qiang Chen, Feiyu Xiong, Fei Huang, Guozhou Zheng, Huajun Chen
We present an open-source and extensible knowledge extraction toolkit DeepKE, supporting complicated low-resource, document-level and multimodal scenarios in the knowledge base population.
Attribute Extraction • Cross-Domain Named Entity Recognition • +4
no code implementations • 2 Dec 2021 • Shumin Deng, Ningyu Zhang, Jiacheng Yang, Hongbin Ye, Chuanqi Tan, Mosha Chen, Songfang Huang, Fei Huang, Huajun Chen
Previous works leverage logical forms to facilitate logical knowledge-conditioned text generation.
1 code implementation • Findings (ACL) 2022 • Zheng Yuan, Chuanqi Tan, Songfang Huang, Fei Huang
To fuse these heterogeneous factors, we propose a novel triaffine mechanism including triaffine attention and scoring.
Ranked #1 on Nested Named Entity Recognition on TAC-KBP 2017
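A generic triaffine scoring function can be written as a learned third-order tensor contracted with three input vectors; the sketch below shows this general form and is an assumption about the parameterization, not the paper's exact module.

```python
# Sketch of generic triaffine scoring: a third-order weight tensor combines
# three input vectors (with appended bias dimensions) into a scalar score.
import torch
import torch.nn as nn

class Triaffine(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.zeros(dim + 1, dim + 1, dim + 1))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, u, v, w):
        # u, v, w: (batch, dim) -> scores: (batch,)
        ones = u.new_ones(u.size(0), 1)
        u, v, w = [torch.cat([t, ones], dim=-1) for t in (u, v, w)]
        return torch.einsum("bi,bj,bk,ijk->b", u, v, w, self.weight)

scorer = Triaffine(dim=32)
u, v, w = (torch.randn(4, 32) for _ in range(3))
print(scorer(u, v, w).shape)  # torch.Size([4])
```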
no code implementations • 1 Oct 2021 • Hongbin Ye, Ningyu Zhang, Zhen Bi, Shumin Deng, Chuanqi Tan, Hui Chen, Fei Huang, Huajun Chen
Event argument extraction (EAE) is an important task for information extraction to discover specific argument roles.
3 code implementations • EMNLP 2021 • Runxin Xu, Fuli Luo, Zhiyuan Zhang, Chuanqi Tan, Baobao Chang, Songfang Huang, Fei Huang
Recent pretrained language models extend from millions to billions of parameters.
1 code implementation • COLING 2022 • Xiang Chen, Lei LI, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen, Ningyu Zhang
Most NER methods rely on extensive labeled data for model training and therefore struggle in low-resource scenarios with limited training data.
2 code implementations • ICLR 2022 • Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, Huajun Chen
Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners.
Ranked #1 on Few-Shot Learning on SST-2 Binary classification
1 code implementation • ICCV 2021 • Zheng Yuan, Jie Zhang, Yunpei Jia, Chuanqi Tan, Tao Xue, Shiguang Shan
In recent years, research on adversarial attacks has become a hot topic.
1 code implementation • ACL 2022 • Ningyu Zhang, Mosha Chen, Zhen Bi, Xiaozhuan Liang, Lei LI, Xin Shang, Kangping Yin, Chuanqi Tan, Jian Xu, Fei Huang, Luo Si, Yuan Ni, Guotong Xie, Zhifang Sui, Baobao Chang, Hui Zong, Zheng Yuan, Linfeng Li, Jun Yan, Hongying Zan, Kunli Zhang, Buzhou Tang, Qingcai Chen
Artificial Intelligence (AI), along with the recent progress in biomedical language understanding, is gradually changing medical practice.
Ranked #1 on Medical Relation Extraction on CMeIE
2 code implementations • 7 Jun 2021 • Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen, Fei Huang, Luo Si, Huajun Chen
Specifically, we leverage an encoder module to capture the context information of entities and a U-shaped segmentation module over the image-style feature map to capture global interdependency among triples.
Ranked #4 on Relation Extraction on ReDocRED
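A minimal sketch of a U-shaped module over an entity-pair feature map, with one downsampling path, a bottleneck, and an upsampling path fused through a skip connection; the channel sizes and layer choices are illustrative assumptions, not the released architecture.

```python
# Sketch: treat the entity-pair matrix as an image-style feature map and pass it
# through a tiny U-shaped convolutional module to capture global interdependency.
import torch
import torch.nn as nn

class TinyUShaped(nn.Module):
    def __init__(self, in_ch=64, hid=128):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, hid, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)
        )
        self.bottleneck = nn.Sequential(nn.Conv2d(hid, hid, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(hid, in_ch, 2, stride=2)
        self.out = nn.Conv2d(2 * in_ch, in_ch, 1)  # fuse the skip connection

    def forward(self, x):
        # x: (batch, in_ch, n_entities, n_entities) entity-pair feature map
        d = self.down(x)
        u = self.up(self.bottleneck(d))
        return self.out(torch.cat([x, u], dim=1))

m = TinyUShaped()
feature_map = torch.randn(1, 64, 32, 32)  # 32 entities, 64-dim pair features
print(m(feature_map).shape)               # torch.Size([1, 64, 32, 32])
```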
1 code implementation • NAACL (BioNLP) 2021 • Zheng Yuan, Yijia Liu, Chuanqi Tan, Songfang Huang, Fei Huang
To this end, we propose KeBioLM, a biomedical pretrained language model that explicitly leverages knowledge from the UMLS knowledge bases.
Ranked #1 on Named Entity Recognition (NER) on JNLPBA
1 code implementation • 15 Apr 2021 • Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen
To this end, we focus on incorporating knowledge among relation labels into prompt-tuning for relation extraction and propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt).
Ranked #5 on Dialog Relation Extraction on DialogRE (F1 (v1) metric)
1 code implementation • NAACL 2021 • Kun Liu, Yao Fu, Chuanqi Tan, Mosha Chen, Ningyu Zhang, Songfang Huang, Sheng Gao
This work studies NER under a noisy labeled setting with calibrated confidence estimation.
1 code implementation • ICLR 2021 • Boli Chen, Yao Fu, Guangwei Xu, Pengjun Xie, Chuanqi Tan, Mosha Chen, Liping Jing
We introduce the Poincare probe, a structural probe that projects contextualized embeddings into a Poincare subspace with explicitly defined hierarchies.
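A hedged sketch of the core operations such a probe could involve, assuming a linear projection followed by the exponential map at the origin of the Poincare ball and the standard Poincare distance; this is an assumed form, not the authors' implementation.

```python
# Sketch: project embeddings, map them into the Poincare ball, measure distances.
import torch
import torch.nn as nn

class PoincareProbe(nn.Module):
    def __init__(self, in_dim=768, probe_dim=64, c=1.0):
        super().__init__()
        self.proj = nn.Linear(in_dim, probe_dim)
        self.c = c

    def expmap0(self, x):
        # Exponential map at the origin of the Poincare ball with curvature -c.
        sqrt_c = self.c ** 0.5
        norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-8)
        return torch.tanh(sqrt_c * norm) * x / (sqrt_c * norm)

    def distance(self, x, y):
        # Poincare distance between points inside the unit ball (c = 1).
        sq = lambda t: (t ** 2).sum(dim=-1)
        num = 2 * sq(x - y)
        den = (1 - sq(x)) * (1 - sq(y))
        return torch.acosh(1 + num / den.clamp_min(1e-8))

    def forward(self, emb_a, emb_b):
        a, b = self.expmap0(self.proj(emb_a)), self.expmap0(self.proj(emb_b))
        return self.distance(a, b)

probe = PoincareProbe()
print(probe(torch.randn(4, 768), torch.randn(4, 768)).shape)  # torch.Size([4])
```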
1 code implementation • 1 Apr 2021 • Luoqiu Li, Xiang Chen, Zhen Bi, Xin Xie, Shumin Deng, Ningyu Zhang, Chuanqi Tan, Mosha Chen, Huajun Chen
Recent neural-based relation extraction approaches, though achieving promising improvement on benchmark datasets, have reported their vulnerability towards adversarial attacks.
no code implementations • 10 Feb 2021 • Qiao Jin, Zheng Yuan, Guangzhi Xiong, Qianlan Yu, Huaiyuan Ying, Chuanqi Tan, Mosha Chen, Songfang Huang, Xiaozhong Liu, Sheng Yu
Automatic Question Answering (QA) has been successfully applied in various domains such as search engines and chatbots.
1 code implementation • 15 Dec 2020 • Yao Fu, Chuanqi Tan, Mosha Chen, Songfang Huang, Fei Huang
With the TreeCRF, we obtain a uniform way to jointly model the observed and the latent nodes.
Ranked #11 on Nested Named Entity Recognition on ACE 2005
1 code implementation • NeurIPS 2020 • Yao Fu, Chuanqi Tan, Bin Bi, Mosha Chen, Yansong Feng, Alexander M. Rush
Learning to control the structure of sentences is a challenging problem in text generation.
1 code implementation • EMNLP 2020 • Qiao Jin, Chuanqi Tan, Mosha Chen, Xiaozhong Liu, Songfang Huang
In the CTRP framework, a model takes a PICO-formatted clinical trial proposal with its background as input and predicts the result, i.e., how the Intervention group compares with the Comparison group in terms of the measured Outcome in the studied Population.
no code implementations • 14 Sep 2020 • Hongbin Ye, Ningyu Zhang, Shumin Deng, Mosha Chen, Chuanqi Tan, Fei Huang, Huajun Chen
In this paper, we revisit the end-to-end triple extraction task for sequence generation.
Ranked #9 on Relation Extraction on WebNLG
no code implementations • 25 Apr 2019 • Chuanqi Tan, Fuchun Sun, Tao Kong, Bin Fang, Wenchang Zhang
Different functional areas of the human brain play different roles in brain activity, which has not received sufficient research attention in the brain-computer interface (BCI) field.
no code implementations • 12 Sep 2018 • Hangbo Bao, Shaohan Huang, Furu Wei, Lei Cui, Yu Wu, Chuanqi Tan, Songhao Piao, Ming Zhou
In this paper, we study a novel task that learns to compose music from natural language.
no code implementations • 6 Aug 2018 • Chuanqi Tan, Fuchun Sun, Wenchang Zhang
First, we model cognitive events based on EEG data by characterizing the data using EEG optical flow, which is designed to preserve multimodal EEG information in a uniform representation.
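For illustration only (an assumed preprocessing step, not the paper's pipeline): if consecutive EEG topographic maps are rendered as grayscale frames, dense optical flow between them can be computed with a standard algorithm such as Farneback's.

```python
# Sketch: dense optical flow between two consecutive EEG "frames".
import numpy as np
import cv2

frame_prev = (np.random.rand(32, 32) * 255).astype(np.uint8)  # EEG map at time t
frame_next = (np.random.rand(32, 32) * 255).astype(np.uint8)  # EEG map at time t+1

flow = cv2.calcOpticalFlowFarneback(
    frame_prev, frame_next, None,
    pyr_scale=0.5, levels=3, winsize=15, iterations=3,
    poly_n=5, poly_sigma=1.2, flags=0,
)
print(flow.shape)  # (32, 32, 2): per-pixel displacement vectors
```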
no code implementations • 6 Aug 2018 • Chuanqi Tan, Fuchun Sun, Tao Kong, Wenchang Zhang, Chao Yang, Chunfang Liu
As a new classification platform, deep learning has recently received increasing attention from researchers and has been successfully applied to many domains.
no code implementations • 24 Jul 2018 • Chuanqi Tan, Fuchun Sun, Wenchang Zhang, Jianhua Chen, Chunfang Liu
Herein, we propose a novel approach to modeling cognitive events from EEG data by reducing it to a video classification problem, which is designed to preserve the multimodal information of EEG.
1 code implementation • IJCAI 2018 • Chuanqi Tan, Furu Wei, Wenhui Wang, Weifeng Lv, Ming Zhou
Modeling sentence pairs plays a vital role in judging the relationship between two sentences, such as paraphrase identification, natural language inference, and answer sentence selection.
Ranked #11 on Paraphrase Identification on Quora Question Pairs (Accuracy metric)
no code implementations • 15 Jun 2017 • Chuanqi Tan, Furu Wei, Nan Yang, Bowen Du, Weifeng Lv, Ming Zhou
We build the answer extraction model with state-of-the-art neural networks for single passage reading comprehension, and propose an additional task of passage ranking to help answer extraction in multiple passages.
no code implementations • EMNLP 2017 • Chuanqi Tan, Furu Wei, Pengjie Ren, Weifeng Lv, Ming Zhou
The key idea is to search sentences similar to a query from Wikipedia articles and directly use the human-annotated entities in the similar sentences as candidate entities for the query.
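A toy sketch of this retrieval idea, using TF-IDF similarity over a tiny index; the sentences, entity annotations, and helper names are invented for illustration.

```python
# Sketch: retrieve the sentences most similar to a query and reuse their
# human-annotated entities as candidate entities for the query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each indexed sentence carries the entities annotated in it (hypothetical data).
indexed = [
    ("Michael Jordan played basketball for the Chicago Bulls.",
     {"Michael_Jordan", "Chicago_Bulls"}),
    ("Michael I. Jordan is a professor of machine learning at Berkeley.",
     {"Michael_I._Jordan", "UC_Berkeley"}),
]

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform([sentence for sentence, _ in indexed])

def candidate_entities(query: str, top_k: int = 1) -> set:
    sims = cosine_similarity(vectorizer.transform([query]), matrix)[0]
    best = sims.argsort()[::-1][:top_k]
    return set().union(*(indexed[i][1] for i in best))

print(candidate_entities("Jordan won six NBA championships with the Bulls."))
```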
4 code implementations • 6 Apr 2017 • Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, Ming Zhou
Automatic question generation aims to generate questions from a text passage where the generated questions can be answered by certain sub-spans of the given passage.
Ranked #13 on Question Generation on SQuAD1.1