Search Results for author: Chuanqi Tan

Found 59 papers, 39 papers with code

Neural Question Generation from Text: A Preliminary Study

6 code implementations • 6 Apr 2017 • Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao, Ming Zhou

Automatic question generation aims to generate questions from a text passage where the generated questions can be answered by certain sub-spans of the given passage.

Position · Question Generation · +2

Entity Linking for Queries by Searching Wikipedia Sentences

no code implementations • EMNLP 2017 • Chuanqi Tan, Furu Wei, Pengjie Ren, Weifeng Lv, Ming Zhou

The key idea is to search sentences similar to a query from Wikipedia articles and directly use the human-annotated entities in the similar sentences as candidate entities for the query.

Entity Linking · Word Embeddings
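
The candidate-generation step lends itself to a short sketch: rank annotated sentences by similarity to the query and pool their entities as candidates. The mini-corpus and token-overlap scorer below are illustrative stand-ins, not the authors' retrieval setup.

```python
# Toy sketch of candidate generation: retrieve Wikipedia sentences similar
# to the query, then reuse their annotated entities as candidate links.
# The mini-corpus and the overlap scorer are illustrative stand-ins.
from collections import Counter

corpus = [
    ("Michael Jordan played fifteen seasons in the NBA.", ["Michael Jordan", "NBA"]),
    ("Michael I. Jordan is a researcher in machine learning.", ["Michael I. Jordan", "machine learning"]),
]

def overlap_score(query_tokens, sent_tokens):
    # Simple token-overlap proxy for a real retrieval model such as BM25.
    return sum((Counter(query_tokens) & Counter(sent_tokens)).values())

def candidate_entities(query, k=2):
    q = query.lower().split()
    ranked = sorted(corpus, key=lambda p: overlap_score(q, p[0].lower().split()), reverse=True)
    candidates = []
    for sentence, entities in ranked[:k]:
        candidates.extend(e for e in entities if e not in candidates)
    return candidates

print(candidate_entities("michael jordan machine learning"))
```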

S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension

no code implementations • 15 Jun 2017 • Chuanqi Tan, Furu Wei, Nan Yang, Bowen Du, Weifeng Lv, Ming Zhou

We build the answer extraction model with state-of-the-art neural networks for single passage reading comprehension, and propose an additional task of passage ranking to help answer extraction in multiple passages.

Answer Generation · Machine Reading Comprehension · +1
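
A minimal sketch of the multi-task idea, combining a span-extraction loss with an auxiliary passage-ranking loss; the shapes, weighting, and names are assumptions, not the paper's configuration.

```python
# Minimal sketch of joint training: answer-span extraction plus passage
# ranking, combined with a weighting hyperparameter. Shapes and the
# weight value are illustrative assumptions.
import torch
import torch.nn.functional as F

def joint_loss(start_logits, end_logits, start_gold, end_gold,
               rank_logits, gold_passage, rank_weight=0.5):
    span_loss = F.cross_entropy(start_logits, start_gold) + \
                F.cross_entropy(end_logits, end_gold)
    rank_loss = F.cross_entropy(rank_logits, gold_passage)
    return span_loss + rank_weight * rank_loss

# Example: batch of 2, passages flattened to 100 token positions, 5 passages.
loss = joint_loss(torch.randn(2, 100), torch.randn(2, 100),
                  torch.tensor([3, 7]), torch.tensor([5, 9]),
                  torch.randn(2, 5), torch.tensor([0, 2]))
print(loss.item())
```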

Multiway Attention Networks for Modeling Sentence Pairs

1 code implementation • IJCAI 2018 • Chuanqi Tan, Furu Wei, Wenhui Wang, Weifeng Lv, Ming Zhou

Modeling sentence pairs plays a vital role in judging the relationship between two sentences, as in paraphrase identification, natural language inference, and answer sentence selection.

Natural Language Inference · Paraphrase Identification · +1
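
A sketch of the multiway idea using simplified linear forms of four common matching functions (concatenation, bilinear, dot product, element-wise minus); the paper's exact formulations differ, so treat this as illustrative.

```python
# Sketch of four matching functions often used for sentence-pair
# attention: concat, bilinear, dot, and element-wise minus.
import torch
import torch.nn as nn

class MultiwayAttention(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.w_cat = nn.Linear(2 * d, 1)   # concat attention
        self.w_bi = nn.Linear(d, d)        # bilinear attention
        self.w_min = nn.Linear(d, 1)       # minus attention

    def forward(self, p, q):
        # p: (B, Lp, d) premise states, q: (B, Lq, d) hypothesis states
        B, Lp, d = p.shape
        Lq = q.size(1)
        pe = p.unsqueeze(2).expand(B, Lp, Lq, d)
        qe = q.unsqueeze(1).expand(B, Lp, Lq, d)
        s_cat = self.w_cat(torch.cat([pe, qe], dim=-1)).squeeze(-1)
        s_bi = torch.einsum('bid,bjd->bij', self.w_bi(p), q)
        s_dot = torch.einsum('bid,bjd->bij', p, q)
        s_min = self.w_min(pe - qe).squeeze(-1)
        # Each score matrix is softmax-normalized over q and used to
        # aggregate q into a p-aligned representation.
        att = [torch.softmax(s, dim=-1) @ q for s in (s_cat, s_bi, s_dot, s_min)]
        return torch.cat(att, dim=-1)  # (B, Lp, 4d)

m = MultiwayAttention(8)
out = m(torch.randn(2, 5, 8), torch.randn(2, 7, 8))
print(out.shape)  # torch.Size([2, 5, 32])
```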

Multimodal Classification with Deep Convolutional-Recurrent Neural Networks for Electroencephalography

no code implementations • 24 Jul 2018 • Chuanqi Tan, Fuchun Sun, Wenchang Zhang, Jianhua Chen, Chunfang Liu

Herein, we propose a novel approach to modeling cognitive events from EEG data by reducing the task to a video classification problem, designed to preserve the multimodal information of EEG.

Brain Computer Interface · Classification · +5
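
The reduction can be sketched in a few lines: lay each time step's channel readings out on a 2D scalp grid so the recording becomes a sequence of frames for a CNN-RNN pipeline. The 4x4 electrode layout is a toy assumption.

```python
# Sketch of the reduction: arrange each EEG time step's channel readings
# on a 2D scalp grid so the recording becomes a short "video".
import numpy as np

def eeg_to_video(eeg, grid=(4, 4)):
    # eeg: (time, channels) with channels == grid[0] * grid[1]
    t, c = eeg.shape
    assert c == grid[0] * grid[1]
    return eeg.reshape(t, grid[0], grid[1])  # (frames, height, width)

recording = np.random.randn(128, 16)  # 128 time steps, 16 electrodes
video = eeg_to_video(recording)
print(video.shape)  # (128, 4, 4)
```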

A Survey on Deep Transfer Learning

no code implementations • 6 Aug 2018 • Chuanqi Tan, Fuchun Sun, Tao Kong, Wenchang Zhang, Chao Yang, Chunfang Liu

As a new classification platform, deep learning has recently received increasing attention from researchers and has been successfully applied to many domains.

General Classification · Transfer Learning

Deep Transfer Learning for EEG-based Brain Computer Interface

no code implementations • 6 Aug 2018 • Chuanqi Tan, Fuchun Sun, Wenchang Zhang

First, we model cognitive events based on EEG data by characterizing the data using EEG optical flow, which is designed to preserve multimodal EEG information in a uniform representation.

Brain Computer Interface · EEG · +3

Neural Melody Composition from Lyrics

no code implementations • 12 Sep 2018 • Hangbo Bao, Shaohan Huang, Furu Wei, Lei Cui, Yu Wu, Chuanqi Tan, Songhao Piao, Ming Zhou

In this paper, we study a novel task: learning to compose music from natural language.

Attention-based Transfer Learning for Brain-computer Interface

no code implementations • 25 Apr 2019 • Chuanqi Tan, Fuchun Sun, Tao Kong, Bin Fang, Wenchang Zhang

Different functional areas of the human brain play different roles in brain activity, which has not been paid sufficient research attention in the brain-computer interface (BCI) field.

Brain Computer Interface · Classification · +4

Predicting Clinical Trial Results by Implicit Evidence Integration

1 code implementation • EMNLP 2020 • Qiao Jin, Chuanqi Tan, Mosha Chen, Xiaozhong Liu, Songfang Huang

In the CTRP framework, a model takes a PICO-formatted clinical trial proposal with its background as input and predicts the result, i.e., how the Intervention group compares with the Comparison group in terms of the measured Outcome in the studied Population.

PICO
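
A sketch of what a PICO-formatted input might look like when serialized for the model; the field names follow the PICO convention, but the template itself is an assumption.

```python
# Illustrative sketch of a PICO-formatted proposal serialized as model
# input text; the template and example values are assumptions.
from dataclasses import dataclass

@dataclass
class ClinicalTrialProposal:
    population: str
    intervention: str
    comparison: str
    outcome: str
    background: str

def to_model_input(p: ClinicalTrialProposal) -> str:
    return (f"Population: {p.population} Intervention: {p.intervention} "
            f"Comparison: {p.comparison} Outcome: {p.outcome} "
            f"Background: {p.background}")

proposal = ClinicalTrialProposal(
    population="adults with type 2 diabetes",
    intervention="drug A",
    comparison="placebo",
    outcome="HbA1c reduction at 24 weeks",
    background="Prior trials of similar agents showed modest effects.",
)
print(to_model_input(proposal))
```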

Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction

1 code implementation • 1 Apr 2021 • Luoqiu Li, Xiang Chen, Zhen Bi, Xin Xie, Shumin Deng, Ningyu Zhang, Chuanqi Tan, Mosha Chen, Huajun Chen

Recent neural-based relation extraction approaches, though achieving promising improvements on benchmark datasets, have been shown to be vulnerable to adversarial attacks.

Relation · Relation Extraction

Probing BERT in Hyperbolic Spaces

1 code implementation • ICLR 2021 • Boli Chen, Yao Fu, Guangwei Xu, Pengjun Xie, Chuanqi Tan, Mosha Chen, Liping Jing

We introduce a Poincaré probe, a structural probe projecting these embeddings into a Poincaré subspace with explicitly defined hierarchies.

Word Embeddings
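
The geometry behind the probe can be illustrated with the standard Poincaré distance, which grows rapidly near the ball's boundary and thus naturally encodes hierarchy depth; the projection details here are conventional choices, not the paper's exact probe.

```python
# Sketch of the hyperbolic geometry behind a Poincare probe: map
# embeddings into the open unit ball and measure Poincare distance.
import numpy as np

def project_to_ball(x, eps=1e-5):
    # Clip the norm so the point stays strictly inside the unit ball.
    norm = np.linalg.norm(x)
    return x if norm < 1 - eps else x * (1 - eps) / norm

def poincare_distance(u, v):
    diff = np.linalg.norm(u - v) ** 2
    denom = (1 - np.linalg.norm(u) ** 2) * (1 - np.linalg.norm(v) ** 2)
    return np.arccosh(1 + 2 * diff / denom)

root = project_to_ball(np.array([0.1, 0.0]))  # near the origin: shallow node
leaf = project_to_ball(np.array([0.9, 0.3]))  # near the boundary: deep node
print(poincare_distance(root, leaf))
```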

KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction

1 code implementation • 15 Apr 2021 • Xiang Chen, Ningyu Zhang, Xin Xie, Shumin Deng, Yunzhi Yao, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen

To this end, we focus on incorporating knowledge among relation labels into prompt-tuning for relation extraction and propose a Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt).

Ranked #5 on Dialog Relation Extraction on DialogRE (F1 (v1) metric)

Dialog Relation Extraction · Language Modelling · +3

Document-level Relation Extraction as Semantic Segmentation

2 code implementations • 7 Jun 2021 • Ningyu Zhang, Xiang Chen, Xin Xie, Shumin Deng, Chuanqi Tan, Mosha Chen, Fei Huang, Luo Si, Huajun Chen

Specifically, we leverage an encoder module to capture the context information of entities and a U-shaped segmentation module over the image-style feature map to capture global interdependency among triples.

Document-level Relation Extraction · Relation · +2
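
A toy sketch of the "relation map as image" idea: an entity-pair feature map is passed through a small U-shaped module (downsample, upsample, skip connection) to produce per-pair relation logits. Layer sizes are illustrative, not the paper's configuration.

```python
# Toy U-shaped module over an entity-pair feature map of shape
# (channels, num_entities, num_entities), as in image segmentation.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self, c_in, c_mid, num_relations):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(c_in, c_mid, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(c_mid, c_in, 2, stride=2)
        self.head = nn.Conv2d(2 * c_in, num_relations, 1)

    def forward(self, x):
        skip = x
        x = self.up(self.down(x))                       # down then up
        return self.head(torch.cat([x, skip], dim=1))   # per-pair relation logits

pair_map = torch.randn(1, 16, 32, 32)  # 32 entities, 16-dim pair features
logits = TinyUNet(16, 32, 5)(pair_map)
print(logits.shape)  # torch.Size([1, 5, 32, 32])
```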

Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners

4 code implementations • ICLR 2022 • Ningyu Zhang, Luoqiu Li, Xiang Chen, Shumin Deng, Zhen Bi, Chuanqi Tan, Fei Huang, Huajun Chen

Large-scale pre-trained language models have contributed significantly to natural language processing by demonstrating remarkable abilities as few-shot learners.

Language Modelling · Prompt Engineering

Learning to Ask for Data-Efficient Event Argument Extraction

no code implementations • 1 Oct 2021 • Hongbin Ye, Ningyu Zhang, Zhen Bi, Shumin Deng, Chuanqi Tan, Hui Chen, Fei Huang, Huajun Chen

Event argument extraction (EAE) is an important task for information extraction to discover specific argument roles.

Event Argument Extraction

Contrastive Demonstration Tuning for Pre-trained Language Models

1 code implementation • 9 Apr 2022 • Xiaozhuan Liang, Ningyu Zhang, Siyuan Cheng, Zhenru Zhang, Chuanqi Tan, Huajun Chen

Pretrained language models can be effectively stimulated by textual prompts or demonstrations, especially in low-data scenarios.

Hybrid Transformer with Multi-level Fusion for Multimodal Knowledge Graph Completion

1 code implementation • 4 May 2022 • Xiang Chen, Ningyu Zhang, Lei Li, Shumin Deng, Chuanqi Tan, Changliang Xu, Fei Huang, Luo Si, Huajun Chen

Since most MKGs are far from complete, extensive knowledge graph completion studies have been proposed focusing on the multimodal entity, relation extraction and link prediction.

Information Retrieval · Link Prediction · +4

Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning

1 code implementation • 4 May 2022 • Xiang Chen, Lei Li, Ningyu Zhang, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen

Note that the previous parametric learning paradigm can be viewed as memorization, treating the training data as a book and inference as a closed-book test.

Few-Shot Learning · Memorization · +3

Good Visual Guidance Makes A Better Extractor: Hierarchical Visual Prefix for Multimodal Entity and Relation Extraction

1 code implementation • 7 May 2022 • Xiang Chen, Ningyu Zhang, Lei Li, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen

To deal with these issues, we propose a novel Hierarchical Visual Prefix fusion NeTwork (HVPNeT) for visual-enhanced entity and relation extraction, aiming to achieve more effective and robust performance.

named-entity-recognition · Named Entity Recognition · +3

Towards Unified Prompt Tuning for Few-shot Text Classification

1 code implementation • 11 May 2022 • Jianing Wang, Chengyu Wang, Fuli Luo, Chuanqi Tan, Minghui Qiu, Fei Yang, Qiuhui Shi, Songfang Huang, Ming Gao

Prompt-based fine-tuning has boosted the performance of Pre-trained Language Models (PLMs) on few-shot text classification by employing task-specific prompts.

Few-Shot Learning · Few-Shot Text Classification · +4

Parameter-Efficient Sparsity for Large Language Models Fine-Tuning

2 code implementations • 23 May 2022 • Yuchao Li, Fuli Luo, Chuanqi Tan, Mengdi Wang, Songfang Huang, Shen Li, Junjie Bai

With the dramatically increased number of parameters in language models, sparsity methods have received ever-increasing research focus to compress and accelerate the models.

Decoupling Knowledge from Memorization: Retrieval-augmented Prompt Learning

2 code implementations • 29 May 2022 • Xiang Chen, Lei Li, Ningyu Zhang, Xiaozhuan Liang, Shumin Deng, Chuanqi Tan, Fei Huang, Luo Si, Huajun Chen

Specifically, vanilla prompt learning may struggle to utilize atypical instances by rote during fully-supervised training or overfit shallow patterns with low-shot data.

Few-Shot Text Classification · Memorization · +5
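
The open-book intuition reduces to a nearest-neighbor lookup over stored training representations; the embeddings, similarity measure, and neighbor count below are toy assumptions.

```python
# Sketch of the retrieval idea: treat the training set as an open book
# and fetch the nearest stored instances for a query embedding.
import numpy as np

train_embs = np.random.randn(100, 32)          # stored training representations
train_labels = np.random.randint(0, 3, 100)    # their labels

def retrieve_neighbors(query_emb, k=5):
    sims = train_embs @ query_emb / (
        np.linalg.norm(train_embs, axis=1) * np.linalg.norm(query_emb) + 1e-8)
    top = np.argsort(-sims)[:k]                # indices of the k most similar
    return top, train_labels[top]

idx, labels = retrieve_neighbors(np.random.randn(32))
print(idx, labels)
```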

HyPe: Better Pre-trained Language Model Fine-tuning with Hidden Representation Perturbation

1 code implementation • 17 Dec 2022 • Hongyi Yuan, Zheng Yuan, Chuanqi Tan, Fei Huang, Songfang Huang

Unlike previous works that only add noise to inputs or parameters, we argue that the hidden representations of Transformers layers convey more diverse and meaningful language information.

Language Modelling · Natural Language Inference
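
A minimal sketch of the perturbation idea using a PyTorch forward hook that adds Gaussian noise to a layer's hidden states only during training; the noise type and scale are assumptions (the paper also considers uniform noise).

```python
# Minimal sketch of HyPe-style hidden perturbation: add small random
# noise to a layer's output during training, leave inference untouched.
import torch
import torch.nn as nn

def add_hidden_noise(scale=1e-2):
    def hook(module, inputs, output):
        if module.training:
            return output + scale * torch.randn_like(output)
        return output
    return hook

layer = nn.Linear(16, 16)          # stand-in for one Transformer layer
layer.register_forward_hook(add_hidden_noise())
layer.train()
out = layer(torch.randn(4, 16))    # noisy hidden states during training
layer.eval()
clean = layer(torch.randn(4, 16))  # unperturbed at inference
print(out.shape, clean.shape)
```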

Reasoning with Language Model Prompting: A Survey

2 code implementations • 19 Dec 2022 • Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, Huajun Chen

Reasoning, as an essential ability for complex problem-solving, can provide back-end support for various real-world applications, such as medical diagnosis, negotiation, etc.

Arithmetic Reasoning · Common Sense Reasoning · +4

One Model for All Domains: Collaborative Domain-Prefix Tuning for Cross-Domain NER

2 code implementations • 25 Jan 2023 • Xiang Chen, Lei Li, Shuofei Qiao, Ningyu Zhang, Chuanqi Tan, Yong Jiang, Fei Huang, Huajun Chen

Previous typical solutions mainly obtain a NER model by pre-trained language models (PLMs) with data from a rich-resource domain and adapt it to the target domain.

NER · Text Generation

Molecular Geometry-aware Transformer for accurate 3D Atomic System modeling

no code implementations • 2 Feb 2023 • Zheng Yuan, Yaoyun Zhang, Chuanqi Tan, Wei Wang, Fei Huang, Songfang Huang

To alleviate this limitation, we propose Moleformer, a novel Transformer architecture that takes nodes (atoms) and edges (bonds and nonbonding atom pairs) as inputs and models the interactions among them using rotational and translational invariant geometry-aware spatial encoding.

Initial Structure to Relaxed Energy (IS2RE), Direct
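
The invariance claim is easy to demonstrate: pairwise interatomic distances are unchanged by rotations and translations, which is why distance-based features make natural geometry-aware encodings. The random conformer below is toy data.

```python
# Sketch of a rotation- and translation-invariant spatial encoding:
# pairwise interatomic distances are unchanged by rigid motions.
import numpy as np

def pairwise_distances(coords):
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)  # (atoms, atoms) invariant features

coords = np.random.randn(5, 3)                      # 5 atoms in 3D
rot, _ = np.linalg.qr(np.random.randn(3, 3))        # random orthogonal matrix
moved = coords @ rot.T + np.array([1.0, -2.0, 0.5]) # rotate + translate
print(np.allclose(pairwise_distances(coords), pairwise_distances(moved)))  # True
```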

RAMM: Retrieval-augmented Biomedical Visual Question Answering with Multi-modal Pre-training

1 code implementation • 1 Mar 2023 • Zheng Yuan, Qiao Jin, Chuanqi Tan, Zhengyun Zhao, Hongyi Yuan, Fei Huang, Songfang Huang

We propose to retrieve similar image-text pairs based on ITC from pretraining datasets and introduce a novel retrieval-attention module to fuse the representation of the image and the question with the retrieved images and texts.

Question Answering · Retrieval · +1

How well do Large Language Models perform in Arithmetic tasks?

1 code implementation • 16 Mar 2023 • Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang

Large language models have shown emergent abilities, including chain-of-thought reasoning, that let them answer math word problems step by step.

Math

RRHF: Rank Responses to Align Language Models with Human Feedback without tears

1 code implementation • 11 Apr 2023 • Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, Fei Huang

Reinforcement Learning from Human Feedback (RLHF) facilitates the alignment of large language models with human preferences, significantly enhancing the quality of interactions between humans and models.

Language Modelling · Large Language Model
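
A minimal sketch of a ranking loss in the spirit of RRHF: responses are scored by length-normalized log-probability, and any pair whose score order disagrees with the reward order is penalized. Tensors are toy stand-ins.

```python
# Sketch of an RRHF-style ranking loss: penalize pairs where the
# length-normalized log-probability order contradicts the reward order.
import torch

def rank_loss(seq_logprobs, lengths, rewards):
    scores = seq_logprobs / lengths  # length-normalized log-likelihood
    loss = torch.tensor(0.0)
    n = len(rewards)
    for i in range(n):
        for j in range(n):
            if rewards[i] > rewards[j]:
                # response i is better; its score should not be lower
                loss = loss + torch.relu(scores[j] - scores[i])
    return loss

lp = torch.tensor([-12.0, -30.0, -18.0], requires_grad=True)
print(rank_loss(lp, torch.tensor([10.0, 25.0, 12.0]), torch.tensor([0.9, 0.2, 0.5])))
```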

VECO 2.0: Cross-lingual Language Model Pre-training with Multi-granularity Contrastive Learning

no code implementations • 17 Apr 2023 • Zhen-Ru Zhang, Chuanqi Tan, Songfang Huang, Fei Huang

Recent studies have demonstrated the potential of cross-lingual transferability by training a unified Transformer encoder for multiple languages.

Contrastive Learning · Language Modelling · +1

Knowledge Rumination for Pre-trained Language Models

1 code implementation • 15 May 2023 • Yunzhi Yao, Peng Wang, Shengyu Mao, Chuanqi Tan, Fei Huang, Huajun Chen, Ningyu Zhang

Previous studies have revealed that vanilla pre-trained language models (PLMs) lack the capacity to handle knowledge-intensive NLP tasks alone; thus, several works have attempted to integrate external knowledge into PLMs.

Language Modelling

Towards Adaptive Prefix Tuning for Parameter-Efficient Language Model Fine-tuning

no code implementations • 24 May 2023 • Zhen-Ru Zhang, Chuanqi Tan, Haiyang Xu, Chengyu Wang, Jun Huang, Songfang Huang

In addition, using the gate as a probe, we validate the efficiency and effectiveness of the variable prefix.

Language Modelling · NER

Scaling Relationship on Learning Mathematical Reasoning with Large Language Models

1 code implementation • 3 Aug 2023 • Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang Zhou, Jingren Zhou

We find that, with augmented samples containing more distinct reasoning paths, RFT improves mathematical reasoning performance more for LLMs.

Ranked #99 on Arithmetic Reasoning on GSM8K (using extra training data)

Arithmetic Reasoning · GSM8K · +1
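
The data-collection loop behind RFT can be sketched directly: sample several reasoning paths per question, reject those with wrong final answers, and deduplicate so only distinct paths remain. Using the full text as the dedup key is a simplifying assumption.

```python
# Sketch of rejection sampling fine-tuning (RFT) data collection:
# keep only distinct reasoning paths that reach the gold answer.
def collect_rft_data(question, sampled_paths, gold_answer, extract_answer):
    kept, seen = [], set()
    for path in sampled_paths:
        if extract_answer(path) != gold_answer:
            continue  # reject paths with wrong final answers
        if path in seen:
            continue  # keep only distinct reasoning paths
        seen.add(path)
        kept.append((question, path))
    return kept

paths = ["2+3=5 so 5", "3+2=5 so 5", "2+3=5 so 5", "2+3=6 so 6"]
print(collect_rft_data("2+3?", paths, "5", lambda p: p.split()[-1]))
```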

#InsTag: Instruction Tagging for Analyzing Supervised Fine-tuning of Large Language Models

1 code implementation • 14 Aug 2023 • Keming Lu, Hongyi Yuan, Zheng Yuan, Runji Lin, Junyang Lin, Chuanqi Tan, Chang Zhou, Jingren Zhou

Based on this observation, we propose a data selector based on InsTag to select 6K diverse and complex samples from open-source datasets and fine-tune models on InsTag-selected data.

Instruction Following · TAG
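
One way to make the "diverse and complex" selection concrete is a greedy selector that prefers samples adding the most unseen tags, breaking ties toward samples with more tags; this is an illustrative heuristic, not the paper's exact selector.

```python
# Sketch of a tag-based data selector in the spirit of InsTag: greedily
# pick samples whose tag sets add the most new tags (diversity), seeded
# by tag count per sample (complexity). Data and sizes are toys.
def select_diverse(samples, budget):
    # samples: list of (text, set_of_tags)
    chosen, covered = [], set()
    pool = sorted(samples, key=lambda s: len(s[1]), reverse=True)
    while pool and len(chosen) < budget:
        best = max(pool, key=lambda s: len(s[1] - covered))
        pool.remove(best)
        chosen.append(best)
        covered |= best[1]
    return chosen

data = [("a", {"math", "proof"}), ("b", {"math"}), ("c", {"code", "sql", "join"})]
print(select_diverse(data, 2))  # picks the samples covering the most tags
```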

Boosting In-Context Learning with Factual Knowledge

no code implementations • 26 Sep 2023 • Jianing Wang, Chengyu Wang, Chuanqi Tan, Jun Huang, Ming Gao

In-Context Learning (ICL) with large language models (LLMs) aims to solve previously unseen tasks by conditioning on a few training examples, eliminating the need for parameter updates while achieving competitive performance.

Few-Shot Learning · In-Context Learning · +3

Query and Response Augmentation Cannot Help Out-of-domain Math Reasoning Generalization

1 code implementation • 9 Oct 2023 • Chengpeng Li, Zheng Yuan, Hongyi Yuan, Guanting Dong, Keming Lu, Jiancan Wu, Chuanqi Tan, Xiang Wang, Chang Zhou

In this paper, we investigate such data augmentation for math reasoning and aim to answer: (1) what strategies of data augmentation are more effective; (2) what is the scaling relationship between the amount of augmented data and model performance; and (3) can data augmentation incentivize generalization to out-of-domain mathematical reasoning tasks?

Ranked #53 on Arithmetic Reasoning on GSM8K (using extra training data)

Arithmetic Reasoning · Data Augmentation · +3

Sharing, Teaching and Aligning: Knowledgeable Transfer Learning for Cross-Lingual Machine Reading Comprehension

no code implementations • 12 Nov 2023 • Tingfeng Cao, Chengyu Wang, Chuanqi Tan, Jun Huang, Jinhui Zhu

In cross-lingual language understanding, machine translation is often utilized to enhance the transferability of models across languages, either by translating the training data from the source language to the target, or from the target to the source to aid inference.

Cross-Lingual Transfer · Machine Reading Comprehension · +2
