Search Results for author: Xuanjing Huang

Found 261 papers, 120 papers with code

A Re-ranking Model for Dependency Parser with Recursive Convolutional Neural Network

no code implementations IJCNLP 2015 Chenxi Zhu, Xipeng Qiu, Xinchi Chen, Xuanjing Huang

In this work, we address the problem of modeling all the nodes (words or phrases) in a dependency tree with dense representations.

Dependency Parsing Re-Ranking

Overview of the NLPCC 2015 Shared Task: Chinese Word Segmentation and POS Tagging for Micro-blog Texts

no code implementations28 May 2015 Xipeng Qiu, Peng Qian, Liusong Yin, Shiyu Wu, Xuanjing Huang

In this paper, we give an overview of the shared task at the 4th CCF Conference on Natural Language Processing & Chinese Computing (NLPCC 2015): Chinese word segmentation and part-of-speech (POS) tagging for micro-blog texts.

Chinese Word Segmentation Part-Of-Speech Tagging +3

Gaussian Mixture Embeddings for Multiple Word Prototypes

no code implementations19 Nov 2015 Xinchi Chen, Xipeng Qiu, Jingxiang Jiang, Xuanjing Huang

In this paper, we propose the Gaussian mixture skip-gram (GMSG) model to learn the Gaussian mixture embeddings for words based on skip-gram framework.

Bridging LSTM Architecture and the Neural Dynamics during Reading

no code implementations22 Apr 2016 Peng Qian, Xipeng Qiu, Xuanjing Huang

Recently, the long short-term memory neural network (LSTM) has attracted wide interest due to its success in many tasks.

Modelling Interaction of Sentence Pair with coupled-LSTMs

no code implementations EMNLP 2016 Pengfei Liu, Xipeng Qiu, Xuanjing Huang

Recently, there has been rising interest in modelling the interactions of two sentences with deep neural networks.

Sentence

Syntax-based Attention Model for Natural Language Inference

no code implementations22 Jul 2016 PengFei Liu, Xipeng Qiu, Xuanjing Huang

Introducing an attentional mechanism into neural networks is a powerful concept that has achieved impressive results in many natural language processing tasks.

Natural Language Inference Sentence

Neural Sentence Ordering

no code implementations23 Jul 2016 Xinchi Chen, Xipeng Qiu, Xuanjing Huang

Sentence ordering is a general and critical task for natural language generation applications.

Document Summarization Multi-Document Summarization +2

Learning Word Embeddings from Intrinsic and Extrinsic Views

no code implementations20 Aug 2016 Jifan Chen, Kan Chen, Xipeng Qiu, Qi Zhang, Xuanjing Huang, Zheng Zhang

To prove the effectiveness of our model, we evaluate it on four tasks, including word similarity, reverse dictionaries, Wiki link prediction, and document classification.

Descriptive Document Classification +4

Deep Multi-Task Learning with Shared Memory

no code implementations23 Sep 2016 Pengfei Liu, Xipeng Qiu, Xuanjing Huang

Neural network based models have achieved impressive results on various specific tasks.

General Classification Multi-Task Learning +2

End-to-End Neural Sentence Ordering Using Pointer Network

no code implementations15 Nov 2016 Jingjing Gong, Xinchi Chen, Xipeng Qiu, Xuanjing Huang

However, it is nontrivial for pair-wise models to incorporate the contextual sentence information.

Sentence Sentence Ordering

A Feature-Enriched Neural Model for Joint Chinese Word Segmentation and Part-of-Speech Tagging

no code implementations16 Nov 2016 Xinchi Chen, Xipeng Qiu, Xuanjing Huang

Recently, neural network models for natural language processing tasks have received increasing attention for their ability to alleviate the burden of manual feature engineering.

Chinese Word Segmentation Feature Engineering +1

Knowledge Graph Representation with Jointly Structural and Textual Encoding

no code implementations26 Nov 2016 Jiacheng Xu, Kan Chen, Xipeng Qiu, Xuanjing Huang

In this paper, we propose a novel deep architecture to utilize both structural and textual information of entities.

General Classification Knowledge Graph Embedding +2

Hashtag Recommendation Using End-To-End Memory Networks with Hierarchical Attention

no code implementations COLING 2016 Haoran Huang, Qi Zhang, Yeyun Gong, Xuanjing Huang

By incorporating the hierarchical attention mechanism, the relative improvement in the proposed method over the state-of-the-art method is around 67.9% in the F1-score.

Collaborative Filtering General Classification +3

Adversarial Multi-task Learning for Text Classification

no code implementations ACL 2017 Pengfei Liu, Xipeng Qiu, Xuanjing Huang

Neural network models have shown promise for multi-task learning, which focuses on learning shared layers to extract common, task-invariant features.

General Classification Multi-Task Learning +2

Dynamic Compositional Neural Networks over Tree Structure

no code implementations11 May 2017 Pengfei Liu, Xipeng Qiu, Xuanjing Huang

Tree-structured neural networks have proven to be effective in learning semantic representations by exploiting syntactic information.

Learning Semantic Representations

Overview of the NLPCC 2017 Shared Task: Chinese News Headline Categorization

1 code implementation9 Jun 2017 Xipeng Qiu, Jingjing Gong, Xuanjing Huang

In this paper, we give an overview of the shared task at the CCF Conference on Natural Language Processing & Chinese Computing (NLPCC 2017): Chinese News Headline Categorization.

DAG-based Long Short-Term Memory for Neural Word Segmentation

no code implementations2 Jul 2017 Xinchi Chen, Zhan Shi, Xipeng Qiu, Xuanjing Huang

In this paper, we propose a new neural model to incorporate the word-level information for Chinese word segmentation.

Chinese Word Segmentation Feature Engineering +2

Idiom-Aware Compositional Distributed Semantics

no code implementations EMNLP 2017 Pengfei Liu, Kaiyu Qian, Xipeng Qiu, Xuanjing Huang

Idioms are peculiar linguistic constructions that impose great challenges for representing the semantics of language, especially in current prevailing end-to-end neural models, which assume that the semantics of a phrase or sentence can be literally composed from its constitutive words.

General Classification Machine Translation +4

Meta Multi-Task Learning for Sequence Modeling

no code implementations25 Feb 2018 Junkun Chen, Xipeng Qiu, Pengfei Liu, Xuanjing Huang

Specifically, we use a shared meta-network to capture the meta-knowledge of semantic composition and generate the parameters of the task-specific semantic composition models.

Multi-Task Learning Representation Learning +3

Incorporating Discriminator in Sentence Generation: a Gibbs Sampling Method

no code implementations25 Feb 2018 Jinyue Su, Jiacheng Xu, Xipeng Qiu, Xuanjing Huang

Generating plausible and fluent sentences with desired properties has long been a challenge.

Sentence

Information Aggregation via Dynamic Routing for Sequence Encoding

2 code implementations COLING 2018 Jingjing Gong, Xipeng Qiu, Shaojing Wang, Xuanjing Huang

The dynamic routing policy dynamically decides what and how much information needs to be transferred from each word to the final encoding of the text sequence.

Sentiment Analysis text-classification +1

Gaussian Word Embedding with a Wasserstein Distance Loss

no code implementations21 Aug 2018 Chi Sun, Hang Yan, Xipeng Qiu, Xuanjing Huang

Therefore, with the aim of representing words in a highly efficient way, we propose to train a Gaussian word embedding model with a loss function based on the Wasserstein distance.

Document Classification General Classification +1
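
For readers who want a concrete picture of the distance involved, below is a minimal PyTorch sketch of the squared 2-Wasserstein distance between diagonal-covariance Gaussian word embeddings, which has the closed form ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2, wrapped in a toy margin-based ranking loss. The paper's exact objective and parameterization may differ; the hyperparameters here are illustrative only.

```python
import torch

def wasserstein2_sq(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between diagonal Gaussians N(mu, diag(sigma^2))."""
    return ((mu1 - mu2) ** 2).sum(-1) + ((sigma1 - sigma2) ** 2).sum(-1)

# Toy usage: a margin-based ranking loss over word pairs (hypothetical setup).
d = 50
mu_w, sig_w = torch.randn(d), torch.rand(d) + 0.1   # target word
mu_c, sig_c = torch.randn(d), torch.rand(d) + 0.1   # observed context word
mu_n, sig_n = torch.randn(d), torch.rand(d) + 0.1   # negative sample

pos = wasserstein2_sq(mu_w, sig_w, mu_c, sig_c)
neg = wasserstein2_sq(mu_w, sig_w, mu_n, sig_n)
loss = torch.clamp(1.0 + pos - neg, min=0.0)         # push positives closer than negatives
print(float(loss))
```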

Deformable Stacked Structure for Named Entity Recognition

no code implementations24 Sep 2018 Shuyang Cao, Xipeng Qiu, Xuanjing Huang

Neural architecture for named entity recognition has achieved great success in the field of natural language processing.

named-entity-recognition Named Entity Recognition +1

Transferring from Formal Newswire Domain with Hypernet for Twitter POS Tagging

no code implementations EMNLP 2018 Tao Gui, Qi Zhang, Jingjing Gong, Minlong Peng, Di Liang, Keyu Ding, Xuanjing Huang

However, from a linguistic perspective, Twitter users not only tend to mimic the formal expressions of traditional media, like news, but they also appear to be developing linguistically informal styles.

Domain Adaptation Multi-Task Learning +4

Meta-Learning Multi-task Communication

no code implementations23 Oct 2018 Pengfei Liu, Xuanjing Huang

In this paper, we describe a general framework: Parameters Read-Write Networks (PRaWNs) to systematically analyze current neural models for multi-task learning, in which we find that existing models expect to disentangle features into different spaces while features learned in practice are still entangled in shared space, leaving potential hazards for other training or unseen tasks.

Inductive Bias Meta-Learning +1

Incorporating Topic Aspects for Online Comment Convincingness Evaluation

no code implementations WS 2018 Yunfan Gu, Zhongyu Wei, Maoran Xu, Hao Fu, Yang Liu, Xuanjing Huang

In this paper, we propose to incorporate topic aspect information for online comment convincingness evaluation.

Argument Mining

Long Short-Term Memory with Dynamic Skip Connections

1 code implementation9 Nov 2018 Tao Gui, Qi Zhang, Lujun Zhao, Yaosong Lin, Minlong Peng, Jingjing Gong, Xuanjing Huang

In recent years, long short-term memory (LSTM) has been successfully used to model sequential data of variable length.

Named Entity Recognition (NER) Sentiment Analysis

Contextualized Non-local Neural Networks for Sequence Learning

no code implementations21 Nov 2018 Pengfei Liu, Shuaichen Chang, Xuanjing Huang, Jian Tang, Jackie Chi Kit Cheung

Recently, a large number of neural mechanisms and models have been proposed for sequence learning, of which self-attention, as exemplified by the Transformer model, and graph neural networks (GNNs) have attracted much attention.

General Classification Sentence +2

VCWE: Visual Character-Enhanced Word Embeddings

1 code implementation NAACL 2019 Chi Sun, Xipeng Qiu, Xuanjing Huang

Chinese is a logographic writing system, and the shape of Chinese characters contains rich syntactic and semantic information.

named-entity-recognition Named Entity Recognition +5

A Graph-based Model for Joint Chinese Word Segmentation and Dependency Parsing

1 code implementation TACL 2020 Hang Yan, Xipeng Qiu, Xuanjing Huang

Our graph-based joint model achieves better performance than previous joint models and state-of-the-art results in both Chinese word segmentation and dependency parsing.

Chinese Word Segmentation Dependency Parsing +3

How to Fine-Tune BERT for Text Classification?

16 code implementations14 May 2019 Chi Sun, Xipeng Qiu, Yige Xu, Xuanjing Huang

Language model pre-training has proven to be useful in learning universal language representations.

General Classification Language Modelling +2
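
One commonly studied ingredient in this line of BERT fine-tuning work is a layer-wise (discriminative) learning rate that decays from the top encoder layer downward. The sketch below builds such parameter groups, assuming the Hugging Face transformers BertForSequenceClassification layout; the base learning rate and decay factor are illustrative, not the paper's reported settings.

```python
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

base_lr, decay = 2e-5, 0.95
groups = [
    {"params": model.classifier.parameters(), "lr": base_lr},
    {"params": model.bert.pooler.parameters(), "lr": base_lr},
]

# Top encoder layer keeps base_lr; each layer below is scaled by `decay`.
layers = list(model.bert.encoder.layer)
for depth, layer in enumerate(reversed(layers)):
    groups.append({"params": layer.parameters(), "lr": base_lr * (decay ** (depth + 1))})
groups.append({"params": model.bert.embeddings.parameters(),
               "lr": base_lr * (decay ** (len(layers) + 1))})

optimizer = torch.optim.AdamW(groups, lr=base_lr)
```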

A Concise Model for Multi-Criteria Chinese Word Segmentation with Transformer Encoder

1 code implementation Findings of the Association for Computational Linguistics 2020 Xipeng Qiu, Hengzhi Pei, Hang Yan, Xuanjing Huang

Multi-criteria Chinese word segmentation (MCCWS) aims to exploit the relations among the multiple heterogeneous segmentation criteria and further improve the performance of each single criterion.

Chinese Word Segmentation Multi-Task Learning +1

Bridging by Word: Image Grounded Vocabulary Construction for Visual Captioning

1 code implementation ACL 2019 Zhihao Fan, Zhongyu Wei, Siyuan Wang, Xuanjing Huang

Existing research usually employs a CNN-RNN architecture that views generation as a sequential decision-making process, with the entire dataset vocabulary used as the decoding space.

Decision Making Image Captioning

Generating Responses with a Specific Emotion in Dialog

no code implementations ACL 2019 Zhenqiao Song, Xiaoqing Zheng, Lu Liu, Mu Xu, Xuanjing Huang

It is desirable for dialog systems to have the capability to express specific emotions during a conversation, which has a direct, quantifiable impact on their usability and user satisfaction.

DropAttention: A Regularization Method for Fully-Connected Self-Attention Networks

no code implementations25 Jul 2019 Lin Zehui, PengFei Liu, Luyao Huang, Junkun Chen, Xipeng Qiu, Xuanjing Huang

Variant dropout methods have been designed for the fully-connected, convolutional, and recurrent layers in neural networks, and have been shown to be effective in avoiding overfitting.
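
As a rough sketch of the underlying idea (dropping attention weights rather than hidden units), the snippet below applies dropout directly to the softmax-normalized attention matrix and renormalizes the surviving weights. This illustrates the concept only and is not the authors' exact variant.

```python
import torch
import torch.nn.functional as F

def attention_with_dropattention(q, k, v, p=0.1, training=True):
    # q, k, v: (batch, seq_len, dim)
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    attn = F.softmax(scores, dim=-1)
    if training and p > 0:
        mask = (torch.rand_like(attn) > p).float()          # drop whole attention weights
        attn = attn * mask
        attn = attn / attn.sum(dim=-1, keepdim=True).clamp(min=1e-9)  # renormalize rows
    return attn @ v

q = k = v = torch.randn(2, 5, 16)
out = attention_with_dropattention(q, k, v, p=0.2)
print(out.shape)  # torch.Size([2, 5, 16])
```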

Simplify the Usage of Lexicon in Chinese NER

2 code implementations ACL 2020 Ruotian Ma, Minlong Peng, Qi Zhang, Xuanjing Huang

This method avoids designing a complicated sequence modeling architecture, and for any neural NER model, it requires only subtle adjustment of the character representation layer to introduce the lexicon information.

Chinese Named Entity Recognition named-entity-recognition +2

Exploring Domain Shift in Extractive Text Summarization

no code implementations30 Aug 2019 Danqing Wang, PengFei Liu, Ming Zhong, Jie Fu, Xipeng Qiu, Xuanjing Huang

Although domain shift has been well explored in many NLP applications, it still has received little attention in the domain of extractive text summarization.

Extractive Text Summarization Meta-Learning

Weighed Domain-Invariant Representation Learning for Cross-domain Sentiment Analysis

no code implementations COLING 2020 Minlong Peng, Qi Zhang, Xuanjing Huang

To address this problem, we propose a modification to DIRL, obtaining a novel weighted domain-invariant representation learning (WDIRL) framework.

Domain Adaptation Representation Learning +1

Towards Interpretable Evaluations: A Case Study of Named Entity Recognition

no code implementations25 Sep 2019 Jinlan Fu, PengFei Liu, Xuanjing Huang

With the proliferation of models for natural language processing (NLP) tasks, it is even harder to understand the differences between models and their relative merits.

named-entity-recognition Named Entity Recognition +1

A Closer Look at Data Bias in Neural Extractive Summarization Models

no code implementations WS 2019 Ming Zhong, Danqing Wang, PengFei Liu, Xipeng Qiu, Xuanjing Huang

In this paper, we take stock of the current state of summarization datasets and explore how different factors of datasets influence the generalization behaviour of neural extractive summarization models.

Extractive Summarization

Asynchronous Deep Interaction Network for Natural Language Inference

no code implementations IJCNLP 2019 Di Liang, Fubao Zhang, Qi Zhang, Xuanjing Huang

However, in the process of reasoning, the role of the two sentences is obviously different, and the sentence pairs for NLI are asymmetrical corpora.

Natural Language Inference Sentence

A Lexicon-Based Graph Neural Network for Chinese NER

no code implementations IJCNLP 2019 Tao Gui, Yicheng Zou, Qi Zhang, Minlong Peng, Jinlan Fu, Zhongyu Wei, Xuanjing Huang

Recurrent neural networks (RNN) used for Chinese named entity recognition (NER) that sequentially track character and word information have achieved great success.

Chinese Named Entity Recognition named-entity-recognition +3

Discrete Argument Representation Learning for Interactive Argument Pair Identification

1 code implementation NAACL 2021 Lu Ji, Zhongyu Wei, Jing Li, Qi Zhang, Xuanjing Huang

In this paper, we focus on extracting interactive argument pairs from two posts with opposite stances to a certain topic.

Representation Learning

Learning Sparse Sharing Architectures for Multiple Tasks

1 code implementation12 Nov 2019 Tianxiang Sun, Yunfan Shao, Xiaonan Li, PengFei Liu, Hang Yan, Xipeng Qiu, Xuanjing Huang

Most existing deep multi-task learning models are based on parameter sharing, such as hard sharing, hierarchical sharing, and soft sharing.

Multi-Task Learning

Constructing Multiple Tasks for Augmentation: Improving Neural Image Classification With K-means Features

1 code implementation18 Nov 2019 Tao Gui, Lizhi Qing, Qi Zhang, Jiacheng Ye, Hang Yan, Zichu Fei, Xuanjing Huang

In order to effectively reduce the impact of non-ideal auxiliary tasks on the main task, we further propose a novel meta-learning-based multi-task learning approach, which trains the shared hidden layers on auxiliary tasks while the meta-optimization objective is to minimize the loss on the main task, ensuring that the optimization direction leads to an improvement on the main task.

Clustering Data Augmentation +4

Chinese Named Entity Recognition Augmented with Lexicon Memory

1 code implementation17 Dec 2019 Yi Zhou, Xiaoqing Zheng, Xuanjing Huang

Inspired by a concept of content-addressable retrieval from cognitive science, we propose a novel fragment-based model augmented with a lexicon-based memory for Chinese NER, in which both the character-level and word-level features are combined to generate better feature representations for possible name candidates.

Chinese Named Entity Recognition named-entity-recognition +4

Rethinking Generalization of Neural Models: A Named Entity Recognition Case Study

1 code implementation12 Jan 2020 Jinlan Fu, PengFei Liu, Qi Zhang, Xuanjing Huang

While neural network-based models have achieved impressive performance on a large body of NLP tasks, the generalization behavior of different models remains poorly understood: Does this excellent performance imply a perfect generalization model, or are there still some limitations?

named-entity-recognition Named Entity Recognition +1

Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation

1 code implementation24 Feb 2020 Yige Xu, Xipeng Qiu, Ligao Zhou, Xuanjing Huang

Fine-tuning pre-trained language models like BERT has become an effective approach in NLP and yields state-of-the-art results on many downstream tasks.

Natural Language Inference text-classification +1

Pre-trained Models for Natural Language Processing: A Survey

3 code implementations18 Mar 2020 Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang

Recently, the emergence of pre-trained models (PTMs) has brought natural language processing (NLP) to a new era.

Representation Learning

Storytelling from an Image Stream Using Scene Graphs

no code implementations The Thirty-Fourth AAAI Conference on Artificial Intelligence 2020 Ruize Wang, Zhongyu Wei, Piji Li, Qi Zhang, Xuanjing Huang

In particular, on the within-image level, we employ a Graph Convolution Network (GCN) to enrich local fine-grained region representations of objects on scene graphs.

Visual Storytelling

Unified Multi-Criteria Chinese Word Segmentation with BERT

no code implementations13 Apr 2020 Zhen Ke, Liang Shi, Erli Meng, Bin Wang, Xipeng Qiu, Xuanjing Huang

In addition, the pre-trained BERT language model has also been introduced into the MCCWS task in a multi-task learning framework.

Chinese Word Segmentation Language Modelling +3

FLAT: Chinese NER Using Flat-Lattice Transformer

1 code implementation ACL 2020 Xiaonan Li, Hang Yan, Xipeng Qiu, Xuanjing Huang

Recently, the character-word lattice structure has proven effective for Chinese named entity recognition (NER) by incorporating word information.

Chinese Named Entity Recognition named-entity-recognition +3

Heterogeneous Graph Neural Networks for Extractive Document Summarization

1 code implementation ACL 2020 Danqing Wang, PengFei Liu, Yining Zheng, Xipeng Qiu, Xuanjing Huang

An intuitive way is to put them in the graph-based neural network, which has a more complex structure for capturing inter-sentence relationships.

Document Summarization Extractive Document Summarization +3

Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble

1 code implementation20 Jun 2020 Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang

Although neural networks have achieved prominent performance on many natural language processing (NLP) tasks, they are vulnerable to adversarial examples.

Sentence

fastHan: A BERT-based Multi-Task Toolkit for Chinese NLP

1 code implementation ACL 2021 Zhichao Geng, Hang Yan, Xipeng Qiu, Xuanjing Huang

The joint model is trained and evaluated on 13 corpora across four tasks, yielding near state-of-the-art (SOTA) performance in dependency parsing and NER, and SOTA performance in CWS and POS tagging.

Chinese Word Segmentation Dependency Parsing +6

CoLAKE: Contextualized Language and Knowledge Embedding

1 code implementation COLING 2020 Tianxiang Sun, Yunfan Shao, Xipeng Qiu, Qipeng Guo, Yaru Hu, Xuanjing Huang, Zheng Zhang

With the emerging branch of incorporating factual knowledge into pre-trained language models such as BERT, most existing models consider shallow, static, and separately pre-trained entity embeddings, which limits the performance gains of these models.

Entity Embeddings Knowledge Graph Completion +1

CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems

2 code implementations Findings of the Association for Computational Linguistics 2020 Yiran Chen, PengFei Liu, Ming Zhong, Zi-Yi Dou, Danqing Wang, Xipeng Qiu, Xuanjing Huang

In this paper, we perform an in-depth analysis of characteristics of different datasets and investigate the performance of different summarization models under a cross-dataset setting, in which a summarizer trained on one corpus will be evaluated on a range of out-of-domain corpora.

Text Summarization

Cross-Lingual Dependency Parsing by POS-Guided Word Reordering

no code implementations Findings of the Association for Computational Linguistics 2020 Lu Liu, Yi Zhou, Jianhan Xu, Xiaoqing Zheng, Kai-Wei Chang, Xuanjing Huang

The words in each sentence of a source language corpus are rearranged to meet the word order in a target language under the guidance of a part-of-speech based language model (LM).

Dependency Parsing Language Modelling +2

RethinkCWS: Is Chinese Word Segmentation a Solved Task?

1 code implementation EMNLP 2020 Jinlan Fu, PengFei Liu, Qi Zhang, Xuanjing Huang

The performance of the Chinese Word Segmentation (CWS) systems has gradually reached a plateau with the rapid development of deep neural networks, especially the successful use of large pre-trained models.

Chinese Word Segmentation

Text Information Aggregation with Centrality Attention

no code implementations16 Nov 2020 Jingjing Gong, Hang Yan, Yining Zheng, Xipeng Qiu, Xuanjing Huang

Many natural language processing problems require encoding a text sequence as a fixed-length vector, which usually involves an aggregation process combining the representations of all the words, such as pooling or self-attention.

Sentence text-classification +1

Modeling Evolution of Message Interaction for Rumor Resolution

no code implementations COLING 2020 Lei Chen, Zhongyu Wei, Jing Li, Baohua Zhou, Qi Zhang, Xuanjing Huang

Previous work for rumor resolution concentrates on exploiting time-series characteristics or modeling topology structure separately.

Time Series Time Series Analysis

SenSeNet: Neural Keyphrase Generation with Document Structure

no code implementations12 Dec 2020 Yichao Luo, Zhengyan Li, Bingning Wang, Xiaoyu Xing, Qi Zhang, Xuanjing Huang

Keyphrase Generation (KG) is the task of generating central topics from a given document or literary work, which captures the crucial information necessary to understand the content.

Inductive Bias Keyphrase Generation +1

Topic-Oriented Spoken Dialogue Summarization for Customer Service with Saliency-Aware Topic Modeling

1 code implementation14 Dec 2020 Yicheng Zou, Lujun Zhao, Yangyang Kang, Jun Lin, Minlong Peng, Zhuoren Jiang, Changlong Sun, Qi Zhang, Xuanjing Huang, Xiaozhong Liu

In a customer service system, dialogue summarization can boost service efficiency by automatically creating summaries for long spoken dialogues in which customers and agents try to address issues about specific topics.

Uncertainty-Aware Label Refinement for Sequence Labeling

1 code implementation EMNLP 2020 Tao Gui, Jiacheng Ye, Qi Zhang, Zhengyan Li, Zichu Fei, Yeyun Gong, Xuanjing Huang

Conditional random fields (CRFs) for label decoding have become ubiquitous in sequence labeling tasks.

Generating Adversarial Examples in Chinese Texts Using Sentence-Pieces

no code implementations29 Dec 2020 Linyang Li, Yunfan Shao, Demin Song, Xipeng Qiu, Xuanjing Huang

The substitutions in the generated adversarial examples are not characters or words but 'pieces', which are more natural to Chinese readers.

Language Modelling Sentence

Alleviate Exposure Bias in Sequence Prediction with Recurrent Neural Networks

no code implementations22 Mar 2021 Liping Yuan, Jiangtao Feng, Xiaoqing Zheng, Xuanjing Huang

The key idea is that at each time step, the network takes as input a "bundle" of similar words predicted at the previous step instead of a single ground truth.

Enhancing Scientific Papers Summarization with Citation Graph

1 code implementation7 Apr 2021 Chenxin An, Ming Zhong, Yiran Chen, Danqing Wang, Xipeng Qiu, Xuanjing Huang

Previous work on text summarization in the scientific domain mainly focused on the content of the input document, seldom considering its citation network.

Text Summarization

Larger-Context Tagging: When and Why Does It Work?

no code implementations NAACL 2021 Jinlan Fu, Liangjing Feng, Qi Zhang, Xuanjing Huang, PengFei Liu

The development of neural networks and pretraining techniques has spawned many sentence-level tagging systems that achieved superior performance on typical benchmarks.

Attribute Sentence

Who Responded to Whom: The Joint Effects of Latent Topics and Discourse in Conversation Structure

no code implementations17 Apr 2021 Lu Ji, Jing Li, Zhongyu Wei, Qi Zhang, Xuanjing Huang

Numerous online conversations are produced on a daily basis, resulting in a pressing need for conversation understanding.

Certified Robustness to Text Adversarial Attacks by Randomized [MASK]

1 code implementation8 May 2021 Jiehang Zeng, Xiaoqing Zheng, Jianhan Xu, Linyang Li, Liping Yuan, Xuanjing Huang

Recently, a few certified defense methods have been developed to provably guarantee the robustness of a text classifier to adversarial synonym substitutions.
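
A rough sketch of the randomized-masking idea: make many copies of the input with a random fraction of tokens replaced by [MASK], classify each copy, and take a majority vote. The certification bounds and calibration from the paper are not shown; `classify` below is a hypothetical stand-in for any text classifier.

```python
import random
from collections import Counter

def randomized_mask_predict(tokens, classify, mask_rate=0.3, num_samples=25, mask_token="[MASK]"):
    """Majority vote over randomly masked copies of the input."""
    votes = []
    for _ in range(num_samples):
        masked = [mask_token if random.random() < mask_rate else t for t in tokens]
        votes.append(classify(masked))
    return Counter(votes).most_common(1)[0][0]

# Toy usage with a trivial stand-in classifier (hypothetical).
def classify(tokens):
    return "positive" if "good" in tokens else "negative"

print(randomized_mask_predict("the movie was good".split(), classify))
```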

Early Exiting with Ensemble Internal Classifiers

no code implementations28 May 2021 Tianxiang Sun, Yunhua Zhou, Xiangyang Liu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu

In this paper, we show that a novel objective function for the training of the ensemble internal classifiers can be naturally induced from the perspective of ensemble learning and information theory.

Ensemble Learning

Accelerating BERT Inference for Sequence Labeling via Early-Exit

1 code implementation ACL 2021 Xiaonan Li, Yunfan Shao, Tianxiang Sun, Hang Yan, Xipeng Qiu, Xuanjing Huang

To alleviate this problem, we extend the recent successful early-exit mechanism to accelerate the inference of PTMs for sequence labeling tasks.

Sentence

Exploration and Exploitation: Two Ways to Improve Chinese Spelling Correction Models

1 code implementation ACL 2021 Chong Li, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang

Sequence-to-sequence learning with neural networks has empirically proven to be an effective framework for Chinese Spelling Correction (CSC), which takes a sentence with spelling errors as input and outputs the corrected one.

Sentence Spelling Correction +1

SpanNER: Named Entity Re-/Recognition as Span Prediction

1 code implementation ACL 2021 Jinlan Fu, Xuanjing Huang, PengFei Liu

Recent years have seen the paradigm shift of Named Entity Recognition (NER) systems from sequence labeling to span prediction.

named-entity-recognition Named Entity Recognition +1

TCIC: Theme Concepts Learning Cross Language and Vision for Image Captioning

no code implementations21 Jun 2021 Zhihao Fan, Zhongyu Wei, Siyuan Wang, Ruize Wang, Zejun Li, Haijun Shan, Xuanjing Huang

Considering that theme concepts can be learned from both images and captions, we propose two settings for their representation learning based on TTN.

Image Captioning Representation Learning

SENT: Sentence-level Distant Relation Extraction via Negative Training

1 code implementation ACL 2021 Ruotian Ma, Tao Gui, Linyang Li, Qi Zhang, Yaqian Zhou, Xuanjing Huang

In this work, we propose the use of negative training (NT), in which a model is trained using complementary labels, indicating that "the instance does not belong to these complementary labels".

Relation Relation Extraction +1
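
A minimal sketch of a negative-training objective of this kind: instead of maximizing the probability of a (possibly noisy) label, the loss pushes down the probability of a complementary label, i.e. -log(1 - p(complementary)). How complementary labels are chosen follows the paper; this toy version simply samples them uniformly among the other classes.

```python
import torch
import torch.nn.functional as F

def negative_training_loss(logits, noisy_labels, num_classes):
    """-log(1 - p(k)) for a complementary label k != noisy label."""
    probs = F.softmax(logits, dim=-1)
    # Sample one complementary label per instance, uniformly among the other classes.
    offsets = torch.randint(1, num_classes, noisy_labels.shape)
    comp = (noisy_labels + offsets) % num_classes
    p_comp = probs.gather(1, comp.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp + 1e-9).mean()

logits = torch.randn(4, 5, requires_grad=True)
labels = torch.tensor([0, 2, 1, 4])
loss = negative_training_loss(logits, labels, num_classes=5)
loss.backward()
print(float(loss))
```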

Math Word Problem Solving with Explicit Numerical Values

1 code implementation ACL 2021 Qinzhuo Wu, Qi Zhang, Zhongyu Wei, Xuanjing Huang

In recent years, math word problem solving has received considerable attention and achieved promising results, but previous methods rarely take numerical values into consideration.

Math Math Word Problem Solving

Defense against Synonym Substitution-based Adversarial Attacks via Dirichlet Neighborhood Ensemble

1 code implementation ACL 2021 Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang

Although deep neural networks have achieved prominent performance on many NLP tasks, they are vulnerable to adversarial examples.

Sentence

Learning to Teach with Student Feedback

no code implementations10 Sep 2021 Yitao Liu, Tianxiang Sun, Xipeng Qiu, Xuanjing Huang

This one-way interaction leads to the teacher's inability to perceive the characteristics of the student and its training progress.

Knowledge Distillation

Constructing Phrase-level Semantic Labels to Form Multi-Grained Supervision for Image-Text Retrieval

no code implementations12 Sep 2021 Zhihao Fan, Zhongyu Wei, Zejun Li, Siyuan Wang, Haijun Shan, Xuanjing Huang, Jianqing Fan

Existing research on image-text retrieval mainly relies on sentence-level supervision to distinguish matched and mismatched sentences for a query image.

Representation Learning Retrieval +2

Paradigm Shift in Natural Language Processing

1 code implementation26 Sep 2021 Tianxiang Sun, Xiangyang Liu, Xipeng Qiu, Xuanjing Huang

In this paper, we review such phenomenon of paradigm shifts in recent years, highlighting several paradigms that have the potential to solve different NLP tasks.

Chunking NER +3

Template-free Prompt Tuning for Few-shot NER

1 code implementation NAACL 2022 Ruotian Ma, Xin Zhou, Tao Gui, Yiding Tan, Linyang Li, Qi Zhang, Xuanjing Huang

Prompt-based methods have been successfully applied in sentence-level few-shot learning tasks, mostly owing to the sophisticated design of templates and label words.

Few-Shot Learning Few-shot NER +1

KNN-BERT: Fine-Tuning Pre-Trained Models with KNN Classifier

1 code implementation6 Oct 2021 Linyang Li, Demin Song, Ruotian Ma, Xipeng Qiu, Xuanjing Huang

Pre-trained models are widely used in fine-tuning downstream tasks with linear classifiers optimized by the cross-entropy loss, which might face robustness and stability problems.

Contrastive Learning text-classification +1
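
To make the inference-side idea concrete, the sketch below extracts [CLS] representations from a pre-trained encoder and classifies with a k-nearest-neighbor classifier over labeled examples. It assumes the Hugging Face transformers and scikit-learn libraries, and it omits the contrastive fine-tuning that the paper combines with the KNN classifier.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.neighbors import KNeighborsClassifier

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased").eval()

def cls_embed(texts):
    """Return the [CLS] vector of each text as a numpy array."""
    with torch.no_grad():
        batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
        return enc(**batch).last_hidden_state[:, 0].numpy()

train_texts = ["great movie", "terrible plot", "loved it", "waste of time"]
train_labels = [1, 0, 1, 0]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(cls_embed(train_texts), train_labels)
print(knn.predict(cls_embed(["really enjoyable film"])))
```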

Towards Efficient NLP: A Standard Evaluation and A Strong Baseline

1 code implementation NAACL 2022 Xiangyang Liu, Tianxiang Sun, Junliang He, Jiawen Wu, Lingling Wu, Xinyu Zhang, Hao Jiang, Zhao Cao, Xuanjing Huang, Xipeng Qiu

ELUE is dedicated to depicting the Pareto frontier for various language understanding tasks, so that it can tell whether and by how much a method achieves a Pareto improvement.

Plug-Tagger: A Pluggable Sequence Labeling Framework Using Language Models

no code implementations14 Oct 2021 Xin Zhou, Ruotian Ma, Tao Gui, Yiding Tan, Qi Zhang, Xuanjing Huang

Specifically, for each task, a label word set is first constructed by selecting a high-frequency word for each class respectively, and then, task-specific vectors are inserted into the inputs and optimized to manipulate the model predictions towards the corresponding label words.

Language Modelling Text Generation

Black-Box Tuning for Language-Model-as-a-Service

2 code implementations10 Jan 2022 Tianxiang Sun, Yunfan Shao, Hong Qian, Xuanjing Huang, Xipeng Qiu

In such a scenario, which we call Language-Model-as-a-Service (LMaaS), the gradients of PTMs are usually unavailable.

In-Context Learning Language Modelling
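
As a hedged toy of the gradient-free setting: a low-dimensional vector z is projected through a fixed random matrix into the prompt-embedding space and tuned with a derivative-free optimizer, since only forward calls to the served model are available. The real method uses CMA-ES and an actual PTM; this sketch substitutes a plain random-search loop and a stand-in black-box loss.

```python
import numpy as np

rng = np.random.default_rng(0)
d_low, prompt_len, d_model = 8, 5, 32
A = rng.normal(size=(d_low, prompt_len * d_model))           # fixed random projection

def black_box_loss(prompt_embedding):
    """Stand-in for a forward call to a served PTM that returns a scalar loss."""
    target = np.ones(prompt_len * d_model) * 0.05
    return float(((prompt_embedding - target) ** 2).mean())

z = np.zeros(d_low)
best = black_box_loss(z @ A)
for step in range(200):                                       # simple random search
    candidate = z + rng.normal(scale=0.1, size=d_low)
    loss = black_box_loss(candidate @ A)
    if loss < best:
        z, best = candidate, loss
print(round(best, 4))
```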

Decorrelate Irrelevant, Purify Relevant: Overcome Textual Spurious Correlations from a Feature Perspective

2 code implementations COLING 2022 Shihan Dou, Rui Zheng, Ting Wu, Songyang Gao, Junjie Shan, Qi Zhang, Yueming Wu, Xuanjing Huang

Most of the existing debiasing methods often identify and weaken these samples with biased features (i.e., superficial surface features that cause such spurious correlations).

Fact Verification Natural Language Inference +1

A Simple Hash-Based Early Exiting Approach For Language Understanding and Generation

1 code implementation Findings (ACL) 2022 Tianxiang Sun, Xiangyang Liu, Wei Zhu, Zhichao Geng, Lingling Wu, Yilong He, Yuan Ni, Guotong Xie, Xuanjing Huang, Xipeng Qiu

Previous works usually adopt heuristic metrics such as the entropy of internal outputs to measure instance difficulty, which suffers from generalization and threshold-tuning issues.
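
For context on the entropy heuristic mentioned above, the sketch below shows the conventional entropy-based exit rule that this paper argues against (the proposed hash-based routing itself is not reproduced): compute the entropy of each internal classifier's prediction and stop at the first layer that is confident enough. Dimensions and the threshold are hypothetical.

```python
import torch
import torch.nn.functional as F

def entropy(logits):
    p = F.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-9)).sum(-1)

def early_exit(hidden_states, internal_classifiers, threshold=0.3):
    """hidden_states: per-layer features; internal_classifiers: one head per layer."""
    for layer_idx, (h, clf) in enumerate(zip(hidden_states, internal_classifiers)):
        logits = clf(h)
        if entropy(logits).item() < threshold:    # confident enough: exit here
            return logits, layer_idx
    return logits, layer_idx                      # fall through to the last layer

# Toy usage with random features and linear heads.
layers, dim, classes = 4, 8, 3
heads = [torch.nn.Linear(dim, classes) for _ in range(layers)]
states = [torch.randn(dim) for _ in range(layers)]
logits, exited_at = early_exit(states, heads, threshold=0.5)
print(exited_at, logits.shape)
```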

A Benchmark for Automatic Medical Consultation System: Frameworks, Tasks and Datasets

1 code implementation19 Apr 2022 Wei Chen, Zhiwei Li, Hongyi Fang, Qianyuan Yao, Cheng Zhong, Jianye Hao, Qi Zhang, Xuanjing Huang, Jiajie Peng, Zhongyu Wei

In recent years, interest has arisen in using machine learning to improve the efficiency of automatic medical consultation and enhance patient experience.

Dialogue Act Classification Dialogue Understanding +4

BBTv2: Towards a Gradient-Free Future with Large Language Models

1 code implementation23 May 2022 Tianxiang Sun, Zhengfu He, Hong Qian, Yunhua Zhou, Xuanjing Huang, Xipeng Qiu

By contrast, gradient-free methods only require the forward computation of the PTM to tune the prompt, retaining the benefits of efficient tuning and deployment.

Few-Shot Learning Language Modelling

What Dense Graph Do You Need for Self-Attention?

1 code implementation27 May 2022 Yuxin Wang, Chu-Tak Lee, Qipeng Guo, Zhangyue Yin, Yunhua Zhou, Xuanjing Huang, Xipeng Qiu

Transformers have made progress in miscellaneous tasks, but suffer from quadratic computational and memory complexities.

Miscellaneous

CoNT: Contrastive Neural Text Generation

2 code implementations29 May 2022 Chenxin An, Jiangtao Feng, Kai Lv, Lingpeng Kong, Xipeng Qiu, Xuanjing Huang

We validate CoNT on five generation tasks with ten benchmarks, including machine translation, summarization, code comment generation, data-to-text generation and commonsense generation.

Code Comment Generation Comment Generation +4

Causal Intervention Improves Implicit Sentiment Analysis

no code implementations COLING 2022 Siyin Wang, Jie zhou, Changzhi Sun, Junjie Ye, Tao Gui, Qi Zhang, Xuanjing Huang

In this work, we propose a causal intervention model for Implicit Sentiment Analysis using Instrumental Variable (ISAIV).

Sentence Sentiment Analysis

Locate Then Ask: Interpretable Stepwise Reasoning for Multi-hop Question Answering

1 code implementation COLING 2022 Siyuan Wang, Zhongyu Wei, Zhihao Fan, Qi Zhang, Xuanjing Huang

In this paper, we propose an interpretable stepwise reasoning framework to incorporate both single-hop supporting sentence identification and single-hop question generation at each intermediate step, and utilize the inference of the current hop for the next until reasoning out the final result.

Multi-hop Question Answering Question Answering +3

COLO: A Contrastive Learning based Re-ranking Framework for One-Stage Summarization

1 code implementation COLING 2022 Chenxin An, Ming Zhong, Zhiyong Wu, Qin Zhu, Xuanjing Huang, Xipeng Qiu

Traditional training paradigms for extractive and abstractive summarization systems use only token-level or sentence-level training objectives.

Abstractive Text Summarization Contrastive Learning +2

Kernel-Whitening: Overcome Dataset Bias with Isotropic Sentence Embedding

1 code implementation14 Oct 2022 Songyang Gao, Shihan Dou, Qi Zhang, Xuanjing Huang

Dataset bias has attracted increasing attention recently for its detrimental effect on the generalization ability of fine-tuned models.

Sentence Sentence Embedding +2

Late Prompt Tuning: A Late Prompt Could Be Better Than Many Prompts

1 code implementation20 Oct 2022 Xiangyang Liu, Tianxiang Sun, Xuanjing Huang, Xipeng Qiu

Through extensive experimental results across various tasks and PTMs, we show that LPT can achieve competitive performance to full model tuning and other PETuning methods under both full-data and few-shot scenarios while possessing faster training speed and lower memory cost.

Robust Lottery Tickets for Pre-trained Language Models

2 code implementations ACL 2022 Rui Zheng, Rong Bao, Yuhao Zhou, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang, Xuanjing Huang

Recent works on Lottery Ticket Hypothesis have shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) which are capable of reaching accuracy comparable to the original models.

Adversarial Robustness

Efficient Adversarial Training with Robust Early-Bird Tickets

1 code implementation14 Nov 2022 Zhiheng Xi, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

Adversarial training is one of the most powerful methods to improve the robustness of pre-trained language models (PLMs).

Rethinking Label Smoothing on Multi-hop Question Answering

2 code implementations19 Dec 2022 Zhangyue Yin, Yuxin Wang, Xiannian Hu, Yiguang Wu, Hang Yan, Xinyu Zhang, Zhao Cao, Xuanjing Huang, Xipeng Qiu

Multi-Hop Question Answering (MHQA) is a significant area in question answering, requiring multiple reasoning components, including document retrieval, supporting sentence prediction, and answer span extraction.

Image Classification Machine Reading Comprehension +6
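
For readers unfamiliar with the base technique being rethought here, a generic label-smoothing cross-entropy in PyTorch looks roughly like the following; the paper's own task-specific smoothing for multi-hop QA is more involved and is not reproduced. (Recent PyTorch versions also expose this via F.cross_entropy(..., label_smoothing=eps).)

```python
import torch
import torch.nn.functional as F

def label_smoothing_ce(logits, targets, eps=0.1):
    n_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # Smoothed target: (1 - eps) on the gold class, eps spread over all classes.
    smooth = torch.full_like(log_probs, eps / n_classes)
    smooth.scatter_(1, targets.unsqueeze(1), 1.0 - eps + eps / n_classes)
    return -(smooth * log_probs).sum(-1).mean()

logits = torch.randn(3, 10)
targets = torch.tensor([1, 4, 7])
print(float(label_smoothing_ce(logits, targets)))
```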

Cross-Linguistic Syntactic Difference in Multilingual BERT: How Good is It and How Does It Affect Transfer?

1 code implementation21 Dec 2022 Ningyu Xu, Tao Gui, Ruotian Ma, Qi Zhang, Jingting Ye, Menghan Zhang, Xuanjing Huang

We demonstrate that the distance between the distributions of different languages is highly consistent with the syntactic difference in terms of linguistic formalisms.

Zero-Shot Cross-Lingual Transfer

How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks

no code implementations1 Mar 2023 Xuanting Chen, Junjie Ye, Can Zu, Nuo Xu, Rui Zheng, Minlong Peng, Jie zhou, Tao Gui, Qi Zhang, Xuanjing Huang

The GPT-3.5 models have demonstrated impressive performance in various Natural Language Processing (NLP) tasks, showcasing their strong understanding and reasoning capabilities.

Natural Language Inference Natural Language Understanding +1

A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models

no code implementations18 Mar 2023 Junjie Ye, Xuanting Chen, Nuo Xu, Can Zu, Zekai Shao, Shichun Liu, Yuhan Cui, Zeyang Zhou, Chao Gong, Yang shen, Jie zhou, Siming Chen, Tao Gui, Qi Zhang, Xuanjing Huang

GPT series models, such as GPT-3, CodeX, InstructGPT, ChatGPT, and so on, have gained considerable attention due to their exceptional natural language processing capabilities.

Natural Language Understanding

CausalAPM: Generalizable Literal Disentanglement for NLU Debiasing

no code implementations4 May 2023 Songyang Gao, Shihan Dou, Junjie Shan, Qi Zhang, Xuanjing Huang

Dataset bias, i.e., the over-reliance on dataset-specific literal heuristics, is getting increasing attention for its detrimental effect on the generalization ability of NLU models.

Causal Inference Disentanglement +2

CodeIE: Large Code Generation Models are Better Few-Shot Information Extractors

1 code implementation9 May 2023 Peng Li, Tianxiang Sun, Qiong Tang, Hang Yan, Yuanbin Wu, Xuanjing Huang, Xipeng Qiu

A common practice is to recast the task into a text-to-text format such that generative LLMs of natural language (NL-LLMs) like GPT-3 can be prompted to solve it.

Code Generation Few-Shot Learning +4

Modeling the Q-Diversity in a Min-max Play Game for Robust Optimization

1 code implementation20 May 2023 Ting Wu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang

Models trained with empirical risk minimization (ERM) are revealed to easily rely on spurious correlations, resulting in poor generalization.

Out-of-Distribution Generalization text-classification +1

A Confidence-based Partial Label Learning Model for Crowd-Annotated Named Entity Recognition

1 code implementation21 May 2023 Limao Xiong, Jie zhou, Qunxi Zhu, Xiao Wang, Yuanbin Wu, Qi Zhang, Tao Gui, Xuanjing Huang, Jin Ma, Ying Shan

Particularly, we propose a Confidence-based Partial Label Learning (CPLL) method to integrate the prior confidence (given by annotators) and posterior confidences (learned by models) for crowd-annotated NER.

named-entity-recognition Named Entity Recognition +2

Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement

1 code implementation23 May 2023 Zhiheng Xi, Senjie Jin, Yuhao Zhou, Rui Zheng, Songyang Gao, Tao Gui, Qi Zhang, Xuanjing Huang

To enhance the multi-step reasoning capabilities of large language models, researchers have extensively explored prompting methods, notably the Chain-of-Thought (CoT) method which explicitly elicits human-like rationales.

GSM8K

Query Structure Modeling for Inductive Logical Reasoning Over Knowledge Graphs

1 code implementation23 May 2023 Siyuan Wang, Zhongyu Wei, Meng Han, Zhihao Fan, Haijun Shan, Qi Zhang, Xuanjing Huang

The results demonstrate the effectiveness of our method on logical reasoning over KGs in both inductive and transductive settings.

Knowledge Graphs Logical Reasoning

KNSE: A Knowledge-aware Natural Language Inference Framework for Dialogue Symptom Status Recognition

no code implementations26 May 2023 Wei Chen, Shiqi Wei, Zhongyu Wei, Xuanjing Huang

Symptom diagnosis in medical conversations aims to correctly extract both symptom entities and their status from the doctor-patient dialogue.

Natural Language Inference

Do Large Language Models Know What They Don't Know?

1 code implementation29 May 2023 Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, Xuanjing Huang

Large language models (LLMs) have a wealth of knowledge that allows them to excel in various Natural Language Processing (NLP) tasks.

In-Context Learning

Open Set Relation Extraction via Unknown-Aware Training

1 code implementation8 Jun 2023 Jun Zhao, Xin Zhao, WenYu Zhan, Qi Zhang, Tao Gui, Zhongyu Wei, Yunwen Chen, Xiang Gao, Xuanjing Huang

Inspired by text adversarial attacks, we adaptively apply small but critical perturbations to original training instances and thus synthesize negative instances that are more likely to be mistaken by the model as known relations.

Relation Relation Extraction

From Hypergraph Energy Functions to Hypergraph Neural Networks

1 code implementation16 Jun 2023 Yuxin Wang, Quan Gan, Xipeng Qiu, Xuanjing Huang, David Wipf

Hypergraphs are a powerful abstraction for representing higher-order interactions between entities of interest.

Bilevel Optimization Node Classification

On the Universal Adversarial Perturbations for Efficient Data-free Adversarial Detection

1 code implementation27 Jun 2023 Songyang Gao, Shihan Dou, Qi Zhang, Xuanjing Huang, Jin Ma, Ying Shan

Detecting adversarial samples that are carefully crafted to fool the model is a critical step to socially-secure applications.

text-classification Text Classification

DISC-MedLLM: Bridging General Large Language Models and Real-World Medical Consultation

1 code implementation28 Aug 2023 Zhijie Bao, Wei Chen, Shengze Xiao, Kuang Ren, Jiaao Wu, Cheng Zhong, Jiajie Peng, Xuanjing Huang, Zhongyu Wei

We propose DISC-MedLLM, a comprehensive solution that leverages Large Language Models (LLMs) to provide accurate and truthful medical responses in end-to-end conversational healthcare services.

Knowledge Graphs

DISC-LawLLM: Fine-tuning Large Language Models for Intelligent Legal Services

2 code implementations20 Sep 2023 Shengbin Yue, Wei Chen, Siyuan Wang, Bingxuan Li, Chenchen Shen, Shujun Liu, Yuxuan Zhou, Yao Xiao, Song Yun, Xuanjing Huang, Zhongyu Wei

We propose DISC-LawLLM, an intelligent legal system utilizing large language models (LLMs) to provide a wide range of legal services.

Legal Reasoning Retrieval

Loose lips sink ships: Mitigating Length Bias in Reinforcement Learning from Human Feedback

no code implementations8 Oct 2023 Wei Shen, Rui Zheng, WenYu Zhan, Jun Zhao, Shihan Dou, Tao Gui, Qi Zhang, Xuanjing Huang

Reinforcement learning from human feedback serves as a crucial bridge, aligning large language models with human and societal values.

Language Modelling

SpikeCLIP: A Contrastive Language-Image Pretrained Spiking Neural Network

no code implementations10 Oct 2023 Tianlong Li, Wenhao Liu, Changze Lv, Jianhan Xu, Cenyuan Zhang, Muling Wu, Xiaoqing Zheng, Xuanjing Huang

Spiking neural networks (SNNs) have demonstrated the capability to achieve comparable performance to deep neural networks (DNNs) in both visual and linguistic domains while offering the advantages of improved energy efficiency and adherence to biological plausibility.

Image Classification

RealBehavior: A Framework for Faithfully Characterizing Foundation Models' Human-like Behavior Mechanisms

no code implementations17 Oct 2023 Enyu Zhou, Rui Zheng, Zhiheng Xi, Songyang Gao, Xiaoran Fan, Zichu Fei, Jingting Ye, Tao Gui, Qi Zhang, Xuanjing Huang

Reports of human-like behaviors in foundation models are growing, with psychological theories providing enduring tools to investigate these behaviors.

Are Structural Concepts Universal in Transformer Language Models? Towards Interpretable Cross-Lingual Generalization

1 code implementation19 Oct 2023 Ningyu Xu, Qi Zhang, Jingting Ye, Menghan Zhang, Xuanjing Huang

We then propose a meta-learning-based method to learn to align conceptual spaces of different languages, which facilitates zero-shot and few-shot generalization in concept classification and also offers insights into the cross-lingual in-context learning phenomenon.

In-Context Learning Meta-Learning +1

Orthogonal Subspace Learning for Language Model Continual Learning

1 code implementation22 Oct 2023 Xiao Wang, Tianze Chen, Qiming Ge, Han Xia, Rong Bao, Rui Zheng, Qi Zhang, Tao Gui, Xuanjing Huang

In this paper, we propose orthogonal low-rank adaptation (O-LoRA), a simple and efficient approach for continual learning in language models, effectively mitigating catastrophic forgetting while learning new tasks.

Continual Learning Language Modelling
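
A hedged sketch of the core constraint: when learning a LoRA subspace for a new task, penalize overlap with the (frozen) subspaces of earlier tasks by driving the inner products between their low-rank "A" matrices toward zero. The shapes, weighting term, and placeholder task loss below are illustrative, not the paper's exact configuration.

```python
import torch

def orthogonality_penalty(A_new, A_old_list):
    """Encourage the new task's LoRA subspace to be orthogonal to earlier ones.

    A_new:      (r, d) trainable low-rank matrix for the current task
    A_old_list: list of (r, d) frozen matrices from previous tasks
    """
    penalty = 0.0
    for A_old in A_old_list:
        penalty = penalty + (A_new @ A_old.t()).pow(2).sum()
    return penalty

d, r = 64, 4
A_prev = [torch.randn(r, d) for _ in range(2)]       # frozen, from earlier tasks
A_curr = torch.randn(r, d, requires_grad=True)       # trainable for the new task

task_loss = torch.tensor(0.0)                        # placeholder for the usual LM loss
loss = task_loss + 0.1 * orthogonality_penalty(A_curr, A_prev)
loss.backward()
```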
