1 code implementation • COLING 2022 • Xin Zhou, Ruotian Ma, Yicheng Zou, Xuanting Chen, Tao Gui, Qi Zhang, Xuanjing Huang, Rui Xie, Wei Wu
Specifically, we re-formulate both token and sentence classification tasks into a unified language modeling task, and map the label spaces of different tasks into the same vocabulary space.
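A minimal sketch of the label-word mapping this describes, with a toy vocabulary and hypothetical verbalizers (the paper's actual templates and label words may differ):

```python
import torch

# Toy shared vocabulary; indices are illustrative.
vocab = {"person": 0, "location": 1, "none": 2, "good": 3, "bad": 4}

# Each task's label space is mapped into the same vocabulary space.
ner_verbalizer = {"PER": vocab["person"], "LOC": vocab["location"], "O": vocab["none"]}
sent_verbalizer = {"POS": vocab["good"], "NEG": vocab["bad"]}

def predict(mask_logits: torch.Tensor, verbalizer: dict) -> str:
    """Pick the label whose label word scores highest at the [MASK] position."""
    labels = list(verbalizer)
    scores = mask_logits[[verbalizer[l] for l in labels]]
    return labels[int(scores.argmax())]

mask_logits = torch.randn(len(vocab))  # stand-in for an MLM head's output
print(predict(mask_logits, ner_verbalizer), predict(mask_logits, sent_verbalizer))
```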
1 code implementation • ACL 2022 • Zichu Fei, Qi Zhang, Tao Gui, Di Liang, Sirui Wang, Wei Wu, Xuanjing Huang
CQG employs a simple method to generate multi-hop questions that contain key entities in multi-hop reasoning chains, which ensures the complexity and quality of the questions.
no code implementations • COLING 2022 • Jun Zhao, Xin Zhao, WenYu Zhan, Tao Gui, Qi Zhang, Liang Qiao, Zhanzhan Cheng, ShiLiang Pu
To address this problem, this work proposes a cross-document semantic enhancement method consisting of two modules: 1) to prevent distraction by irrelevant regions in the current document, we design a learnable attention mask mechanism that adaptively filters redundant information in the current document.
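One plausible reading of such a learnable attention mask, sketched in PyTorch; the module name, gating form, and shapes are assumptions for illustration, not the paper's code:

```python
import torch
import torch.nn as nn

class LearnableAttentionMask(nn.Module):
    """Score each region with a learned gate and down-weight redundant
    regions before attending over the document."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, 1)

    def forward(self, query: torch.Tensor, regions: torch.Tensor) -> torch.Tensor:
        # regions: (n_regions, dim); query: (dim,)
        keep = torch.sigmoid(self.gate(regions)).squeeze(-1)  # soft mask in (0, 1)
        scores = regions @ query / regions.size(-1) ** 0.5    # scaled dot product
        attn = torch.softmax(scores + torch.log(keep + 1e-9), dim=0)
        return attn @ regions                                 # filtered context vector

context = LearnableAttentionMask(64)(torch.randn(64), torch.randn(10, 64))
```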
no code implementations • COLING 2022 • Rui Zheng, Rong Bao, Qin Liu, Tao Gui, Qi Zhang, Xuanjing Huang, Rui Xie, Wei Wu
To reduce the potential side effects of using defense modules, we further propose a novel forgetting-restricted adversarial training, which filters out bad adversarial examples that impair performance on the original ones.
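The filtering criterion is not spelled out here; as one illustrative assumption, an adversarial batch could be dropped when its gradient conflicts with the clean batch's gradient, as in the hypothetical rule below (not the paper's actual rule):

```python
import torch

def keep_adversarial(model, loss_fn, clean_batch, adv_batch) -> bool:
    """Keep an adversarial batch only if its gradient does not oppose the
    clean batch's gradient (opposing gradients would hurt clean accuracy).
    Assumes every parameter receives a gradient."""
    def flat_grad(batch):
        model.zero_grad()
        loss_fn(model(batch["x"]), batch["y"]).backward()
        return torch.cat([p.grad.flatten() for p in model.parameters()])

    g_clean, g_adv = flat_grad(clean_batch), flat_grad(adv_batch)
    return bool(torch.cosine_similarity(g_clean, g_adv, dim=0) > 0.0)
```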
no code implementations • COLING 2022 • Zichu Fei, Xin Zhou, Tao Gui, Qi Zhang, Xuanjing Huang
Existing KBQG models still face two main challenges: (1) Most models often focus on the most relevant part of the answer entity, while neglecting the rest of the subgraph.
1 code implementation • ACL 2022 • Qin Liu, Rui Zheng, Bao Rong, Jingyi Liu, Zhihua Liu, Zhanzhan Cheng, Liang Qiao, Tao Gui, Qi Zhang, Xuanjing Huang
Adversarial robustness has attracted much attention recently, and the mainstream solution is adversarial training.
1 code implementation • 23 May 2023 • Zhiheng Xi, Senjie Jin, Yuhao Zhou, Rui Zheng, Songyang Gao, Tao Gui, Qi Zhang, Xuanjing Huang
For example, with Text-davinci-003, our method boosts the performance of standard few-shot prompting by $8.0\%$ on GSM8K and $17.8\%$ on MultiArith; it also improves the performance of CoT by $6.0\%$ on GSM8K and $6.0\%$ on MathQA, respectively.
no code implementations • 22 May 2023 • Xiao Wang, Weikang Zhou, Qi Zhang, Jie Zhou, Songyang Gao, Junzhe Wang, Menghan Zhang, Xiang Gao, Yunwen Chen, Tao Gui
Pretrained language models have achieved remarkable success in various natural language processing tasks.
no code implementations • 21 May 2023 • Limao Xiong, Jie Zhou, Qunxi Zhu, Xiao Wang, Yuanbin Wu, Qi Zhang, Tao Gui, Xuanjing Huang, Jin Ma, Ying Shan
Particularly, we propose a Confidence-based Partial Label Learning (CPLL) method to integrate the prior confidence (given by annotators) and posterior confidences (learned by models) for crowd-annotated NER.
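A minimal sketch of mixing the two confidence sources into soft supervision weights; the convex-combination rule and names are illustrative, not CPLL's exact formulation:

```python
import torch

def combined_confidence(prior: torch.Tensor, posterior: torch.Tensor,
                        alpha: float = 0.5) -> torch.Tensor:
    """prior / posterior: (n_candidate_labels,) confidences for one token.
    Returns normalized weights for softly supervising the candidate labels."""
    mixed = alpha * prior + (1.0 - alpha) * posterior  # simple convex combination
    return mixed / mixed.sum()

# Annotators favor label 0, the model favors label 1; the mix hedges between them.
weights = combined_confidence(torch.tensor([0.6, 0.3, 0.1]),
                              torch.tensor([0.2, 0.7, 0.1]))
```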
1 code implementation • 20 May 2023 • Ting Wu, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang
Models trained with empirical risk minimization (ERM) have been shown to rely easily on spurious correlations, resulting in poor generalization.
no code implementations • 11 May 2023 • Ting Wu, Jingyi Liu, Rui Zheng, Qi Zhang, Tao Gui, Xuanjing Huang
The principle of continual relation extraction (CRE) involves adapting to emerging novel relations while preserving old knowledge.
1 code implementation • 17 Apr 2023 • Xiao Wang, Weikang Zhou, Can Zu, Han Xia, Tianze Chen, Yuansen Zhang, Rui Zheng, Junjie Ye, Qi Zhang, Tao Gui, Jihua Kang, Jingsheng Yang, Siyuan Li, Chunsai Du
Large language models have unlocked strong multi-task capabilities from reading instructive prompts.
no code implementations • 18 Mar 2023 • Junjie Ye, Xuanting Chen, Nuo Xu, Can Zu, Zekai Shao, Shichun Liu, Yuhan Cui, Zeyang Zhou, Chao Gong, Yang Shen, Jie Zhou, Siming Chen, Tao Gui, Qi Zhang, Xuanjing Huang
GPT series models, such as GPT-3, Codex, InstructGPT, ChatGPT, and so on, have gained considerable attention due to their exceptional natural language processing capabilities.
no code implementations • 1 Mar 2023 • Xuanting Chen, Junjie Ye, Can Zu, Nuo Xu, Rui Zheng, Minlong Peng, Jie Zhou, Tao Gui, Qi Zhang, Xuanjing Huang
The GPT-3.5 models have demonstrated impressive performance in various Natural Language Processing (NLP) tasks, showcasing their strong understanding and reasoning capabilities.
Natural Language Inference • Natural Language Understanding
no code implementations • CVPR 2023 • Yixuan Sun, Dongyang Zhao, Zhangyue Yin, Yiwen Huang, Tao Gui, Wenqiang Zhang, Weifeng Ge
The asymmetric feature learning module exploits a biased cross-attention mechanism to encode token features of source images with their target counterparts.
1 code implementation • 21 Dec 2022 • Ningyu Xu, Tao Gui, Ruotian Ma, Qi Zhang, Jingting Ye, Menghan Zhang, Xuanjing Huang
We demonstrate that the distance between the distributions of different languages is highly consistent with the syntactic difference in terms of linguistic formalisms.
1 code implementation • 14 Nov 2022 • Zhiheng Xi, Rui Zheng, Tao Gui, Qi Zhang, Xuanjing Huang
Adversarial training is one of the most powerful methods to improve the robustness of pre-trained language models (PLMs).
1 code implementation • 14 Nov 2022 • Yicheng Zou, Kaitao Song, Xu Tan, Zhongkai Fu, Qi Zhang, Dongsheng Li, Tao Gui
By analyzing this dataset, we find that a large improvement in summarization quality can be achieved by providing ground-truth omission labels that let the summarization model recover the omitted information, which demonstrates the importance of omission detection for omission mitigation in dialogue summarization.
1 code implementation • ACL 2022 • Rui Zheng, Rong Bao, Yuhao Zhou, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang, Xuanjing Huang
Recent works on the Lottery Ticket Hypothesis have shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) that are capable of reaching accuracy comparable to that of the original models.
no code implementations • 10 Oct 2022 • Ruotian Ma, Xuanting Chen, Lin Zhang, Tao Gui, Qi Zhang, Xuanjing Huang
As the categories of named entities rapidly increase in real-world applications, class-incremental learning for NER, which continually learns new entity classes while retaining old knowledge, is in demand.
1 code implementation • COLING 2022 • Ting Wu, Tao Gui
By delving into a lower-dimensional manifold to remove redundancies, RISK reveals that an extremely low-dimensional subspace containing the intended features can robustly represent the highly biased dataset.
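As a rough illustration of representing data in an extremely low-dimensional subspace, here is an SVD-based projection; the dimension k and the selection criterion are placeholders, not RISK's procedure:

```python
import torch

feats = torch.randn(1000, 768)                # (n_examples, hidden_dim)
centered = feats - feats.mean(dim=0)
U, S, Vh = torch.linalg.svd(centered, full_matrices=False)
k = 16                                        # extremely low-dimensional subspace
projected = centered @ Vh[:k].T               # (n_examples, k) representation
```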
no code implementations • COLING 2022 • Siyin Wang, Jie Zhou, Changzhi Sun, Junjie Ye, Tao Gui, Qi Zhang, Xuanjing Huang
In this work, we propose a causal intervention model for Implicit Sentiment Analysis using Instrumental Variable (ISAIV).
1 code implementation • 7 Jun 2022 • Ruotian Ma, Yiding Tan, Xin Zhou, Xuanting Chen, Di Liang, Sirui Wang, Wei Wu, Tao Gui, Qi Zhang
Input distribution shift is one of the vital problems in unsupervised domain adaptation (UDA).
1 code implementation • ACL 2022 • Xiao Wang, Shihan Dou, Limao Xiong, Yicheng Zou, Qi Zhang, Tao Gui, Liang Qiao, Zhanzhan Cheng, Xuanjing Huang
NER models have achieved promising performance on standard NER benchmarks.
Ranked #8 on Named Entity Recognition (NER) on WNUT 2017
1 code implementation • Findings (ACL) 2022 • Yicheng Zou, Hongwei Liu, Tao Gui, Junzhe Wang, Qi Zhang, Meng Tang, Haixiang Li, Daniel Wang
Text semantic matching is a fundamental task that has been widely used in various scenarios, such as community question answering, information retrieval, and recommendation.
no code implementations • 14 Oct 2021 • Xin Zhou, Ruotian Ma, Tao Gui, Yiding Tan, Qi Zhang, Xuanjing Huang
Specifically, for each task, a label word set is first constructed by selecting a high-frequency word for each class; then, task-specific vectors are inserted into the inputs and optimized to steer the model predictions toward the corresponding label words.
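A compact sketch of the two ingredients described above: frequency-based label words plus trainable vectors prepended to a frozen model's input embeddings. Shapes and names are illustrative:

```python
import torch
import torch.nn as nn

label_words = {"POS": "great", "NEG": "terrible"}  # one high-frequency word per class

class TaskVectors(nn.Module):
    """Trainable task-specific vectors; only these are optimized, steering a
    frozen LM's predictions toward the label words."""
    def __init__(self, n_vectors: int, dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_vectors, dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, dim) -> (batch, n_vectors + seq_len, dim)
        prompt = self.prompt.expand(input_embeds.size(0), -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

augmented = TaskVectors(8, 768)(torch.randn(4, 32, 768))  # (4, 40, 768)
```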
1 code implementation • NAACL 2022 • Ruotian Ma, Xin Zhou, Tao Gui, Yiding Tan, Linyang Li, Qi Zhang, Xuanjing Huang
Prompt-based methods have been successfully applied in sentence-level few-shot learning tasks, mostly owing to the sophisticated design of templates and label words.
1 code implementation • EMNLP 2021 • Jun Zhao, Tao Gui, Qi Zhang, Yaqian Zhou
Clustering-based unsupervised relation discovery has gradually become one of the important methods for open relation extraction (OpenRE).
1 code implementation • EMNLP 2021 • Jiacheng Ye, Ruijian Cai, Tao Gui, Qi Zhang
The encoder-decoder framework achieves state-of-the-art results in keyphrase generation (KG) tasks by predicting both present keyphrases that appear in the source document and absent keyphrases that do not.
1 code implementation • EMNLP 2021 • Yicheng Zou, Bolin Zhu, Xingwu Hu, Tao Gui, Qi Zhang
With the rapid increase in the volume of dialogue data from daily life, there is a growing demand for dialogue summarization.
1 code implementation • ACL 2021 • Ruotian Ma, Tao Gui, Linyang Li, Qi Zhang, Yaqian Zhou, Xuanjing Huang
In this work, we propose the use of negative training (NT), in which a model is trained using complementary labels, i.e., "the instance does not belong to these complementary labels".
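A minimal sketch of a negative-training loss: rather than maximizing the probability of a possibly noisy label, it minimizes the probability of a sampled complementary label (the sampling procedure is omitted here):

```python
import torch

def negative_training_loss(logits: torch.Tensor,
                           comp_labels: torch.Tensor) -> torch.Tensor:
    """logits: (batch, n_classes); comp_labels: (batch,) labels that each
    instance does NOT belong to."""
    probs = torch.softmax(logits, dim=-1)
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp + 1e-9).mean()  # push p(complementary) toward 0

loss = negative_training_loss(torch.randn(8, 5), torch.randint(0, 5, (8,)))
```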
1 code implementation • ACL 2021 • Hang Yan, Tao Gui, Junqi Dai, Qipeng Guo, Zheng Zhang, Xipeng Qiu
To that end, we propose to formulate the NER subtasks as an entity span sequence generation task, which can be solved by a unified sequence-to-sequence (Seq2Seq) framework.
Ranked #9 on Nested Named Entity Recognition on GENIA
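A sketch of the target linearization only: each entity becomes a (start, end, type) triple in one flat pointer sequence that a Seq2Seq decoder can emit. Index conventions are illustrative:

```python
tokens = ["Steve", "Jobs", "founded", "Apple", "."]
entities = [(0, 1, "PER"), (3, 3, "ORG")]  # inclusive token spans

def linearize(entities, type_offset=100):
    """Flatten spans into one pointer sequence; entity types get ids beyond
    the position range so a single output vocabulary covers both."""
    type_ids = {"PER": type_offset, "ORG": type_offset + 1}
    seq = []
    for start, end, etype in entities:
        seq += [start, end, type_ids[etype]]
    return seq

print(linearize(entities))  # [0, 1, 100, 3, 3, 101]
```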
1 code implementation • ACL 2021 • Jiacheng Ye, Tao Gui, Yichao Luo, Yige Xu, Qi Zhang
In this work, we propose a new training paradigm, One2Set, which does not predefine an order in which to concatenate the keyphrases.
1 code implementation • ACL 2021 • Tao Gui, Xiao Wang, Qi Zhang, Qin Liu, Yicheng Zou, Xin Zhou, Rui Zheng, Chong Zhang, Qinzhuo Wu, Jiacheng Ye, Zexiong Pang, Yongxin Zhang, Zhengyan Li, Ruotian Ma, Zichu Fei, Ruijian Cai, Jun Zhao, Xingwu Hu, Zhiheng Yan, Yiding Tan, Yuan Hu, Qiyuan Bian, Zhihua Liu, Bolin Zhu, Shan Qin, Xiaoyu Xing, Jinlan Fu, Yue Zhang, Minlong Peng, Xiaoqing Zheng, Yaqian Zhou, Zhongyu Wei, Xipeng Qiu, Xuanjing Huang
To guarantee user acceptability, all the text transformations are linguistically based, and we provide a human evaluation for each one.
1 code implementation • EMNLP 2020 • Tao Gui, Jiacheng Ye, Qi Zhang, Zhengyan Li, Zichu Fei, Yeyun Gong, Xuanjing Huang
Conditional random fields (CRFs) for label decoding have become ubiquitous in sequence labeling tasks.
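For reference, CRF label decoding amounts to a Viterbi search over emission and transition scores; this is a stand-in for the setting the paper revisits, not its proposed method:

```python
import torch

def viterbi(emissions: torch.Tensor, transitions: torch.Tensor) -> list:
    """emissions: (seq_len, n_tags); transitions: (n_tags, n_tags).
    Returns the highest-scoring tag sequence."""
    seq_len, _ = emissions.shape
    score = emissions[0].clone()
    backptr = []
    for t in range(1, seq_len):
        # total[i, j] = best score arriving at tag j from tag i at step t
        total = score.unsqueeze(1) + transitions + emissions[t].unsqueeze(0)
        score, idx = total.max(dim=0)
        backptr.append(idx)
    best = [int(score.argmax())]
    for idx in reversed(backptr):
        best.append(int(idx[best[-1]]))
    return best[::-1]

tags = viterbi(torch.randn(6, 4), torch.randn(4, 4))
```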
1 code implementation • 18 Nov 2019 • Tao Gui, Lizhi Qing, Qi Zhang, Jiacheng Ye, Hang Yan, Zichu Fei, Xuanjing Huang
To effectively reduce the impact of non-ideal auxiliary tasks on the main task, we further propose a novel meta-learning-based multi-task learning approach that trains the shared hidden layers on auxiliary tasks, while the meta-optimization objective is to minimize the loss on the main task, ensuring that the optimization direction leads to an improvement on the main task.
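A condensed sketch of that meta-objective, assuming a recent PyTorch (torch.func) and a single shared module; the function names and one-step virtual update are illustrative:

```python
import torch
from torch.func import functional_call

def meta_loss(shared, aux_batch, main_batch, loss_fn, lr=1e-3):
    """Virtually update the shared layers on an auxiliary task, then score
    that update by the main-task loss (the meta-optimization objective)."""
    params = dict(shared.named_parameters())
    aux_out = functional_call(shared, params, aux_batch["x"])
    grads = torch.autograd.grad(loss_fn(aux_out, aux_batch["y"]),
                                tuple(params.values()), create_graph=True)
    updated = {n: p - lr * g for (n, p), g in zip(params.items(), grads)}
    main_out = functional_call(shared, updated, main_batch["x"])
    return loss_fn(main_out, main_batch["y"])  # backprop reaches pre-update params
```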
no code implementations • IJCNLP 2019 • Tao Gui, Yicheng Zou, Qi Zhang, Minlong Peng, Jinlan Fu, Zhongyu Wei, Xuanjing Huang
Recurrent neural networks (RNNs) that sequentially track character and word information have achieved great success in Chinese named entity recognition (NER).
Ranked #13 on Chinese Named Entity Recognition on OntoNotes 4
1 code implementation • 29 May 2019 • Minlong Peng, Qi Zhang, Xiaoyu Xing, Tao Gui, Jinlan Fu, Xuanjing Huang
However, representations of unseen or rare words learned on the end task are usually too poor to achieve appreciable performance.
no code implementations • 19 Dec 2018 • Jingjing Gong, Xinchi Chen, Tao Gui, Xipeng Qiu
With these auto-switched LSTMs, our model provides a more flexible solution for multi-criteria CWS and also makes it easy to transfer the learned knowledge to new criteria.
1 code implementation • 9 Nov 2018 • Tao Gui, Qi Zhang, Lujun Zhao, Yaosong Lin, Minlong Peng, Jingjing Gong, Xuanjing Huang
In recent years, long short-term memory (LSTM) has been successfully used to model sequential data of variable length.
Ranked #27 on Sentiment Analysis on IMDb
no code implementations • EMNLP 2018 • Tao Gui, Qi Zhang, Jingjing Gong, Minlong Peng, Di Liang, Keyu Ding, Xuanjing Huang
However, from a linguistic perspective, Twitter users not only tend to mimic the formal expressions of traditional media, like news, but they also appear to be developing linguistically informal styles.
Ranked #2 on Part-Of-Speech Tagging on Ritter
no code implementations • COLING 2018 • Yicheng Zou, Tao Gui, Qi Zhang, Xuanjing Huang
Attention mechanisms have been leveraged for sentiment classification tasks because not all words have the same importance.
no code implementations • EMNLP 2017 • Tao Gui, Qi Zhang, Haoran Huang, Minlong Peng, Xuanjing Huang
In this work, we study the problem of part-of-speech tagging for Tweets.
Ranked #3 on Part-Of-Speech Tagging on Ritter