1 code implementation • COLING 2022 • Zhongyuan Wang, YiXuan Wang, Shaolei Wang, Wanxiang Che
Supervised methods have achieved remarkable results in disfluency detection.
1 code implementation • COLING 2022 • Yuxuan Wang, Zhilin Lei, Yuqiu Ji, Wanxiang Che
Annotation conversion is an effective way to construct datasets under new annotation guidelines based on existing datasets with little human labour.
1 code implementation • COLING 2022 • Baoxin Wang, Xingyi Duan, Dayong Wu, Wanxiang Che, Zhigang Chen, Guoping Hu
Chinese text correction (CTC) focuses on detecting and correcting Chinese spelling and grammatical errors.
1 code implementation • COLING 2022 • Libo Qin, Qiguang Chen, Tianbao Xie, Qian Liu, Shijue Huang, Wanxiang Che, Zhou Yu
Consistency identification in task-oriented dialog (CI-ToD) usually consists of three subtasks, aiming to identify inconsistencies between the current system response and the current user response, the dialog history, and the corresponding knowledge base.
1 code implementation • ACL 2022 • Libo Qin, Qiguang Chen, Tianbao Xie, Qixin Li, Jian-Guang Lou, Wanxiang Che, Min-Yen Kan
Specifically, we employ contrastive learning, leveraging bilingual dictionaries to construct multilingual views of the same utterance, then encourage their representations to be more similar than negative example pairs, which explicitly aligns representations of similar sentences across languages.
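As a rough illustration of this idea (a minimal sketch, not the paper's actual GL-CLeF implementation), the code below builds a multilingual "view" of an utterance via a bilingual dictionary and scores it with an InfoNCE-style contrastive loss; the dictionary, embeddings, and temperature are all illustrative assumptions:

```python
import numpy as np

def code_switch(tokens, bilingual_dict, rng):
    # Replace each word with a dictionary translation (when one exists)
    # to build a multilingual "view" of the same utterance.
    return [rng.choice(bilingual_dict[t]) if t in bilingual_dict else t
            for t in tokens]

def info_nce(anchor, positive, negatives, temperature=0.1):
    # Contrastive loss: pull the multilingual view of the same sentence
    # closer to the anchor than embeddings of unrelated sentences.
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    logits = np.array([cos(anchor, positive)] +
                      [cos(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # positive sits at index 0
```

In practice the anchor and views would come from a multilingual encoder; here plain vectors stand in for sentence embeddings.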
1 code implementation • EMNLP 2021 • Libo Qin, Tianbao Xie, Shijue Huang, Qiguang Chen, Xiao Xu, Wanxiang Che
Consistency Identification has achieved remarkable success in open-domain dialogue and can be used to prevent inconsistent response generation.
1 code implementation • 17 May 2023 • Libo Qin, Qiguang Chen, Xiao Xu, Yunlong Feng, Wanxiang Che
Spoken Language Understanding (SLU) is one of the core components of a task-oriented dialogue system, which aims to extract the semantic meaning of user queries (e.g., intents and slots).
no code implementations • 9 May 2023 • Bo Sun, Baoxin Wang, YiXuan Wang, Wanxiang Che, Dayong Wu, Shijin Wang, Ting Liu
Our experiments show that powerful pre-trained models perform poorly on this corpus.
no code implementations • 5 May 2023 • Yuanxing Liu, Weinan Zhang, Baohua Dong, Yan Fan, Hang Wang, Fan Feng, Yifan Chen, Ziyu Zhuang, Hengbin Cui, Yongbin Li, Wanxiang Che
In this paper, we construct a user needs-centric E-commerce conversational recommendation dataset (U-NEED) from real-world E-commerce scenarios.
no code implementations • 27 Apr 2023 • Dingzirui Wang, Longxu Dou, Wanxiang Che
In this paper, we introduce ConDA, which generates interactive questions and corresponding SQL results.
no code implementations • 19 Apr 2023 • Bohan Li, Longxu Dou, Yutai Hou, Yunlong Feng, Honglin Mu, Wanxiang Che
Prompt-based learning reformulates downstream tasks as cloze problems by combining the original input with a template.
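A minimal sketch of the cloze reformulation described above; the template, mask token, and verbalizer words are illustrative assumptions, not taken from the paper:

```python
def to_cloze(text, template="{input} Overall, it was [MASK]."):
    # Reformulate a classification input as a cloze problem by wrapping it
    # in a template; a masked language model then scores verbalizer words
    # at the [MASK] position instead of using a task-specific head.
    return template.format(input=text)

# Label -> word mapping the masked LM scores at the [MASK] position.
verbalizer = {"positive": "great", "negative": "terrible"}

prompt = to_cloze("The movie was a delight.")
```

The class whose verbalizer word receives the highest [MASK] probability becomes the prediction.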
no code implementations • 18 Apr 2023 • Yunlong Feng, Bohan Li, Libo Qin, Xiao Xu, Wanxiang Che
Cross-domain text classification aims to adapt models to a target domain that lacks labeled data.
no code implementations • 9 Apr 2023 • Wenbo Pan, Qiguang Chen, Xiao Xu, Wanxiang Che, Libo Qin
Zero-shot dialogue understanding aims to enable dialogue systems to track the user's needs without any training data, and has gained increasing attention.
no code implementations • 4 Feb 2023 • Bohan Li, Xinghao Wang, Xiao Xu, Yutai Hou, Yunlong Feng, Feng Wang, Wanxiang Che
Image augmentation is a common mechanism to alleviate data scarcity in computer vision.
no code implementations • 5 Jan 2023 • Bo Zheng, Zhouyang Li, Fuxuan Wei, Qiguang Chen, Libo Qin, Wanxiang Che
Multilingual spoken language understanding (SLU) consists of two sub-tasks, namely intent detection and slot filling.
1 code implementation • 3 Jan 2023 • Longxu Dou, Yan Gao, Xuqi Liu, Mingyang Pan, Dingzirui Wang, Wanxiang Che, Dechen Zhan, Min-Yen Kan, Jian-Guang Lou
In this paper, we study the problem of knowledge-intensive text-to-SQL, in which domain knowledge is necessary to parse expert questions into SQL queries over domain-specific tables.
1 code implementation • 27 Dec 2022 • Longxu Dou, Yan Gao, Mingyang Pan, Dingzirui Wang, Wanxiang Che, Dechen Zhan, Jian-Guang Lou
Text-to-SQL semantic parsing is an important NLP task that greatly facilitates interaction between users and databases and is a key component of many human-computer interaction systems.
no code implementations • 27 Dec 2022 • Dingzirui Wang, Longxu Dou, Wanxiang Che
Table-and-text hybrid question answering (HybridQA) is a widely used and challenging NLP task commonly applied in the financial and scientific domains.
no code implementations • 12 Dec 2022 • Qingfu Zhu, Xianzhen Luo, Fang Liu, Cuiyun Gao, Wanxiang Che
Natural language processing for programming, which aims to use NLP techniques to assist programming, has experienced an explosion in recent years.
1 code implementation • 10 Nov 2022 • Yiming Cui, Wanxiang Che, Shijin Wang, Ting Liu
We propose LERT, a pre-trained language model that is trained on three types of linguistic features along with the original MLM pre-training task, using a linguistically-informed pre-training (LIP) strategy.
1 code implementation • COLING 2022 • Yutai Hou, Hongyuan Dong, Xinghao Wang, Bohan Li, Wanxiang Che
Prompting methods are regarded as one of the crucial advances in few-shot natural language processing.
1 code implementation • 11 Aug 2022 • Honghong Zhao, Baoxin Wang, Dayong Wu, Wanxiang Che, Zhigang Chen, Shijin Wang
In this paper, we present an overview of the CTC 2021, a Chinese text correction task for native speakers.
1 code implementation • 17 Jun 2022 • Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan
Vision-Language (VL) models with the Two-Tower architecture have dominated visual-language representation learning in recent years.
no code implementations • 25 May 2022 • Yang Xu, Yutai Hou, Wanxiang Che
Model editing aims to make post-hoc updates on specific facts in a model while leaving irrelevant knowledge unchanged.
1 code implementation • 18 Apr 2022 • Libo Qin, Qiguang Chen, Tianbao Xie, Qixin Li, Jian-Guang Lou, Wanxiang Che, Min-Yen Kan
We present Global--Local Contrastive Learning Framework (GL-CLeF) to address this shortcoming.
no code implementations • 15 Apr 2022 • Bo Sun, Baoxin Wang, Wanxiang Che, Dayong Wu, Zhigang Chen, Ting Liu
These errors have been studied extensively and are relatively simple for humans.
1 code implementation • Findings (ACL) 2022 • Yutai Hou, Cheng Chen, Xianzhen Luo, Bohan Li, Wanxiang Che
Such inverse prompting only requires a one-turn prediction for each slot type and greatly speeds up the prediction.
1 code implementation • 15 Mar 2022 • Longxu Dou, Yan Gao, Mingyang Pan, Dingzirui Wang, Wanxiang Che, Dechen Zhan, Jian-Guang Lou
Existing text-to-SQL semantic parsers are typically designed for particular settings such as handling queries that span multiple tables, domains or turns which makes them ineffective when applied to different settings.
no code implementations • 10 Feb 2022 • Baoxin Wang, Qingye Meng, Ziyue Wang, Honghong Zhao, Dayong Wu, Wanxiang Che, Shijin Wang, Zhigang Chen, Cong Liu
Knowledge graph embedding (KGE) models learn the representation of entities and relations in knowledge graphs.
Ranked #3 on Link Property Prediction on ogbl-wikikg2
1 code implementation • 20 Jan 2022 • Zhen Yu, Xiaosen Wang, Wanxiang Che, Kun He
Existing textual adversarial attacks usually utilize the gradient or prediction confidence to generate adversarial examples, making them hard to deploy in real-world applications.
1 code implementation • 22 Dec 2021 • Xiao Xu, Libo Qin, Kaiji Chen, Guoxing Wu, Linlin Li, Wanxiang Che
Current research on spoken language understanding (SLU) is largely limited to a simple setting: plain text-based SLU, which takes the user utterance as input and generates its corresponding semantic frames (e.g., intent and slots).
2 code implementations • 6 Dec 2021 • Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang
Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.
1 code implementation • 5 Oct 2021 • Bohan Li, Yutai Hou, Wanxiang Che
One of the main focuses of the DA methods is to improve the diversity of training data, thereby helping the model to better generalize to unseen testing data.
no code implementations • 27 Sep 2021 • Yutai Hou, Yingce Xia, Lijun Wu, Shufang Xie, Yang Fan, Jinhua Zhu, Wanxiang Che, Tao Qin, Tie-Yan Liu
We regard the DTI triplets as a sequence and use a Transformer-based model to directly generate them without using the detailed annotations of entities and relations.
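The triplet-as-sequence idea can be sketched as a simple linearization step; the separator tokens below are assumptions for illustration, not the paper's actual output vocabulary:

```python
def linearize_triplets(triplets):
    # Serialize (drug, interaction, target) triplets into a single token
    # sequence so a seq2seq Transformer can generate them directly,
    # without separate entity and relation annotations.
    return " ; ".join(f"{d} | {r} | {t}" for d, r, t in triplets)

def parse_triplets(sequence):
    # Invert the linearization to recover structured triplets.
    return [tuple(part.strip() for part in item.split("|"))
            for item in sequence.split(";")]
```

Training then reduces to standard sequence-to-sequence learning over these linearized targets.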
1 code implementation • 23 Sep 2021 • Libo Qin, Tianbao Xie, Shijue Huang, Qiguang Chen, Xiao Xu, Wanxiang Che
Consistency Identification has achieved remarkable success in open-domain dialogue and can be used to prevent inconsistent response generation.
1 code implementation • EMNLP 2021 • Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che
In this paper, we provide a bilingual parallel human-to-human recommendation dialog dataset (DuRecDial 2.0) to enable researchers to explore a challenging task of multilingual and cross-lingual conversational recommendation.
1 code implementation • EMNLP 2021 • Bo Zheng, Li Dong, Shaohan Huang, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei
We find that many languages are under-represented in recent cross-lingual language models due to the limited vocabulary capacity.
no code implementations • 26 Aug 2021 • Yiming Cui, Wei-Nan Zhang, Wanxiang Che, Ting Liu, Zhigang Chen, Shijin Wang
Achieving human-level performance on some of the Machine Reading Comprehension (MRC) datasets is no longer challenging with the help of powerful Pre-trained Language Models (PLMs).
no code implementations • ACL 2021 • Jun Xu, Zeyang Lei, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che
Learning discrete dialog structure graph from human-human dialogs yields basic insights into the structure of conversation, and also provides background knowledge to facilitate dialog generation.
1 code implementation • ACL 2021 • Bo Zheng, Li Dong, Shaohan Huang, Wenhui Wang, Zewen Chi, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, Furu Wei
Fine-tuning pre-trained cross-lingual language models can transfer task-specific supervision from one language to the others.
no code implementations • Joint Conference on Lexical and Computational Semantics 2021 • Ziqing Yang, Yiming Cui, Chenglei Si, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
Adversarial training (AT) as a regularization method has proved its effectiveness on various tasks.
1 code implementation • ACL 2021 • Libo Qin, Fuxuan Wei, Tianbao Xie, Xiao Xu, Wanxiang Che, Ting Liu
Multi-intent SLU can handle multiple intents in an utterance, which has attracted increasing attention.
Ranked #1 on Semantic Frame Parsing on MixATIS
1 code implementation • EMNLP (MRQA) 2021 • Ziqing Yang, Wentao Ma, Yiming Cui, Jiani Ye, Wanxiang Che, Shijin Wang
Multilingual pre-trained models have achieved remarkable performance on cross-lingual transfer learning.
no code implementations • Findings (ACL) 2021 • Yutai Hou, Yongkui Lai, Cheng Chen, Wanxiang Che, Ting Liu
However, dialogue language understanding contains two closely related tasks, i.e., intent detection and slot filling, and often benefits from jointly learning the two tasks.
1 code implementation • 10 May 2021 • Yiming Cui, Ting Liu, Wanxiang Che, Zhigang Chen, Shijin Wang
Achieving human-level performance on some of the Machine Reading Comprehension (MRC) datasets is no longer challenging with the help of powerful Pre-trained Language Models (PLMs).
Ranked #1 on Span-Extraction MRC on ExpMRC - SQuAD (test)
1 code implementation • 4 Mar 2021 • Libo Qin, Tianbao Xie, Wanxiang Che, Ting Liu
Spoken Language Understanding (SLU), a core component of task-oriented dialog systems, aims to extract the semantic frame of user queries.
no code implementations • 31 Dec 2020 • Jun Xu, Zeyang Lei, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu
Learning interpretable dialog structure from human-human dialogs yields basic insights into the structure of conversation, and also provides background knowledge to facilitate dialog generation.
5 code implementations • ACL 2021 • Yang Xu, Yiheng Xu, Tengchao Lv, Lei Cui, Furu Wei, Guoxin Wang, Yijuan Lu, Dinei Florencio, Cha Zhang, Wanxiang Che, Min Zhang, Lidong Zhou
Pre-training of text and layout has proved effective in a variety of visually-rich document understanding tasks due to its effective model architecture and the advantage of large-scale unlabeled scanned/digital-born documents.
Ranked #1 on Key Information Extraction on SROIE
1 code implementation • 24 Dec 2020 • Libo Qin, Zhouyang Li, Wanxiang Che, Minheng Ni, Ting Liu
The dialog context information (contextual information) and the mutual interaction information are two key factors that contribute to the two related tasks.
1 code implementation • 13 Dec 2020 • Yutai Hou, Sanyuan Chen, Wanxiang Che, Cheng Chen, Ting Liu
Slot filling, a fundamental module of spoken language understanding, often suffers from insufficient quantity and diversity of training data.
no code implementations • CONLL 2020 • Longxu Dou, Yunlong Feng, Yuqiu Ji, Wanxiang Che, Ting Liu
This paper describes our submission system (HIT-SCIR) for the CoNLL 2020 shared task: Cross-Framework and Cross-Lingual Meaning Representation Parsing.
1 code implementation • EMNLP 2020 • Shaolei Wang, Zhongyuan Wang, Wanxiang Che, Ting Liu
Most existing approaches to disfluency detection heavily rely on human-annotated corpora, which is expensive to obtain in practice.
no code implementations • 11 Oct 2020 • Yutai Hou, Yongkui Lai, Yushan Wu, Wanxiang Che, Ting Liu
In this paper, we study the few-shot multi-label classification for user intent detection.
1 code implementation • 8 Oct 2020 • Libo Qin, Tailu Liu, Wanxiang Che, Bingbing Kang, Sendong Zhao, Ting Liu
Instead of adopting the self-attention mechanism in vanilla Transformer, we propose a co-interactive module to consider the cross-impact by building a bidirectional connection between the two related tasks.
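A toy sketch of such a bidirectional (co-interactive) connection, using plain dot-product cross-attention in place of the paper's full module; dimensions and the residual fusion are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_interactive(intent_h, slot_h):
    # Bidirectional cross-attention: each task attends over the other
    # task's representations, so information flows in both directions
    # rather than each task only attending to itself.
    i2s = softmax(intent_h @ slot_h.T) @ slot_h     # intent attends to slots
    s2i = softmax(slot_h @ intent_h.T) @ intent_h   # slots attend to intent
    return intent_h + i2s, slot_h + s2i             # residual fusion
```

The updated representations then feed the intent classifier and the token-level slot tagger, respectively.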
1 code implementation • 8 Oct 2020 • Dechuan Teng, Libo Qin, Wanxiang Che, Sendong Zhao, Ting Liu
In this paper, we improve Chinese spoken language understanding (SLU) by injecting word information.
no code implementations • 1 Oct 2020 • Shaolei Wang, Baoxin Wang, Jiefu Gong, Zhongyuan Wang, Xiao Hu, Xingyi Duan, Zizhuo Shen, Gang Yue, Ruiji Fu, Dayong Wu, Wanxiang Che, Shijin Wang, Guoping Hu, Ting Liu
Grammatical error diagnosis is an important task in natural language processing.
1 code implementation • EMNLP (ACL) 2021 • Wanxiang Che, Yunlong Feng, Libo Qin, Ting Liu
We introduce N-LTP, an open-source neural language technology platform supporting six fundamental Chinese NLP tasks: lexical analysis (Chinese word segmentation, part-of-speech tagging, and named entity recognition), syntactic parsing (dependency parsing), and semantic parsing (semantic dependency parsing and semantic role labeling).
3 code implementations • 17 Sep 2020 • Yutai Hou, Jiafeng Mao, Yongkui Lai, Cheng Chen, Wanxiang Che, Zhigang Chen, Ting Liu
In this paper, we present FewJoint, a novel Few-Shot Learning benchmark for NLP.
no code implementations • 16 Aug 2020 • Libo Qin, Wanxiang Che, Yangming Li, Minheng Ni, Ting Liu
In dialog systems, dialog act recognition and sentiment classification are two correlated tasks for capturing speakers' intentions, where dialog act and sentiment indicate the explicit and the implicit intentions, respectively.
no code implementations • ACL 2020 • Yangming Li, Kaisheng Yao, Libo Qin, Wanxiang Che, Xiaolong Li, Ting Liu
Data-driven approaches using neural networks have achieved promising performances in natural language generation (NLG).
no code implementations • ACL 2020 • Jun Xu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu
To address the challenge of policy learning in open-domain multi-turn conversation, we propose to represent prior information about dialog transitions as a graph and learn a graph grounded dialog policy, aimed at fostering a more coherent and controllable dialog.
1 code implementation • 11 Jun 2020 • Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che
Compared with the existing work, our method does not rely on bilingual sentences for training, and requires only one training process for multiple target languages.
2 code implementations • ACL 2020 • Yutai Hou, Wanxiang Che, Yongkui Lai, Zhihan Zhou, Yijia Liu, Han Liu, Ting Liu
In this paper, we explore slot tagging with only a few labeled support sentences (a.k.a.
1 code implementation • ACL 2020 • Bo Zheng, Haoyang Wen, Yaobo Liang, Nan Duan, Wanxiang Che, Daxin Jiang, Ming Zhou, Ting Liu
Natural Questions is a new challenging machine reading comprehension benchmark with two-grained answers, which are a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer).
2 code implementations • ACL 2020 • Zeming Liu, Haifeng Wang, Zheng-Yu Niu, Hua Wu, Wanxiang Che, Ting Liu
We propose a new task of conversational recommendation over multi-type dialogs, where the bots can proactively and naturally lead a conversation from a non-recommendation dialog (e. g., QA) to a recommendation dialog, taking into account user's interests and feedback.
no code implementations • 30 Apr 2020 • Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che, Yangming Li, Ting Liu
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
6 code implementations • Findings of the Association for Computational Linguistics 2020 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu
Bidirectional Encoder Representations from Transformers (BERT) has shown marvelous improvements across various NLP tasks, and consecutive variants have been proposed to further improve the performance of the pre-trained language models.
1 code implementation • EMNLP 2020 • Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, Xiangzhan Yu
Deep pretrained language models have achieved great success in the way of pretraining first and then fine-tuning.
1 code implementation • ACL 2020 • Libo Qin, Xiao Xu, Wanxiang Che, Yue Zhang, Ting Liu
However, there has been relatively little research on how to effectively use data from all domains to improve the performance of each domain and also unseen domains.
Ranked #1 on Task-Oriented Dialogue Systems on Kvret
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Libo Qin, Xiao Xu, Wanxiang Che, Ting Liu
Such an interaction layer is applied to each token adaptively, which has the advantage of automatically extracting the relevant intent information, enabling fine-grained intent information integration for token-level slot prediction.
1 code implementation • COLING 2020 • Yiming Cui, Ting Liu, Ziqing Yang, Zhipeng Chen, Wentao Ma, Wanxiang Che, Shijin Wang, Guoping Hu
To add diversity in this area, in this paper, we propose a new task called Sentence Cloze-style Machine Reading Comprehension (SC-MRC).
1 code implementation • ACL 2020 • Ziqing Yang, Yiming Cui, Zhipeng Chen, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
In this paper, we introduce TextBrewer, an open-source knowledge distillation toolkit designed for natural language processing.
no code implementations • 19 Dec 2019 • Yiming Cui, Wanxiang Che, Wei-Nan Zhang, Ting Liu, Shijin Wang, Guoping Hu
Story Ending Prediction is a task that needs to select an appropriate ending for the given story, which requires the machine to understand the story and sometimes needs commonsense knowledge.
no code implementations • 14 Nov 2019 • Yiming Cui, Wei-Nan Zhang, Wanxiang Che, Ting Liu, Zhipeng Chen, Shijin Wang, Guoping Hu
Recurrent Neural Networks (RNNs) are known as powerful models for handling sequential data and are widely used in various natural language processing tasks.
no code implementations • 9 Nov 2019 • Ziqing Yang, Yiming Cui, Wanxiang Che, Ting Liu, Shijin Wang, Guoping Hu
With virtual adversarial training (VAT), we explore the possibility of improving the RC models with semi-supervised learning and prove that examples from a different task are also beneficial.
no code implementations • CONLL 2019 • Wanxiang Che, Longxu Dou, Yang Xu, Yuxuan Wang, Yijia Liu, Ting Liu
This paper describes our system (HIT-SCIR) for CoNLL 2019 shared task: Cross-Framework Meaning Representation Parsing.
Ranked #1 on UCCA Parsing on CoNLL 2019
1 code implementation • IJCNLP 2019 • Libo Qin, Yijia Liu, Wanxiang Che, Haoyang Wen, Yangming Li, Ting Liu
Querying the knowledge base (KB) has long been a challenge in the end-to-end task-oriented dialogue system.
Ranked #6 on Task-Oriented Dialogue Systems on KVRET
1 code implementation • IJCNLP 2019 • Yuxuan Wang, Wanxiang Che, Jiang Guo, Yijia Liu, Ting Liu
In this approach, a linear transformation is learned from contextual word alignments to align the contextualized embeddings independently trained in different languages.
1 code implementation • 10 Sep 2019 • Yutai Hou, Meng Fang, Wanxiang Che, Ting Liu
The framework builds a user simulator by first generating diverse dialogue data from templates and then building a new State2Seq user simulator on the data.
2 code implementations • IJCNLP 2019 • Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, Ting Liu
In our framework, we adopt a joint model with Stack-Propagation which can directly use the intent information as input for slot filling, thus to capture the intent semantic knowledge.
Ranked #2 on Intent Detection on SNIPS
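The Stack-Propagation idea of feeding the intent prediction directly into slot filling can be sketched as follows; this is a simplified illustration of the information flow, not the paper's actual architecture:

```python
import numpy as np

def stack_propagation(token_feats, intent_logits):
    # Stack-Propagation feeds the intent prediction into the slot filler:
    # broadcast the predicted intent distribution and concatenate it to
    # every token representation before slot decoding, so the slot tagger
    # can directly exploit intent semantic knowledge.
    intent_dist = np.exp(intent_logits) / np.exp(intent_logits).sum()
    tiled = np.tile(intent_dist, (token_feats.shape[0], 1))
    return np.concatenate([token_feats, tiled], axis=-1)
```

A slot classifier would then run over the concatenated features, one prediction per token.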
1 code implementation • IJCNLP 2019 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, Guoping Hu
In this paper, we propose Cross-Lingual Machine Reading Comprehension (CLMRC) task for the languages other than English.
no code implementations • 15 Aug 2019 • Shaolei Wang, Wanxiang Che, Qi Liu, Pengda Qin, Ting Liu, William Yang Wang
The pre-trained network is then fine-tuned using human-annotated disfluency detection training data.
1 code implementation • ACL 2019 • Shuhuai Ren, Yihe Deng, Kun He, Wanxiang Che
Experiments on three popular datasets using convolutional as well as LSTM models show that PWWS reduces classification accuracy the most while keeping a very low word substitution rate.
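A simplified greedy variant of this kind of word-substitution attack might look like the sketch below; note the actual PWWS method additionally weights candidate substitutions by word saliency, which is omitted here:

```python
def greedy_substitute(tokens, synonyms, prob_true):
    # For each word, try its synonyms and keep the single swap that lowers
    # the classifier's probability on the true class the most, keeping the
    # number of substituted words low.
    base = prob_true(tokens)
    best_drop, best_i, best_syn = 0.0, None, None
    for i, tok in enumerate(tokens):
        for syn in synonyms.get(tok, []):
            candidate = tokens[:i] + [syn] + tokens[i + 1:]
            drop = base - prob_true(candidate)
            if drop > best_drop:
                best_drop, best_i, best_syn = drop, i, syn
    if best_i is None:
        return tokens  # no substitution reduced the true-class probability
    return tokens[:best_i] + [best_syn] + tokens[best_i + 1:]
```

Here `prob_true` stands in for any black-box classifier's probability on the true label, which is why no gradient access is needed.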
no code implementations • 20 Jun 2019 • Yutai Hou, Zhihan Zhou, Yijia Liu, Ning Wang, Wanxiang Che, Han Liu, Ting Liu
It calculates emission scores with similarity-based methods and obtains transition scores with a specially designed transfer mechanism.
2 code implementations • 19 Jun 2019 • Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang
To demonstrate the effectiveness of these models, we create a series of Chinese pre-trained language models as our baselines, including BERT, RoBERTa, ELECTRA, RBT, etc.
1 code implementation • IJCNLP 2019 • Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, Guoping Hu
Machine Reading Comprehension (MRC) has become enormously popular recently and has attracted a lot of attention.
1 code implementation • EMNLP 2018 • Yijia Liu, Wanxiang Che, Bo Zheng, Bing Qin, Ting Liu
In this paper, we propose a new rich resource enhanced AMR aligner which produces multiple alignments and a new transition system for AMR parsing along with its oracle parser.
Ranked #2 on AMR Parsing on LDC2014T12
1 code implementation • CONLL 2018 • Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, Ting Liu
This paper describes our system (HIT-SCIR) submitted to the CoNLL 2018 shared task on Multilingual Parsing from Raw Text to Universal Dependencies.
Ranked #3 on Dependency Parsing on Universal Dependencies
1 code implementation • COLING 2018 • Yutai Hou, Yijia Liu, Wanxiang Che, Ting Liu
In this paper, we study the problem of data augmentation for language understanding in task-oriented dialogue system.
no code implementations • WS 2018 • Ruiji Fu, Zhengqi Pei, Jiefu Gong, Wei Song, Dechuan Teng, Wanxiang Che, Shijin Wang, Guoping Hu, Ting Liu
This paper describes our system at NLPTEA-2018 Task #1: Chinese Grammatical Error Diagnosis.
no code implementations • COLING 2018 • Haoyang Wen, Yijia Liu, Wanxiang Che, Libo Qin, Ting Liu
Classic pipeline models for task-oriented dialogue systems require explicitly modeling dialogue states and hand-crafting action spaces to query a domain-specific knowledge base.
Ranked #7 on Task-Oriented Dialogue Systems on KVRET
1 code implementation • ACL 2018 • Yijia Liu, Wanxiang Che, Huaipeng Zhao, Bing Qin, Ting Liu
Many natural language processing tasks can be modeled as structured prediction and solved as a search problem.
1 code implementation • NAACL 2018 • Yijia Liu, Yi Zhu, Wanxiang Che, Bing Qin, Nathan Schneider, Noah A. Smith
Nonetheless, using the new treebank, we build a pipeline system to parse raw tweets into UD.
Ranked #2 on Dependency Parsing on Tweebank
no code implementations • IJCNLP 2017 • Wanxiang Che, Yue Zhang
Neural networks, also known by the fancier name "deep learning", are well suited to overcoming the above "feature engineering" problem.
2 code implementations • 29 Sep 2017 • Wei-Nan Zhang, Zhigang Chen, Wanxiang Che, Guoping Hu, Ting Liu
In this paper, we introduce the first evaluation of Chinese human-computer dialogue technology.
1 code implementation • EMNLP 2017 • Shaolei Wang, Wanxiang Che, Yue Zhang, Meishan Zhang, Ting Liu
In this paper, we model the problem of disfluency detection using a transition-based framework, which incrementally constructs and labels the disfluency chunk of input sentences using a new transition system without syntax information.
no code implementations • CONLL 2017 • Wanxiang Che, Jiang Guo, Yuxuan Wang, Bo Zheng, Huaipeng Zhao, Yang Liu, Dechuan Teng, Ting Liu
Our system includes three pipelined components: tokenization, Part-of-Speech (POS) tagging, and dependency parsing.
no code implementations • COLING 2016 • Shaolei Wang, Wanxiang Che, Ting Liu
We treat disfluency detection as a sequence-to-sequence problem and propose a neural attention-based model which can efficiently model the long-range dependencies between words and make the resulting sentence more likely to be grammatically correct.
no code implementations • COLING 2016 • Jiang Guo, Wanxiang Che, Haifeng Wang, Ting Liu, Jun Xu
This paper describes a unified neural architecture for identifying and classifying multi-typed semantic relations between words in a sentence.
no code implementations • COLING 2016 • Jiang Guo, Wanxiang Che, Haifeng Wang, Ting Liu
Various treebanks have been released for dependency parsing.
no code implementations • WS 2016 • Bo Zheng, Wanxiang Che, Jiang Guo, Ting Liu
This paper introduces our Chinese Grammatical Error Diagnosis (CGED) system in the NLP-TEA-3 shared task for CGED.
no code implementations • 3 Jun 2016 • Jiang Guo, Wanxiang Che, Haifeng Wang, Ting Liu
Various treebanks have been released for dependency parsing.
1 code implementation • 19 Apr 2016 • Yijia Liu, Wanxiang Che, Jiang Guo, Bing Qin, Ting Liu
Many natural language processing (NLP) tasks can be generalized as a segmentation problem.
no code implementations • 5 Mar 2016 • Jiang Guo, Wanxiang Che, David Yarowsky, Haifeng Wang, Ting Liu
Cross-lingual model transfer has been a promising approach for inducing dependency parsers for low-resource languages where annotated treebanks are not available.
Tasks: Cross-lingual zero-shot dependency parsing, Representation Learning