1 code implementation • ACL 2022 • Libo Qin, Qiguang Chen, Tianbao Xie, Qixin Li, Jian-Guang Lou, Wanxiang Che, Min-Yen Kan
Specifically, we employ contrastive learning, leveraging bilingual dictionaries to construct multilingual views of the same utterance, and then encourage their representations to be more similar than those of negative example pairs, which explicitly aligns the representations of similar sentences across languages.
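At its core, this alignment objective is a contrastive loss over paired multilingual views. Below is a minimal generic sketch in PyTorch (an InfoNCE-style loss, not the paper's exact GL-CLeF formulation; the temperature value and the batch-diagonal positive pairing are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(src_emb, tgt_emb, temperature=0.1):
    """InfoNCE-style loss: each utterance embedding should be closer to its
    dictionary-translated view than to the other utterances in the batch."""
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    logits = src @ tgt.t() / temperature                      # (batch, batch) cosine similarities
    labels = torch.arange(logits.size(0), device=logits.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

# toy usage: 4 utterances, 128-dim encoder outputs for both views
loss = contrastive_alignment_loss(torch.randn(4, 128), torch.randn(4, 128))
```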
1 code implementation • EMNLP 2021 • Libo Qin, Tianbao Xie, Shijue Huang, Qiguang Chen, Xiao Xu, Wanxiang Che
Consistency identification has achieved remarkable success in open-domain dialogue, where it can be used to prevent inconsistent response generation.
1 code implementation • COLING 2022 • Libo Qin, Qiguang Chen, Tianbao Xie, Qian Liu, Shijue Huang, Wanxiang Che, Zhou Yu
Consistency identification in task-oriented dialog (CI-ToD) usually consists of three subtasks, aiming to identify inconsistencies between the current system response and the current user response, the dialog history, and the corresponding knowledge base.
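As a rough illustration of this three-subtask formulation, the sketch below places three binary heads over a shared dialog encoding (the hidden size and the QI/HI/KBI naming are assumptions for illustration, not necessarily the paper's implementation):

```python
import torch
import torch.nn as nn

class CIToDHeads(nn.Module):
    """Three binary classifiers over a shared dialog encoding, one per
    subtask: user-query (QI), history (HI), and knowledge-base (KBI)
    inconsistency."""

    def __init__(self, hidden_dim=768):
        super().__init__()
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden_dim, 2) for name in ("qi", "hi", "kbi")}
        )

    def forward(self, dialog_encoding):               # (batch, hidden_dim)
        return {name: head(dialog_encoding) for name, head in self.heads.items()}

logits = CIToDHeads()(torch.randn(2, 768))            # dict of three (2, 2) tensors
```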
no code implementations • 5 Jan 2023 • Bo Zheng, Zhouyang Li, Fuxuan Wei, Qiguang Chen, Libo Qin, Wanxiang Che
Multilingual spoken language understanding (SLU) consists of two sub-tasks, namely intent detection and slot filling.
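A minimal sketch of one common joint formulation of the two sub-tasks (the BiLSTM encoder and all dimensions are illustrative choices, not this paper's architecture):

```python
import torch
import torch.nn as nn

class JointSLU(nn.Module):
    """Minimal joint model: a shared encoder, an utterance-level intent
    head over the mean-pooled states, and a token-level slot tagging head."""

    def __init__(self, vocab=1000, dim=128, n_intents=10, n_slots=20):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * dim, n_intents)
        self.slot_head = nn.Linear(2 * dim, n_slots)

    def forward(self, token_ids):                      # (batch, seq)
        states, _ = self.encoder(self.embed(token_ids))
        intent_logits = self.intent_head(states.mean(dim=1))
        slot_logits = self.slot_head(states)           # one label per token
        return intent_logits, slot_logits

intent, slots = JointSLU()(torch.randint(0, 1000, (2, 12)))
```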
1 code implementation • 15 Nov 2022 • Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing Xie, Yue Zhang
Pre-trained language models (PLMs) are known to improve the generalization performance of natural language understanding models by leveraging large amounts of data during the pre-training phase.
Natural Language Understanding • Out-of-Distribution Generalization
1 code implementation • 18 Apr 2022 • Libo Qin, Qiguang Chen, Tianbao Xie, Qixin Li, Jian-Guang Lou, Wanxiang Che, Min-Yen Kan
We present the Global-Local Contrastive Learning Framework (GL-CLeF) to address this shortcoming.
no code implementations • SIGDIAL (ACL) 2022 • Zhi Chen, Lu Chen, Bei Chen, Libo Qin, Yuncong Liu, Su Zhu, Jian-Guang Lou, Kai Yu
With the development of pre-trained language models, remarkable success has been achieved in dialogue understanding (DU).
1 code implementation • 22 Dec 2021 • Xiao Xu, Libo Qin, Kaiji Chen, Guoxing Wu, Linlin Li, Wanxiang Che
Current research on spoken language understanding (SLU) is largely limited to a simple setting: plain text-based SLU, which takes the user utterance as input and generates its corresponding semantic frames (e.g., intent and slots).
2 code implementations • 6 Dec 2021 • Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, Saad Mahamood, Abinaya Mahendiran, Simon Mille, Ashish Shrivastava, Samson Tan, Tongshuang Wu, Jascha Sohl-Dickstein, Jinho D. Choi, Eduard Hovy, Ondrej Dusek, Sebastian Ruder, Sajant Anand, Nagender Aneja, Rabin Banjade, Lisa Barthe, Hanna Behnke, Ian Berlot-Attwell, Connor Boyle, Caroline Brun, Marco Antonio Sobrevilla Cabezudo, Samuel Cahyawijaya, Emile Chapuis, Wanxiang Che, Mukund Choudhary, Christian Clauss, Pierre Colombo, Filip Cornell, Gautier Dagan, Mayukh Das, Tanay Dixit, Thomas Dopierre, Paul-Alexis Dray, Suchitra Dubey, Tatiana Ekeinhor, Marco Di Giovanni, Tanya Goyal, Rishabh Gupta, Louanes Hamla, Sang Han, Fabrice Harel-Canada, Antoine Honore, Ishan Jindal, Przemyslaw K. Joniak, Denis Kleyko, Venelin Kovatchev, Kalpesh Krishna, Ashutosh Kumar, Stefan Langer, Seungjae Ryan Lee, Corey James Levinson, Hualou Liang, Kaizhao Liang, Zhexiong Liu, Andrey Lukyanenko, Vukosi Marivate, Gerard de Melo, Simon Meoni, Maxime Meyer, Afnan Mir, Nafise Sadat Moosavi, Niklas Muennighoff, Timothy Sum Hon Mun, Kenton Murray, Marcin Namysl, Maria Obedkova, Priti Oli, Nivranshu Pasricha, Jan Pfister, Richard Plant, Vinay Prabhu, Vasile Pais, Libo Qin, Shahab Raji, Pawan Kumar Rajpoot, Vikas Raunak, Roy Rinberg, Nicolas Roberts, Juan Diego Rodriguez, Claude Roux, Vasconcellos P. H. S., Ananya B. Sai, Robin M. Schmidt, Thomas Scialom, Tshephisho Sefara, Saqib N. Shamsi, Xudong Shen, Haoyue Shi, Yiwen Shi, Anna Shvets, Nick Siegel, Damien Sileo, Jamie Simon, Chandan Singh, Roman Sitelew, Priyank Soni, Taylor Sorensen, William Soto, Aman Srivastava, KV Aditya Srivatsa, Tony Sun, Mukund Varma T, A Tabassum, Fiona Anting Tan, Ryan Teehan, Mo Tiwari, Marie Tolkiehn, Athena Wang, Zijian Wang, Gloria Wang, Zijie J. Wang, Fuxuan Wei, Bryan Wilie, Genta Indra Winata, Xinyi Wu, Witold Wydmański, Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang
Data augmentation is an important component in the robustness evaluation of models in natural language processing (NLP) and in enhancing the diversity of the data they are trained on.
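As an illustration of the kind of simple, label-preserving transformation such a collection gathers, here is a standalone adjacent-word-swap perturbation (a generic example that does not use the NL-Augmenter API):

```python
import random

def random_word_swap(sentence: str, n_swaps: int = 1, seed: int = 0) -> str:
    """Swap n random pairs of adjacent words, one of the simplest
    label-preserving perturbations used in robustness evaluation."""
    rng = random.Random(seed)
    words = sentence.split()
    for _ in range(n_swaps):
        if len(words) < 2:
            break
        i = rng.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

print(random_word_swap("book a flight to new york tomorrow"))
```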
1 code implementation • 15 Jul 2021 • Liang Xu, Xiaojing Lu, Chenyang Yuan, Xuanwei Zhang, Huilin Xu, Hu Yuan, Guoao Wei, Xiang Pan, Xin Tian, Libo Qin, Hu Hai
While different learning schemes (fine-tuning, zero-shot, and few-shot learning) have been widely explored and compared for languages such as English, there is comparatively little work in Chinese that fairly and comprehensively evaluates and compares these methods, which hinders cumulative progress.
1 code implementation • ACL 2021 • Libo Qin, Fuxuan Wei, Tianbao Xie, Xiao Xu, Wanxiang Che, Ting Liu
Multi-intent SLU, which has attracted increasing attention, can handle multiple intents in a single utterance.
Ranked #1 on Semantic Frame Parsing on MixATIS
1 code implementation • ACL 2021 • Xiachong Feng, Xiaocheng Feng, Libo Qin, Bing Qin, Ting Liu
Current dialogue summarization systems usually encode the text with a number of general semantic features (e.g., keywords and topics) to gain more powerful dialogue modeling capabilities.
1 code implementation • 4 Mar 2021 • Libo Qin, Tianbao Xie, Wanxiang Che, Ting Liu
Spoken Language Understanding (SLU), a core component of task-oriented dialog systems, aims to extract the semantic frame of user queries.
1 code implementation • 24 Dec 2020 • Libo Qin, Zhouyang Li, Wanxiang Che, Minheng Ni, Ting Liu
Dialog context information and mutual interaction information are two key factors that contribute to the two related tasks.
1 code implementation • 8 Oct 2020 • Libo Qin, Tailu Liu, Wanxiang Che, Bingbing Kang, Sendong Zhao, Ting Liu
Instead of adopting the self-attention mechanism of the vanilla Transformer, we propose a co-interactive module that considers the cross-impact by building a bidirectional connection between the two related tasks.
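A minimal sketch of bidirectional cross-attention between two task-specific streams (a generic approximation of the idea; the head count and dimensions are placeholders, and the paper's actual module may differ):

```python
import torch
import torch.nn as nn

class CoInteractiveLayer(nn.Module):
    """Bidirectional cross-attention between two task-specific streams
    (e.g., intent and slot representations): each stream attends over
    the other instead of over itself."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.a_to_b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.b_to_a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, stream_a, stream_b):            # (batch, seq, dim) each
        new_a, _ = self.a_to_b(stream_a, stream_b, stream_b)  # A queries B
        new_b, _ = self.b_to_a(stream_b, stream_a, stream_a)  # B queries A
        return new_a, new_b

a, b = CoInteractiveLayer()(torch.randn(2, 10, 128), torch.randn(2, 10, 128))
```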
1 code implementation • 8 Oct 2020 • Dechuan Teng, Libo Qin, Wanxiang Che, Sendong Zhao, Ting Liu
In this paper, we improve Chinese spoken language understanding (SLU) by injecting word information.
1 code implementation • EMNLP (ACL) 2021 • Wanxiang Che, Yunlong Feng, Libo Qin, Ting Liu
We introduce N-LTP, an open-source neural language technology platform supporting six fundamental Chinese NLP tasks: lexical analysis (Chinese word segmentation, part-of-speech tagging, and named entity recognition), syntactic parsing (dependency parsing), and semantic parsing (semantic dependency parsing and semantic role labeling).
no code implementations • 16 Aug 2020 • Libo Qin, Wanxiang Che, Yangming Li, Minheng Ni, Ting Liu
In a dialog system, dialog act recognition and sentiment classification are two correlative tasks for capturing speakers' intentions, where the dialog act and sentiment indicate the explicit and the implicit intentions, respectively.
1 code implementation • 13 Aug 2020 • Qingkai Min, Libo Qin, Zhiyang Teng, Xiao Liu, Yue Zhang
Dialogue state modules are useful components in a task-oriented dialogue system.
no code implementations • ACL 2020 • Yangming Li, Kaisheng Yao, Libo Qin, Wanxiang Che, Xiaolong Li, Ting Liu
Data-driven approaches using neural networks have achieved promising performance in natural language generation (NLG).
1 code implementation • 11 Jun 2020 • Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che
Compared with existing work, our method does not rely on bilingual sentences for training and requires only one training process for multiple target languages.
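One common way to obtain a multilingual training signal without parallel sentences is dictionary-based code-switching: randomly replacing tokens with translations drawn from a bilingual dictionary. A toy sketch follows (the dictionary and replacement rate are illustrative, and this generic technique is not necessarily this paper's exact procedure):

```python
import random

# toy bilingual dictionary (EN -> DE); a real one would come from a resource
# such as the MUSE dictionaries
TOY_DICT = {"play": "spielen", "music": "Musik", "the": "die"}

def code_switch(tokens, dictionary, rate=0.5, seed=0):
    """Randomly replace tokens with their dictionary translation, producing
    a multilingual view of the utterance without any parallel sentences."""
    rng = random.Random(seed)
    return [dictionary.get(t, t) if rng.random() < rate else t for t in tokens]

print(code_switch("play the music".split(), TOY_DICT))
```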
no code implementations • 30 Apr 2020 • Libo Qin, Minheng Ni, Yue Zhang, Wanxiang Che, Yangming Li, Ting Liu
Spoken language understanding has been addressed as a supervised learning problem, where a set of training data is available for each domain.
1 code implementation • ACL 2020 • Libo Qin, Xiao Xu, Wanxiang Che, Yue Zhang, Ting Liu
However, there has been relatively little research on how to effectively use data from all domains to improve the performance of each domain as well as of unseen domains.
Ranked #1 on Task-Oriented Dialogue Systems on KVRET
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Libo Qin, Xiao Xu, Wanxiang Che, Ting Liu
Such an interaction layer is applied to each token adaptively, which has the advantage of automatically extracting the relevant intent information, enabling fine-grained intent integration for token-level slot prediction.
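A minimal sketch of token-level intent integration, letting each token attend over intent label embeddings (the single-head attention and residual fusion are assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn

class TokenIntentFusion(nn.Module):
    """Each token representation attends over intent label embeddings, so
    the slot tagger receives token-specific, fine-grained intent information
    rather than a single utterance-level intent vector."""

    def __init__(self, dim=128, n_intents=10):
        super().__init__()
        self.intent_emb = nn.Embedding(n_intents, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, token_states):                   # (batch, seq, dim)
        batch = token_states.size(0)
        intents = self.intent_emb.weight.unsqueeze(0).expand(batch, -1, -1)
        fused, _ = self.attn(token_states, intents, intents)
        return token_states + fused                    # residual integration

out = TokenIntentFusion()(torch.randn(2, 12, 128))
```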
1 code implementation • IJCNLP 2019 • Libo Qin, Yijia Liu, Wanxiang Che, Haoyang Wen, Yangming Li, Ting Liu
Querying the knowledge base (KB) has long been a challenge in the end-to-end task-oriented dialogue system.
Ranked #6 on Task-Oriented Dialogue Systems on KVRET
2 code implementations • IJCNLP 2019 • Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, Ting Liu
In our framework, we adopt a joint model with Stack-Propagation, which can directly use the intent information as input for slot filling and thus capture intent semantic knowledge (a minimal sketch follows below).
Ranked #2 on Intent Detection on SNIPS
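A minimal sketch of the Stack-Propagation idea, feeding intent logits directly into the slot filler (the per-token intent head and softmax fusion are simplifications of the paper's design):

```python
import torch
import torch.nn as nn

class StackPropagationSLU(nn.Module):
    """Token-level intent logits are concatenated onto the encoder states
    and fed directly into the slot filler, so slot prediction can condition
    on the (differentiable) intent decision."""

    def __init__(self, dim=128, n_intents=10, n_slots=20):
        super().__init__()
        self.intent_head = nn.Linear(dim, n_intents)
        self.slot_head = nn.Linear(dim + n_intents, n_slots)

    def forward(self, states):                         # (batch, seq, dim)
        intent_logits = self.intent_head(states)       # per-token intents
        slot_in = torch.cat([states, intent_logits.softmax(-1)], dim=-1)
        return intent_logits, self.slot_head(slot_in)

intent, slots = StackPropagationSLU()(torch.randn(2, 12, 128))
```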
no code implementations • COLING 2018 • Haoyang Wen, Yijia Liu, Wanxiang Che, Libo Qin, Ting Liu
Classic pipeline models for task-oriented dialogue systems require explicit modeling of the dialogue states and hand-crafted action spaces to query a domain-specific knowledge base.
Ranked #7 on Task-Oriented Dialogue Systems on KVRET