no code implementations • Findings (EMNLP) 2021 • Zhan Shi, Hui Liu, Martin Renqiang Min, Christopher Malon, Li Erran Li, Xiaodan Zhu
Image captioning systems are expected to have the ability to combine individual concepts when describing scenes with concept combinations that are not observed during training.
no code implementations • EMNLP 2021 • Dayu Li, Xiaodan Zhu, Yang Li, Suge Wang, Deyu Li, Jian Liao, Jianxing Zheng
Emotion inference in multi-turn conversations aims to predict a participant's emotion in the upcoming turn, before the participant has responded, and is a necessary step for applications such as dialogue planning.
no code implementations • EMNLP 2021 • Weinan He, Canming Huang, Yongmei Liu, Xiaodan Zhu
To better evaluate NLMs, we propose a logic-based framework that focuses on high-quality commonsense knowledge.
1 code implementation • 9 Mar 2022 • Yufei Feng, Xiaoyu Yang, Xiaodan Zhu, Michael Greenspan
We introduce a neuro-symbolic natural logic framework based on reinforcement learning with introspective revision.
no code implementations • 1 Jan 2022 • Rohan Bhambhoria, Hui Liu, Samuel Dahan, Xiaodan Zhu
In this work, we utilize deep learning models in the area of trademark law to shed light on the issue of likelihood of confusion between trademarks.
no code implementations • 29 Sep 2021 • Stephen Obadinma, Xiaodan Zhu, Hongyu Guo
Our studies suggest the following: most of the time curriculum learning has a negligible effect on calibration, but in certain cases under the context of limited training time and noisy data, curriculum learning can substantially reduce calibration error in a manner that cannot be explained by dynamically sampling the dataset.
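The calibration claim above can be made concrete with expected calibration error (ECE), the standard metric in such studies. A minimal pure-Python sketch (the binning scheme and toy data are illustrative, not the paper's setup):

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: weighted average over confidence bins of |accuracy - mean confidence|."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # assign prediction to a bin
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(acc - avg_conf)
    return ece

# Perfectly calibrated toy case: 80%-confident predictions are right 80% of the time.
print(expected_calibration_error([0.8] * 10, [True] * 8 + [False] * 2))
```

A model is well calibrated when its confidence matches its empirical accuracy, so the toy case above yields an ECE of zero.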
1 code implementation • Findings (EMNLP) 2021 • Xiaoyu Yang, Xiaodan Zhu
Fact verification based on structured data is challenging as it requires models to understand both natural language and symbolic operations performed over tables.
no code implementations • 14 Sep 2021 • Lei Shen, Haolan Zhan, Xin Shen, Hongshen Chen, Xiaofang Zhao, Xiaodan Zhu
The training method updates the parameters of a trained NCM on two small sets containing newly maintained and removed samples, respectively.
no code implementations • 8 Sep 2021 • Xiaoyu Yang, Xiaodan Zhu, Zhan Shi, Tianda Li
There have been two lines of approaches that can be used to further address the limitation: (1) unsupervised pretraining can leverage knowledge in much larger unstructured text data; (2) structured (often human-curated) knowledge has started to be considered in neural-network-based models for NLI.
1 code implementation • EMNLP 2021 • Hui Liu, Zhan Shi, Xiaodan Zhu
For the message-pair classifier, we enrich its training data by retrieving message pairs with high confidence from the disentangled sessions predicted by the session classifier.
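The enrichment step described above is a form of self-training: pseudo-labeled pairs are kept only when the predicting classifier is confident. A minimal sketch (the scorer and the 0.9 threshold are illustrative assumptions, not the paper's values):

```python
def enrich_training_pairs(labeled, unlabeled, score_fn, threshold=0.9):
    """Add pseudo-labeled message pairs whose predicted confidence
    exceeds the threshold to the training set."""
    enriched = list(labeled)
    for pair in unlabeled:
        label, confidence = score_fn(pair)
        if confidence >= threshold:  # keep only high-confidence predictions
            enriched.append((pair, label))
    return enriched

# Toy scorer: pairs sharing a word are "same session" with high confidence.
def toy_scorer(pair):
    a, b = pair
    shared = set(a.split()) & set(b.split())
    return (1, 0.95) if shared else (0, 0.55)

labeled = [(("hi there", "hello there"), 1)]
unlabeled = [("deploy the build", "build is green"), ("lunch?", "new ticket filed")]
print(enrich_training_pairs(labeled, unlabeled, toy_scorer))
```

Only the first unlabeled pair clears the threshold, so the enriched set grows by one pair.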
1 code implementation • EMNLP 2021 • Jia-Chen Gu, Zhen-Hua Ling, Yu Wu, Quan Liu, Zhigang Chen, Xiaodan Zhu
This is a many-to-many semantic matching task because both contexts and personas in SPD are composed of multiple sentences.
no code implementations • ACL 2021 • Yinpei Dai, Hangyu Li, Yongbin Li, Jian Sun, Fei Huang, Luo Si, Xiaodan Zhu
Existing dialog state tracking (DST) models are trained with dialog data in a random order, neglecting rich structural information in a dataset.
Dialogue State Tracking • Multi-domain Dialogue State Tracking
1 code implementation • ACL 2021 • Zhan Shi, Hui Liu, Xiaodan Zhu
In this paper we propose a novel approach to encourage captioning models to produce more detailed captions using natural language inference, based on the motivation that, among different captions of an image, descriptive captions are more likely to entail less descriptive captions.
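The motivation above — a more descriptive caption should entail a less descriptive one — can be sketched with a toy entailment check, where word containment stands in for a real NLI model purely for illustration:

```python
def toy_entails(premise, hypothesis):
    """Toy proxy for NLI entailment: the premise entails the hypothesis
    if it contains every word of the hypothesis."""
    return set(hypothesis.lower().split()) <= set(premise.lower().split())

def rank_by_descriptiveness(captions):
    """Score each caption by how many of the other captions it entails;
    more descriptive captions entail more of the less descriptive ones."""
    scores = {c: sum(toy_entails(c, other) for other in captions if other != c)
              for c in captions}
    return sorted(captions, key=scores.get, reverse=True)

captions = ["a dog", "a brown dog runs", "a brown dog runs on grass"]
print(rank_by_descriptiveness(captions)[0])  # most descriptive caption first
```

The longest caption entails both shorter ones and therefore ranks first; in the paper this signal comes from a trained NLI model, not word overlap.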
no code implementations • SEMEVAL 2021 • Yuxuan Zhou, Kaiyin Zhou, Xien Liu, Ji Wu, Xiaodan Zhu
This paper describes our system for verifying statements with tables at SemEval-2021 Task 9.
no code implementations • 1 Jun 2021 • Yinpei Dai, Hangyu Li, Yongbin Li, Jian Sun, Fei Huang, Luo Si, Xiaodan Zhu
Existing dialog state tracking (DST) models are trained with dialog data in a random order, neglecting rich structural information in a dataset.
Ranked #1 on Multi-domain Dialogue State Tracking on MULTIWOZ 2.1 (using extra training data)
1 code implementation • SEMEVAL 2021 • Boyuan Zheng, Xiaoyu Yang, Yu-Ping Ruan, ZhenHua Ling, Quan Liu, Si Wei, Xiaodan Zhu
Given a passage and the corresponding question, a participating system is expected to choose the correct answer from five candidates of abstract concepts in a cloze-style machine reading comprehension setup.
1 code implementation • 19 May 2021 • Jia-Chen Gu, Hui Liu, Zhen-Hua Ling, Quan Liu, Zhigang Chen, Xiaodan Zhu
Empirical studies on the Persona-Chat dataset show that the partner personas neglected in previous studies can improve the accuracy of response selection in the IMN- and BERT-based models.
1 code implementation • NAACL 2021 • Hui Liu, Danqing Zhang, Bing Yin, Xiaodan Zhu
In this paper, we explore improving pretrained models with label hierarchies for the ZS-MTC task.
Multi-Label Text Classification
no code implementations • 7 Mar 2021 • Binyuan Hui, Xiang Shi, Ruiying Geng, Binhua Li, Yongbin Li, Jian Sun, Xiaodan Zhu
In this paper, we present the Schema Dependency guided multi-task Text-to-SQL model (SDSQL) to guide the network to effectively capture the interactions between questions and schemas.
1 code implementation • 5 Jan 2021 • Binyuan Hui, Ruiying Geng, Qiyu Ren, Binhua Li, Yongbin Li, Jian Sun, Fei Huang, Luo Si, Pengfei Zhu, Xiaodan Zhu
Semantic parsing has long been a fundamental problem in natural language processing.
1 code implementation • 22 Dec 2020 • Chao-Hong Tan, Xiaoyu Yang, Zi'ou Zheng, Tianda Li, Yufei Feng, Jia-Chen Gu, Quan Liu, Dan Liu, Zhen-Hua Ling, Xiaodan Zhu
Task-oriented conversational modeling with unstructured knowledge access, track 1 of the 9th Dialogue System Technology Challenge (DSTC 9), requires building a system that generates responses given the dialogue history and access to unstructured knowledge.
1 code implementation • COLING 2020 • Yufei Feng, Zi'ou Zheng, Quan Liu, Michael Greenspan, Xiaodan Zhu
We explore end-to-end trained differentiable models that integrate natural logic with neural networks, aiming to keep the backbone of natural language reasoning based on the natural logic formalism while introducing subsymbolic vector representations and neural components.
no code implementations • 19 Oct 2020 • Mo Yu, Xiaoxiao Guo, Yufei Feng, Xiaodan Zhu, Michael Greenspan, Murray Campbell
Commonsense reasoning simulates the human ability to make presumptions about our physical world, and it is an indispensable cornerstone in building general AI systems.
2 code implementations • EMNLP 2020 • Xiaoyu Yang, Feng Nie, Yufei Feng, Quan Liu, Zhigang Chen, Xiaodan Zhu
Built on that, we construct the graph attention verification networks, which are designed to fuse different sources of evidences from verbalized program execution, program structures, and the original statements and tables, to make the final verification decision.
1 code implementation • SEMEVAL 2020 • Xiaoyu Yang, Stephen Obadinma, Huasha Zhao, Qiong Zhang, Stan Matwin, Xiaodan Zhu
Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not.
2 code implementations • SEMEVAL 2020 • Cunxiang Wang, Shuailong Liang, Yili Jin, Yilong Wang, Xiaodan Zhu, Yue Zhang
In this paper, we present SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), which includes three subtasks, aiming to evaluate whether a system can distinguish a natural language statement that makes sense to humans from one that does not, and provide the reasons.
no code implementations • ACL 2020 • Yinpei Dai, Hangyu Li, Chengguang Tang, Yongbin Li, Jian Sun, Xiaodan Zhu
Existing end-to-end dialog systems perform less effectively when data is scarce.
no code implementations • ACL 2020 • Zhan Shi, Xu Zhou, Xipeng Qiu, Xiaodan Zhu
Image captioning is a multimodal problem that has drawn extensive attention in both the natural language processing and computer vision community.
1 code implementation • 21 Jun 2020 • Zhan Shi, Xu Zhou, Xipeng Qiu, Xiaodan Zhu
Image captioning is a multimodal problem that has drawn extensive attention in both the natural language processing and computer vision community.
no code implementations • ACL 2020 • Ruiying Geng, Binhua Li, Yongbin Li, Jian Sun, Xiaodan Zhu
This paper proposes Dynamic Memory Induction Networks (DMIN) for few-shot text classification.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Jia-Chen Gu, Zhen-Hua Ling, Quan Liu, Zhigang Chen, Xiaodan Zhu
The challenges of building knowledge-grounded retrieval-based chatbots lie in how to ground a conversation on its background knowledge and how to match response candidates with both context and knowledge simultaneously.
1 code implementation • 8 Apr 2020 • Tianda Li, Jia-Chen Gu, Xiaodan Zhu, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei
Disentanglement is the problem in which multiple conversations occur in the same channel simultaneously, and a listener must decide which utterances belong to the conversation they will respond to.
2 code implementations • 7 Apr 2020 • Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, Xiaodan Zhu
In this paper, we study the problem of employing pre-trained language models for multi-turn response selection in retrieval-based chatbots.
no code implementations • 6 Apr 2020 • Yufei Feng, Mo Yu, Wenhan Xiong, Xiaoxiao Guo, Jun-Jie Huang, Shiyu Chang, Murray Campbell, Michael Greenspan, Xiaodan Zhu
We propose the new problem of learning to recover reasoning chains from weakly supervised signals, i.e., the question-answer pairs.
no code implementations • 4 Apr 2020 • Jia-Chen Gu, Tianda Li, Quan Liu, Xiaodan Zhu, Zhen-Hua Ling, Yu-Ping Ruan
The NOESIS II challenge, as the Track 2 of the 8th Dialogue System Technology Challenges (DSTC 8), is the extension of DSTC 7.
Ranked #1 on Conversation Disentanglement on irc-disentanglement
2 code implementations • 20 Nov 2019 • Yongfei Liu, Bo Wan, Xiaodan Zhu, Xuming He
To address their limitations, this paper proposes a language-guided graph representation to capture the global context of grounding entities and their relations, and develops a cross-modal graph matching strategy for the multiple-phrase visual grounding task.
no code implementations • 25 Sep 2019 • Xiaoyan Li, Iluju Kiringa, Tet Yeap, Xiaodan Zhu, Yifeng Li
Identifying anomalous samples from highly complex and unstructured data is a crucial but challenging task in a variety of intelligent systems.
1 code implementation • IJCNLP 2019 • Jia-Chen Gu, Zhen-Hua Ling, Xiaodan Zhu, Quan Liu
Compared with previous persona fusion approaches which enhance the representation of a context by calculating its similarity with a given persona, the DIM model adopts a dual matching architecture, which performs interactive matching between responses and contexts and between responses and personas respectively for ranking response candidates.
1 code implementation • 15 Jul 2019 • Xiaoyan Li, Iluju Kiringa, Tet Yeap, Xiaodan Zhu, Yifeng Li
In this paper, we develop and explore deep anomaly detection techniques based on the capsule network (CapsNet) for image data.
no code implementations • NAACL 2019 • Samuel Bowman, Xiaodan Zhu
This tutorial discusses cutting-edge research on NLI, including recent advances in dataset development, cutting-edge deep learning models, and highlights from recent research on using NLI to understand the capabilities and limits of deep learning models for language understanding and reasoning.
1 code implementation • 27 Apr 2019 • Tianda Li, Xiaodan Zhu, Quan Liu, Qian Chen, Zhigang Chen, Si Wei
Natural language inference (NLI) is among the most challenging tasks in natural language understanding.
no code implementations • 22 Apr 2019 • Yu-Ping Ruan, Xiaodan Zhu, Zhen-Hua Ling, Zhan Shi, Quan Liu, Si Wei
Winograd Schema Challenge (WSC) was proposed as an AI-hard problem in testing computers' intelligence on common sense representation and reasoning.
5 code implementations • IJCNLP 2019 • Ruiying Geng, Binhua Li, Yongbin Li, Xiaodan Zhu, Ping Jian, Jian Sun
Therefore, we should be able to learn a general representation of each class in the support set and then compare it to new queries.
Ranked #1 on Few-Shot Text Classification on ODIC 5-way (10-shot)
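The idea of learning a general representation of each class from the support set and comparing it to new queries reduces, in its simplest form, to a class prototype: the mean of the support embeddings, with queries assigned to the nearest prototype. A minimal sketch (the toy 2-D embeddings are illustrative, not the paper's induced representations):

```python
def prototype(vectors):
    """Class prototype: element-wise mean of the support embeddings."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def classify(query, prototypes):
    """Assign the query to the class whose prototype is nearest
    (squared Euclidean distance)."""
    def dist(proto):
        return sum((a - b) ** 2 for a, b in zip(query, proto))
    return min(prototypes, key=lambda label: dist(prototypes[label]))

support = {"greeting": [[0.9, 0.1], [1.0, 0.0]],
           "request":  [[0.1, 0.9], [0.0, 1.0]]}
protos = {label: prototype(vecs) for label, vecs in support.items()}
print(classify([0.8, 0.2], protos))  # nearest prototype wins
```

The paper's induction networks learn this class representation dynamically rather than taking a plain mean, but the compare-to-prototype step is the same shape.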
no code implementations • 27 Jan 2019 • Yu-Ping Ruan, Zhen-Hua Ling, Quan Liu, Jia-Chen Gu, Xiaodan Zhu
At this stage, two different models are proposed, i.e., a variational generative (VariGen) model and a retrieval-based (Retrieval) model.
no code implementations • 7 Sep 2018 • Yihao Fang, Rong Zheng, Xiaodan Zhu
A novel logographic subword model is proposed to reinterpret logograms as abstract subwords for neural machine translation.
1 code implementation • COLING 2018 • Qian Chen, Zhen-Hua Ling, Xiaodan Zhu
This paper explores generalized pooling methods to enhance sentence embedding.
Ranked #10 on Sentiment Analysis on Yelp Fine-grained classification
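Pooling turns a variable-length sequence of token vectors into one fixed-size sentence embedding. A minimal sketch of the mean- and max-pooling baselines that generalized pooling extends (the toy vectors are illustrative; the paper's method learns multi-head weighted combinations instead):

```python
def mean_pool(token_vectors):
    """Average each dimension over the token sequence."""
    dim = len(token_vectors[0])
    n = len(token_vectors)
    return [sum(v[i] for v in token_vectors) / n for i in range(dim)]

def max_pool(token_vectors):
    """Take the per-dimension maximum over the token sequence."""
    dim = len(token_vectors[0])
    return [max(v[i] for v in token_vectors) for i in range(dim)]

tokens = [[1.0, -2.0], [3.0, 0.0], [2.0, 4.0]]
print(mean_pool(tokens))  # one value per dimension, averaged
print(max_pool(tokens))   # [3.0, 4.0]
```

Both produce an embedding whose size is independent of sentence length, which is what lets the downstream classifier take a fixed-width input.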
no code implementations • ICLR 2018 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen
Modeling informal inference in natural language is very challenging.
1 code implementation • ACL 2018 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei
With the availability of large annotated data, it has recently become feasible to train complex models such as neural-network-based inference models, which have been shown to achieve state-of-the-art performance.
Ranked #18 on Natural Language Inference on SNLI
2 code implementations • WS 2017 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen
The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixed-length vector with neural networks and the quality of the representation is tested with a natural language inference task.
Ranked #68 on Natural Language Inference on SNLI
no code implementations • ACL 2017 • Xiaodan Zhu, Edward Grefenstette
Learning representation to model the meaning of text has been a core problem in NLP.
no code implementations • EACL 2017 • Parinaz Sobhani, Diana Inkpen, Xiaodan Zhu
Current models for stance classification often treat each target independently, but in many applications, there exist natural dependencies among targets, e.g., stance towards two or more politicians in an election or towards several brands of the same product.
no code implementations • 14 Mar 2017 • Junbei Zhang, Xiaodan Zhu, Qian Chen, Li-Rong Dai, Si Wei, Hui Jiang
The last several years have seen intensive interest in exploring neural-network-based models for machine comprehension (MC) and question answering (QA).
Ranked #40 on Question Answering on SQuAD1.1 dev
no code implementations • COLING 2016 • Yunli Wang, Yong Jin, Xiaodan Zhu, Cyril Goutte
We show that such knowledge can be used to construct better discriminative keyphrase extraction systems that do not assume a static, fixed set of keyphrases for a document.
no code implementations • 13 Nov 2016 • Quan Liu, Hui Jiang, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, Yu Hu
The PDP task we investigate in this paper is a complex coreference resolution task which requires the utilization of commonsense knowledge.
1 code implementation • 26 Oct 2016 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang
Distributed representations learned with neural networks have recently been shown to be effective in modeling natural language at fine granularities such as words, phrases, and even sentences.
11 code implementations • ACL 2017 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen
Reasoning and inference are central to human and artificial intelligence.
Ranked #27 on Natural Language Inference on SNLI
no code implementations • LREC 2016 • Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, Colin Cherry
Apart from stance, the tweets are also annotated for whether the target of interest is the target of opinion in the tweet.
no code implementations • 24 Mar 2016 • Quan Liu, Hui Jiang, Andrew Evdokimov, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, Yu Hu
We propose to use neural networks to model association between any two events in a domain.
no code implementations • 29 Apr 2015 • Hongyu Guo, Xiaodan Zhu, Martin Renqiang Min
Many real-world applications are associated with structured data, where not only input but also output has interplay.
no code implementations • 16 Mar 2015 • Xiaodan Zhu, Parinaz Sobhani, Hongyu Guo
The chain-structured long short-term memory (LSTM) has been shown to be effective in a wide range of problems such as speech recognition and machine translation.
no code implementations • 26 Jan 2015 • Xiaodan Zhu, Peter Turney, Daniel Lemire, André Vellino
Unlike the conventional h-index, it weights citations by how many times a reference is mentioned.
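The weighting idea above can be sketched: each paper's citation count becomes the sum of per-reference mention counts rather than one point per citing paper, and an h-style threshold is applied to those weighted counts. This is a simplified reading of the metric, for illustration only:

```python
def weighted_index(mention_counts_per_paper):
    """h-style index over weighted citation counts: the largest h such that
    at least h papers each have a weighted count >= h. Each paper's weighted
    count sums how many times each citing reference mentions it."""
    totals = sorted((sum(m) for m in mention_counts_per_paper), reverse=True)
    h = 0
    for rank, total in enumerate(totals, start=1):
        if total >= rank:
            h = rank
        else:
            break
    return h

# Three papers; each inner list holds per-citing-paper mention counts.
papers = [[3, 2], [1, 1, 1], [1]]
print(weighted_index(papers))
```

A paper mentioned repeatedly by the same citing work thus contributes more than a paper cited once in passing, which is the distinction from the conventional h-index.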
1 code implementation • SEMEVAL 2013 • Saif M. Mohammad, Svetlana Kiritchenko, Xiaodan Zhu
In this paper, we describe how we created two state-of-the-art SVM classifiers, one to detect the sentiment of messages such as tweets and SMS (message-level task) and one to detect the sentiment of a term within a message (term-level task). Our submissions stood first in both tasks, obtaining an F-score of 69.02 in the message-level task and 88.93 in the term-level task.
no code implementations • JAMIA 2011 • Berry de Bruijn, Colin Cherry, Svetlana Kiritchenko, Joel Martin, Xiaodan Zhu
Objective: As clinical text mining continues to mature, its potential as an enabling technology for innovations in patient care and clinical research is becoming a reality.
Ranked #5 on Clinical Concept Extraction on 2010 i2b2/VA