no code implementations • EMNLP 2021 • Weinan He, Canming Huang, Yongmei Liu, Xiaodan Zhu
To better evaluate NLMs, we propose a logic-based framework that focuses on high-quality commonsense knowledge.
no code implementations • EMNLP 2021 • Dayu Li, Xiaodan Zhu, Yang Li, Suge Wang, Deyu Li, Jian Liao, Jianxing Zheng
Emotion inference in multi-turn conversations aims to predict the participant’s emotion in the next upcoming turn without knowing the participant’s response yet, and is a necessary step for applications such as dialogue planning.
no code implementations • Findings (EMNLP) 2021 • Zhan Shi, Hui Liu, Martin Renqiang Min, Christopher Malon, Li Erran Li, Xiaodan Zhu
Image captioning systems are expected to have the ability to combine individual concepts when describing scenes with concept combinations that are not observed during training.
no code implementations • 11 Oct 2024 • Zi'ou Zheng, Christopher Malon, Martin Renqiang Min, Xiaodan Zhu
When performing complex multi-step reasoning tasks, the ability of Large Language Models (LLMs) to derive structured intermediate proof steps is important for ensuring that the models truly perform the desired reasoning and for improving models' explainability.
no code implementations • 4 Oct 2024 • Chu Fei Luo, Radin Shayanfar, Rohan Bhambhoria, Samuel Dahan, Xiaodan Zhu
Misinformation, defined as false or inaccurate information, can result in significant societal harm when it is spread with malicious or even innocuous intent.
no code implementations • 3 Oct 2024 • Xianzhi Li, Ran Zmigrod, Zhiqiang Ma, Xiaomo Liu, Xiaodan Zhu
Language models are capable of memorizing detailed patterns and information, leading to a double-edged effect: they achieve impressive modeling performance on downstream tasks with the stored knowledge but also raise significant privacy concerns.
no code implementations • 12 Sep 2024 • Jonathan Li, Rohan Bhambhoria, Samuel Dahan, Xiaodan Zhu
Generative AI models, such as the GPT and Llama series, have significant potential to assist laypeople in answering legal questions.
no code implementations • 16 Jul 2024 • Haishuo Fang, Xiaodan Zhu, Iryna Gurevych
A crucial requirement for deploying LLM-based agents in real-life applications is the robustness against risky or even irreversible mistakes.
1 code implementation • 16 Jul 2024 • Tianyu Yang, Xiaodan Zhu, Iryna Gurevych
Text anonymization is crucial for sharing sensitive data while maintaining privacy.
1 code implementation • 3 Jul 2024 • Haritz Puerto, Tilek Chubakov, Xiaodan Zhu, Harish Tayyar Madabushi, Iryna Gurevych
In fact, it has been found that instruction tuning on these intermediary reasoning steps improves model performance.
no code implementations • 19 Jun 2024 • Omkar Dige, Diljot Singh, Tsz Fung Yau, Qixuan Zhang, Borna Bolandraftar, Xiaodan Zhu, Faiza Khan Khattak
In this work, we explore two unlearning methods, (1) Partitioned Contrastive Gradient Unlearning (PCGU) applied on decoder models and (2) Negation via Task Vector, to reduce social biases in state-of-the-art and open-source LMs such as LLaMA-2 and OPT.
1 code implementation • 11 Jun 2024 • Haishuo Fang, Xiaodan Zhu, Iryna Gurevych
Answering Questions over Knowledge Graphs (KGQA) is key to well-functioning autonomous language agents in various real-life applications.
1 code implementation • 7 Jun 2024 • Md Imbesat Hassan Rizvi, Xiaodan Zhu, Iryna Gurevych
In this work, we present a comprehensive study of the capability of current state-of-the-art large language models (LLMs) on spatial reasoning.
no code implementations • 28 May 2024 • Stephen Obadinma, Alia Lachana, Maia Norman, Jocelyn Rankin, Joanna Yu, Xiaodan Zhu, Darren Mastropaolo, Deval Pandya, Roxana Sultan, Elham Dolatabadi
Here, we focus on frontline crisis support, where Crisis Responders (CRs) engage in conversations for youth mental health support and assign an issue tag to each conversation.
1 code implementation • 10 May 2024 • Ilia Kuznetsov, Osama Mohammed Afzal, Koen Dercksen, Nils Dycke, Alexander Goldberg, Tom Hope, Dirk Hovy, Jonathan K. Kummerfeld, Anne Lauscher, Kevin Leyton-Brown, Sheng Lu, Mausam, Margot Mieskes, Aurélie Névéol, Danish Pruthi, Lizhen Qu, Roy Schwartz, Noah A. Smith, Thamar Solorio, Jingyan Wang, Xiaodan Zhu, Anna Rogers, Nihar B. Shah, Iryna Gurevych
We hope that our work will help set the agenda for research in machine-assisted scientific quality control in the age of AI, within the NLP community and beyond.
1 code implementation • 8 May 2024 • Chu Fei Luo, Ahmad Ghawanmeh, Xiaodan Zhu, Faiza Khan Khattak
In this work, we propose a new methodology for attacking language models with knowledge graph augmented generation.
no code implementations • 18 Apr 2024 • Rohan Bhambhoria, Samuel Dahan, Jonathan Li, Xiaodan Zhu
This study evaluates the performance of general-purpose AI, like ChatGPT, in legal question-answering tasks, highlighting significant risks to legal professionals and clients.
1 code implementation • 1 Mar 2024 • Christopher Malon, Xiaodan Zhu
We compare this "Sample & Select" method to greedy decoding, beam search, nucleus sampling, and the recently introduced hallucination-avoiding decoders DoLA, P-CRR, and S-CRR.
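As a rough illustration of one of the decoding baselines compared above, a minimal top-p (nucleus) sampling step can be sketched as follows. This is a generic sketch, not the paper's code; the function name and interface are hypothetical.

```python
import numpy as np

def nucleus_sample(logits, p=0.9, rng=None):
    """Sample a token id from the smallest set of tokens whose
    cumulative probability exceeds p (top-p / nucleus sampling)."""
    rng = rng or np.random.default_rng()
    # softmax with max-subtraction for numerical stability
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]        # most probable first
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1   # minimal nucleus covering mass p
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return int(rng.choice(nucleus, p=nucleus_probs))
```

With a strongly peaked distribution and small p, the nucleus collapses to the argmax, recovering greedy decoding as a special case.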
1 code implementation • 14 Feb 2024 • Yihao Fang, Stephen W. Thomas, Xiaodan Zhu
With the widespread adoption of large language models (LLMs) in numerous applications, the challenge of factuality and the propensity for hallucinations has emerged as a significant concern.
1 code implementation • 18 Jan 2024 • Haritz Puerto, Martin Tutek, Somak Aditya, Xiaodan Zhu, Iryna Gurevych
Reasoning is a fundamental component of language understanding.
no code implementations • 5 Jan 2024 • Stephen Obadinma, Xiaodan Zhu, Hongyu Guo
In this work, we highlight and perform a comprehensive study of calibration attacks, a form of adversarial attack that aims to make victim models heavily miscalibrated without altering their predicted labels, thereby endangering the trustworthiness of the models and any follow-up decision making based on their confidence.
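The miscalibration such attacks induce is typically measured with the expected calibration error (ECE): predictions are binned by confidence and the gap between accuracy and average confidence is averaged across bins. A minimal sketch (standard metric, not the paper's implementation):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence and average the
    |accuracy - confidence| gap per bin, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```

A calibration attack can leave accuracy untouched while pushing confidences toward the extremes, which shows up directly as a larger ECE.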
no code implementations • 27 Oct 2023 • Maede Ashofteh Barabadi, Xiaodan Zhu, Wai Yip Chan, Amber L. Simpson, Richard K. G. Do
Understanding the progression of cancer is crucial for defining treatments for patients.
no code implementations • 12 Oct 2023 • Ethan Callanan, Amarachi Mbakwe, Antony Papadimitriou, Yulong Pei, Mathieu Sibue, Xiaodan Zhu, Zhiqiang Ma, Xiaomo Liu, Sameena Shah
Large Language Models (LLMs) have demonstrated remarkable performance on a wide range of Natural Language Processing (NLP) tasks, often matching or even beating state-of-the-art task-specific models.
1 code implementation • 3 Oct 2023 • Behzad Shayegh, Yanshuai Cao, Xiaodan Zhu, Jackie C. K. Cheung, Lili Mou
We investigate the unsupervised constituency parsing task, which organizes words and phrases of a sentence into a hierarchical structure without using linguistically annotated data.
1 code implementation • 25 Aug 2023 • Yihao Fang, Xianzhi Li, Stephen W. Thomas, Xiaodan Zhu
Open intent detection, a crucial aspect of natural language understanding, involves the identification of previously unseen intents in user-generated text.
Ranked #1 on Open Intent Detection on StackOverflow_CG
no code implementations • 6 Jul 2023 • Zi'ou Zheng, Xiaodan Zhu
We propose NatLogAttack to perform systematic attacks centring around natural logic, a classical logic formalism that traces back to Aristotle's syllogisms and has been developed closely alongside natural language inference.
no code implementations • 25 May 2023 • Chu Fei Luo, Rohan Bhambhoria, Samuel Dahan, Xiaodan Zhu
Deep learning has made significant progress in the past decade, and demonstrates potential to solve problems with extensive social impact.
no code implementations • 24 May 2023 • Rohan Bhambhoria, Lei Chen, Xiaodan Zhu
To address these limitations, we propose the use of entailment-contradiction prediction in conjunction with LLMs, which allows for strong performance in a strict zero-shot setting.
1 code implementation • 23 May 2023 • Chu Fei Luo, Rohan Bhambhoria, Xiaodan Zhu, Samuel Dahan
With this task definition, automatic hate speech detection can be more closely aligned to enforceable laws, and hence assist in more rigorous enforcement of legal protections against harmful speech in public forums.
1 code implementation • 20 May 2023 • Jonathan Li, Will Aitken, Rohan Bhambhoria, Xiaodan Zhu
Parameter-efficient tuning aims to mitigate the large memory requirements of adapting pretrained language models for downstream tasks.
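One widely used parameter-efficient method is LoRA, which freezes the pretrained weight matrix and trains only a low-rank update. A minimal sketch of the idea (the class and its interface are illustrative, not any library's API):

```python
import numpy as np

class LoRALinear:
    """Sketch of a LoRA-adapted linear layer: the pretrained weight W
    is frozen; only the low-rank factors A and B are trained, adding
    r*(d_in + d_out) parameters instead of d_in*d_out."""
    def __init__(self, W, r=4, alpha=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W = W                                    # frozen, (d_out, d_in)
        d_out, d_in = W.shape
        self.A = rng.normal(0, 0.01, size=(r, d_in))  # trainable
        self.B = np.zeros((d_out, r))                 # trainable, init 0
        self.scale = alpha / r

    def __call__(self, x):
        # W x + scale * B (A x); with B = 0 this equals the frozen layer
        return self.W @ x + self.scale * (self.B @ (self.A @ x))
```

Initializing B to zero means the adapted layer starts out identical to the pretrained one, so training only perturbs it gradually.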
no code implementations • 10 May 2023 • Xianzhi Li, Samuel Chan, Xiaodan Zhu, Yulong Pei, Zhiqiang Ma, Xiaomo Liu, Sameena Shah
The most recent large language models (LLMs), such as ChatGPT and GPT-4, have shown exceptional capabilities as generalist models, achieving state-of-the-art performance on a wide range of NLP tasks with little or no adaptation.
Ranked #1 on Question Answering on ConvFinQA
no code implementations • 5 Mar 2023 • Stephen Obadinma, Hongyu Guo, Xiaodan Zhu
In this paper, we examine the effectiveness of several popular task-agnostic data augmentation techniques, i.e., EDA, Back Translation, and Mixup, when using two general parameter-efficient tuning methods, P-tuning v2 and LoRA, under data scarcity.
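Of the augmentation techniques studied, Mixup is the simplest to state: interpolate two training examples and their (one-hot) labels with a weight drawn from a Beta distribution. A minimal sketch (generic Mixup, not the paper's code):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup: convex combination of two examples and their one-hot
    labels, with the mixing weight drawn from Beta(alpha, alpha)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

For text, Mixup is usually applied in embedding or hidden-state space rather than on raw tokens, since token sequences cannot be interpolated directly.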
no code implementations • 3 Mar 2023 • Hongbin Sun, Xinmei Sun, Lei Kou, Benfa Zhang, Xiaodan Zhu
To improve the utilization efficiency of multi-energy coupling in a park-level integrated energy system (PIES), promote wind power consumption, and reduce carbon emissions, we construct a low-carbon economic operation optimization model of PIES that integrates flexible load and a carbon trading mechanism.
1 code implementation • 7 Feb 2023 • Stephen Obadinma, Faiza Khan Khattak, Shirley Wang, Tania Sidhom, Elaine Lau, Sean Robertson, Jingcheng Niu, Winnie Au, Alif Munim, Karthik Raja K. Bhaskar, Bencheng Wei, Iris Ren, Waqar Muhammad, Erin Li, Bukola Ishola, Michael Wang, Griffin Tanner, Yu-Jia Shiah, Sean X. Zhang, Kwesi P. Apponsah, Kanishk Patel, Jaswinder Narain, Deval Pandya, Xiaodan Zhu, Frank Rudzicz, Elham Dolatabadi
Building Agent Assistants that can help improve customer service support requires inputs from industry users and their customers, as well as knowledge about state-of-the-art Natural Language Processing (NLP) technology.
no code implementations • 25 Oct 2022 • Jonathan Li, Rohan Bhambhoria, Xiaodan Zhu
Unfortunately, parameter-efficient methods perform poorly with small amounts of data, which are common in the legal domain (where data labelling costs are high).
no code implementations • 25 Oct 2022 • Xianzhi Li, Will Aitken, Xiaodan Zhu, Stephen W. Thomas
With the recent surge of NLP technologies in the financial domain, banks and other financial entities have adopted virtual agents (VA) to assist customers.
1 code implementation • 18 Oct 2022 • Mo Yu, Yi Gu, Xiaoxiao Guo, Yufei Feng, Xiaodan Zhu, Michael Greenspan, Murray Campbell, Chuang Gan
Hence, in order to achieve higher performance on our tasks, models need to effectively utilize such functional knowledge to infer the outcomes of actions, rather than relying solely on memorizing facts.
1 code implementation • 9 Mar 2022 • Yufei Feng, Xiaoyu Yang, Xiaodan Zhu, Michael Greenspan
We introduce a neuro-symbolic natural logic framework based on reinforcement learning with introspective revision.
no code implementations • 1 Jan 2022 • Rohan Bhambhoria, Hui Liu, Samuel Dahan, Xiaodan Zhu
In this work, we utilize deep learning models in the area of trademark law to shed light on the issue of likelihood of confusion between trademarks.
no code implementations • 29 Sep 2021 • Stephen Obadinma, Xiaodan Zhu, Hongyu Guo
Our studies suggest the following: most of the time, curriculum learning has a negligible effect on calibration; however, in certain cases, under limited training time and noisy data, curriculum learning can substantially reduce calibration error in a manner that cannot be explained by dynamically sampling the dataset.
1 code implementation • Findings (EMNLP) 2021 • Xiaoyu Yang, Xiaodan Zhu
Fact verification based on structured data is challenging as it requires models to understand both natural language and symbolic operations performed over tables.
no code implementations • 14 Sep 2021 • Lei Shen, Haolan Zhan, Xin Shen, Hongshen Chen, Xiaofang Zhao, Xiaodan Zhu
The training method updates the parameters of a trained NCM on two small sets containing newly maintained and removed samples, respectively.
no code implementations • 8 Sep 2021 • Xiaoyu Yang, Xiaodan Zhu, Zhan Shi, Tianda Li
There have been two lines of approaches that can be used to further address the limitation: (1) unsupervised pretraining can leverage knowledge in much larger unstructured text data; (2) structured (often human-curated) knowledge has started to be considered in neural-network-based models for NLI.
1 code implementation • EMNLP 2021 • Hui Liu, Zhan Shi, Xiaodan Zhu
For the message-pair classifier, we enrich its training data by retrieving message pairs with high confidence from the disentangled sessions predicted by the session classifier.
1 code implementation • EMNLP 2021 • Jia-Chen Gu, Zhen-Hua Ling, Yu Wu, Quan Liu, Zhigang Chen, Xiaodan Zhu
This is a many-to-many semantic matching task because both contexts and personas in SPD are composed of multiple sentences.
no code implementations • ACL 2021 • Yinpei Dai, Hangyu Li, Yongbin Li, Jian Sun, Fei Huang, Luo Si, Xiaodan Zhu
Existing dialog state tracking (DST) models are trained with dialog data in a random order, neglecting rich structural information in a dataset.
1 code implementation • ACL 2021 • Zhan Shi, Hui Liu, Xiaodan Zhu
In this paper we propose a novel approach to encourage captioning models to produce more detailed captions using natural language inference, based on the motivation that, among different captions of an image, descriptive captions are more likely to entail less descriptive captions.
no code implementations • SEMEVAL 2021 • Yuxuan Zhou, Kaiyin Zhou, Xien Liu, Ji Wu, Xiaodan Zhu
This paper describes our system for verifying statements with tables at SemEval-2021 Task 9.
no code implementations • 1 Jun 2021 • Yinpei Dai, Hangyu Li, Yongbin Li, Jian Sun, Fei Huang, Luo Si, Xiaodan Zhu
Existing dialog state tracking (DST) models are trained with dialog data in a random order, neglecting rich structural information in a dataset.
Ranked #1 on Multi-domain Dialogue State Tracking on MULTIWOZ 2.1 (using extra training data)
1 code implementation • SEMEVAL 2021 • Boyuan Zheng, Xiaoyu Yang, Yu-Ping Ruan, ZhenHua Ling, Quan Liu, Si Wei, Xiaodan Zhu
Given a passage and the corresponding question, a participating system is expected to choose the correct answer from five candidates of abstract concepts in a cloze-style machine reading comprehension setup.
1 code implementation • 19 May 2021 • Jia-Chen Gu, Hui Liu, Zhen-Hua Ling, Quan Liu, Zhigang Chen, Xiaodan Zhu
Empirical studies on the Persona-Chat dataset show that the partner personas neglected in previous studies can improve the accuracy of response selection in the IMN- and BERT-based models.
1 code implementation • NAACL 2021 • Hui Liu, Danqing Zhang, Bing Yin, Xiaodan Zhu
In this paper, we explore to improve pretrained models with label hierarchies on the ZS-MTC task.
no code implementations • 7 Mar 2021 • Binyuan Hui, Xiang Shi, Ruiying Geng, Binhua Li, Yongbin Li, Jian Sun, Xiaodan Zhu
In this paper, we present the Schema Dependency guided multi-task Text-to-SQL model (SDSQL) to guide the network to effectively capture the interactions between questions and schemas.
2 code implementations • 5 Jan 2021 • Binyuan Hui, Ruiying Geng, Qiyu Ren, Binhua Li, Yongbin Li, Jian Sun, Fei Huang, Luo Si, Pengfei Zhu, Xiaodan Zhu
Semantic parsing has long been a fundamental problem in natural language processing.
Ranked #5 on Dialogue State Tracking on CoSQL
1 code implementation • 22 Dec 2020 • Chao-Hong Tan, Xiaoyu Yang, Zi'ou Zheng, Tianda Li, Yufei Feng, Jia-Chen Gu, Quan Liu, Dan Liu, Zhen-Hua Ling, Xiaodan Zhu
Task-oriented conversational modeling with unstructured knowledge access, Track 1 of the 9th Dialogue System Technology Challenge (DSTC 9), requires building a system that generates responses given the dialogue history and knowledge access.
1 code implementation • COLING 2020 • Yufei Feng, Zi'ou Zheng, Quan Liu, Michael Greenspan, Xiaodan Zhu
We explore end-to-end trained differentiable models that integrate natural logic with neural networks, aiming to keep the backbone of natural language reasoning based on the natural logic formalism while introducing subsymbolic vector representations and neural components.
no code implementations • 19 Oct 2020 • Mo Yu, Xiaoxiao Guo, Yufei Feng, Xiaodan Zhu, Michael Greenspan, Murray Campbell
Commonsense reasoning simulates the human ability to make presumptions about our physical world, and it is an indispensable cornerstone in building general AI systems.
1 code implementation • EMNLP 2020 • Xiaoyu Yang, Feng Nie, Yufei Feng, Quan Liu, Zhigang Chen, Xiaodan Zhu
Built on that, we construct the graph attention verification networks, which are designed to fuse different sources of evidences from verbalized program execution, program structures, and the original statements and tables, to make the final verification decision.
1 code implementation • SEMEVAL 2020 • Xiaoyu Yang, Stephen Obadinma, Huasha Zhao, Qiong Zhang, Stan Matwin, Xiaodan Zhu
Subtask-1 aims to determine whether a given sentence is a counterfactual statement or not.
2 code implementations • SEMEVAL 2020 • Cunxiang Wang, Shuailong Liang, Yili Jin, Yilong Wang, Xiaodan Zhu, Yue Zhang
In this paper, we present SemEval-2020 Task 4, Commonsense Validation and Explanation (ComVE), which includes three subtasks, aiming to evaluate whether a system can distinguish a natural language statement that makes sense to humans from one that does not, and provide the reasons.
no code implementations • ACL 2020 • Zhan Shi, Xu Zhou, Xipeng Qiu, Xiaodan Zhu
Image captioning is a multimodal problem that has drawn extensive attention in both the natural language processing and computer vision community.
no code implementations • ACL 2020 • Yinpei Dai, Hangyu Li, Chengguang Tang, Yongbin Li, Jian Sun, Xiaodan Zhu
Existing end-to-end dialog systems perform less effectively when data is scarce.
1 code implementation • 21 Jun 2020 • Zhan Shi, Xu Zhou, Xipeng Qiu, Xiaodan Zhu
Image captioning is a multimodal problem that has drawn extensive attention in both the natural language processing and computer vision community.
no code implementations • ACL 2020 • Ruiying Geng, Binhua Li, Yongbin Li, Jian Sun, Xiaodan Zhu
This paper proposes Dynamic Memory Induction Networks (DMIN) for few-shot text classification.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Jia-Chen Gu, Zhen-Hua Ling, Quan Liu, Zhigang Chen, Xiaodan Zhu
The challenges of building knowledge-grounded retrieval-based chatbots lie in how to ground a conversation on its background knowledge and how to match response candidates with both context and knowledge simultaneously.
1 code implementation • 8 Apr 2020 • Tianda Li, Jia-Chen Gu, Xiaodan Zhu, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei
Disentanglement is a problem in which multiple conversations occur in the same channel simultaneously, and the listener must decide which utterance is part of the conversation they will respond to.
2 code implementations • 7 Apr 2020 • Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, Xiaodan Zhu
In this paper, we study the problem of employing pre-trained language models for multi-turn response selection in retrieval-based chatbots.
no code implementations • 6 Apr 2020 • Yufei Feng, Mo Yu, Wenhan Xiong, Xiaoxiao Guo, Jun-Jie Huang, Shiyu Chang, Murray Campbell, Michael Greenspan, Xiaodan Zhu
We propose the new problem of learning to recover reasoning chains from weakly supervised signals, i.e., the question-answer pairs.
no code implementations • 4 Apr 2020 • Jia-Chen Gu, Tianda Li, Quan Liu, Xiaodan Zhu, Zhen-Hua Ling, Yu-Ping Ruan
The NOESIS II challenge, as the Track 2 of the 8th Dialogue System Technology Challenges (DSTC 8), is the extension of DSTC 7.
Ranked #1 on Conversation Disentanglement on irc-disentanglement
2 code implementations • 20 Nov 2019 • Yongfei Liu, Bo Wan, Xiaodan Zhu, Xuming He
To address their limitations, this paper proposes a language-guided graph representation to capture the global context of grounding entities and their relations, and develops a cross-modal graph matching strategy for the multiple-phrase visual grounding task.
no code implementations • 25 Sep 2019 • Xiaoyan Li, Iluju Kiringa, Tet Yeap, Xiaodan Zhu, Yifeng Li
Identifying anomalous samples from highly complex and unstructured data is a crucial but challenging task in a variety of intelligent systems.
1 code implementation • IJCNLP 2019 • Jia-Chen Gu, Zhen-Hua Ling, Xiaodan Zhu, Quan Liu
Compared with previous persona fusion approaches which enhance the representation of a context by calculating its similarity with a given persona, the DIM model adopts a dual matching architecture, which performs interactive matching between responses and contexts and between responses and personas respectively for ranking response candidates.
1 code implementation • 15 Jul 2019 • Xiaoyan Li, Iluju Kiringa, Tet Yeap, Xiaodan Zhu, Yifeng Li
In this paper, we develop and explore deep anomaly detection techniques based on the capsule network (CapsNet) for image data.
no code implementations • NAACL 2019 • Samuel Bowman, Xiaodan Zhu
This tutorial discusses cutting-edge research on NLI, including recent advances in dataset development, state-of-the-art deep learning models, and highlights from recent research on using NLI to understand the capabilities and limits of deep learning models for language understanding and reasoning.
1 code implementation • 27 Apr 2019 • Tianda Li, Xiaodan Zhu, Quan Liu, Qian Chen, Zhigang Chen, Si Wei
Natural language inference (NLI) is among the most challenging tasks in natural language understanding.
no code implementations • 22 Apr 2019 • Yu-Ping Ruan, Xiaodan Zhu, Zhen-Hua Ling, Zhan Shi, Quan Liu, Si Wei
Winograd Schema Challenge (WSC) was proposed as an AI-hard problem in testing computers' intelligence on common sense representation and reasoning.
5 code implementations • IJCNLP 2019 • Ruiying Geng, Binhua Li, Yongbin Li, Xiaodan Zhu, Ping Jian, Jian Sun
Therefore, we should be able to learn a general representation of each class in the support set and then compare it to new queries.
Ranked #1 on Few-Shot Text Classification on ODIC 5-way (10-shot)
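The idea of learning a general representation per class from the support set is shared with the simpler prototypical-network baseline: average each class's support embeddings into a prototype and assign queries to the nearest one. A minimal sketch of that baseline (not the paper's Induction Network, which learns the class representation dynamically):

```python
import numpy as np

def prototype_classify(support, support_labels, query, n_classes):
    """Compute one prototype per class (mean of its support embeddings)
    and assign each query to the nearest prototype (Euclidean)."""
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in range(n_classes)])
    # pairwise squared distances: (n_query, n_classes)
    dists = ((query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)
```

The Induction Network replaces the fixed mean with a learned induction module, but the compare-query-to-class-representation structure is the same.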
no code implementations • 27 Jan 2019 • Yu-Ping Ruan, Zhen-Hua Ling, Quan Liu, Jia-Chen Gu, Xiaodan Zhu
At this stage, two different models are proposed, i.e., a variational generative (VariGen) model and a retrieval-based (Retrieval) model.
no code implementations • 7 Sep 2018 • Yihao Fang, Rong Zheng, Xiaodan Zhu
A novel logographic subword model is proposed to reinterpret logograms as abstract subwords for neural machine translation.
1 code implementation • COLING 2018 • Qian Chen, Zhen-Hua Ling, Xiaodan Zhu
This paper explores generalized pooling methods to enhance sentence embedding.
Ranked #10 on Sentiment Analysis on Yelp Fine-grained classification
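The generalized pooling the paper explores subsumes the two common baselines, mean and max pooling, which collapse a matrix of token vectors into one fixed-length sentence embedding. A minimal sketch of those baselines (illustrative only; the paper's method generalizes beyond them with multi-head vectorized attention):

```python
import numpy as np

def pool_sentence(token_vecs, mode="mean"):
    """Collapse a (seq_len, dim) matrix of token vectors into a single
    fixed-length sentence embedding by mean or max pooling."""
    if mode == "mean":
        return token_vecs.mean(axis=0)
    if mode == "max":
        return token_vecs.max(axis=0)  # element-wise max over positions
    raise ValueError(f"unknown pooling mode: {mode}")
```

Both baselines are permutation-invariant and parameter-free, which is precisely the limitation that learned, attention-based pooling aims to remove.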
no code implementations • ICLR 2018 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen
Modeling informal inference in natural language is very challenging.
2 code implementations • ACL 2018 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, Si Wei
With the availability of large annotated data, it has recently become feasible to train complex models such as neural-network-based inference models, which have been shown to achieve state-of-the-art performance.
Ranked #20 on Natural Language Inference on SNLI
2 code implementations • WS 2017 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen
The RepEval 2017 Shared Task aims to evaluate natural language understanding models for sentence representation, in which a sentence is represented as a fixed-length vector with neural networks and the quality of the representation is tested with a natural language inference task.
Ranked #69 on Natural Language Inference on SNLI
no code implementations • ACL 2017 • Xiaodan Zhu, Edward Grefenstette
Learning representation to model the meaning of text has been a core problem in NLP.
no code implementations • EACL 2017 • Parinaz Sobhani, Diana Inkpen, Xiaodan Zhu
Current models for stance classification often treat each target independently, but in many applications, there exist natural dependencies among targets, e.g., stance towards two or more politicians in an election or towards several brands of the same product.
no code implementations • 14 Mar 2017 • Junbei Zhang, Xiaodan Zhu, Qian Chen, Li-Rong Dai, Si Wei, Hui Jiang
The last several years have seen intensive interest in exploring neural-network-based models for machine comprehension (MC) and question answering (QA).
Ranked #39 on Question Answering on SQuAD1.1 dev
no code implementations • COLING 2016 • Yunli Wang, Yong Jin, Xiaodan Zhu, Cyril Goutte
We show that such knowledge can be used to construct better discriminative keyphrase extraction systems that do not assume a static, fixed set of keyphrases for a document.
no code implementations • 13 Nov 2016 • Quan Liu, Hui Jiang, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, Yu Hu
The PDP task we investigate in this paper is a complex coreference resolution task which requires the utilization of commonsense knowledge.
Ranked #63 on Coreference Resolution on Winograd Schema Challenge
1 code implementation • 26 Oct 2016 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang
Distributed representations learned with neural networks have recently been shown to be effective in modeling natural language at fine granularities such as words, phrases, and even sentences.
11 code implementations • ACL 2017 • Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, Diana Inkpen
Reasoning and inference are central to human and artificial intelligence.
Ranked #30 on Natural Language Inference on SNLI
no code implementations • LREC 2016 • Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, Colin Cherry
Apart from stance, the tweets are also annotated for whether the target of interest is the target of opinion in the tweet.
no code implementations • 24 Mar 2016 • Quan Liu, Hui Jiang, Andrew Evdokimov, Zhen-Hua Ling, Xiaodan Zhu, Si Wei, Yu Hu
We propose to use neural networks to model association between any two events in a domain.
Ranked #11 on Natural Language Understanding on PDP60
no code implementations • 29 Apr 2015 • Hongyu Guo, Xiaodan Zhu, Martin Renqiang Min
Many real-world applications are associated with structured data, where not only input but also output has interplay.
no code implementations • 16 Mar 2015 • Xiaodan Zhu, Parinaz Sobhani, Hongyu Guo
The chain-structured long short-term memory (LSTM) has been shown to be effective in a wide range of problems such as speech recognition and machine translation.
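For reference, a single step of the chain-structured LSTM that this work generalizes to tree structures can be sketched as follows (textbook formulation, not the paper's code; weight layout is an assumption of this sketch):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One step of a chain-structured LSTM.
    W: (4H, D), U: (4H, H), b: (4H,);
    gate blocks ordered input, forget, output, candidate."""
    z = W @ x + U @ h + b
    H = h.shape[0]
    i, f, o, g = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # gated memory update
    h_new = sigmoid(o) * np.tanh(c_new)               # gated output
    return h_new, c_new
```

The tree-structured variant replaces the single previous state (h, c) with the states of a node's children, one forget gate per child.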
no code implementations • 26 Jan 2015 • Xiaodan Zhu, Peter Turney, Daniel Lemire, André Vellino
Unlike the conventional h-index, it weights citations by how many times a reference is mentioned.
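To make the contrast concrete: the conventional h-index counts each citation once, whereas the proposed measure weights a citing paper by how many times it mentions the reference in its text. A minimal sketch of both pieces (function names and the exact weighting are illustrative assumptions, not the paper's definition):

```python
def h_index(citation_counts):
    """Conventional h-index: the largest h such that h papers
    each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    return sum(1 for i, c in enumerate(counts, start=1) if c >= i)

def mention_weighted_counts(mentions_per_citing_paper):
    """For each paper, replace its raw citation count with the total
    number of in-text mentions across its citing papers."""
    return [sum(m) for m in mentions_per_citing_paper]
```

Feeding the mention-weighted counts into the same threshold rule yields a mention-sensitive variant of the index.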
1 code implementation • SEMEVAL 2013 • Saif M. Mohammad, Svetlana Kiritchenko, Xiaodan Zhu
In this paper, we describe how we created two state-of-the-art SVM classifiers, one to detect the sentiment of messages such as tweets and SMS (message-level task) and one to detect the sentiment of a term within a message (term-level task). Our submissions stood first in both tasks on tweets, obtaining an F-score of 69.02 in the message-level task and 88.93 in the term-level task.
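The overall pipeline is a linear classifier over sparse text features. The sketch below uses bag-of-words features and a perceptron update as a stand-in for the paper's linear SVM with richer lexicon and n-gram features; all names here are illustrative, not the paper's code.

```python
from collections import defaultdict

def featurize(text):
    """Bag-of-words counts (the paper used richer lexicon/n-gram features)."""
    feats = defaultdict(int)
    for tok in text.lower().split():
        feats[tok] += 1
    return feats

def train_perceptron(data, epochs=10):
    """Train a linear classifier with the perceptron rule;
    a simple stand-in for the linear SVM used in the paper."""
    w = defaultdict(float)
    for _ in range(epochs):
        for text, label in data:              # label in {+1, -1}
            feats = featurize(text)
            score = sum(w[f] * v for f, v in feats.items())
            if label * score <= 0:            # misclassified -> update
                for f, v in feats.items():
                    w[f] += label * v
    return w

def predict(w, text):
    score = sum(w[f] * v for f, v in featurize(text).items())
    return 1 if score > 0 else -1
```

An SVM differs only in the training objective (max-margin with hinge loss and regularization); the prediction rule, a sign of a weighted feature sum, is the same.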
no code implementations • JAMIA 2011 • Berry de Bruijn, Colin Cherry, Svetlana Kiritchenko, Joel Martin, Xiaodan Zhu
Objective: As clinical text mining continues to mature, its potential as an enabling technology for innovations in patient care and clinical research is becoming a reality.
Ranked #5 on Clinical Concept Extraction on 2010 i2b2/VA