1 code implementation • EMNLP 2021 • Jihao Shi, Xiao Ding, Li Du, Ting Liu, Bing Qin
Many open-domain question answering problems can be cast as a textual entailment task, where a question and candidate answers are concatenated to form hypotheses.
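The entailment formulation described above can be sketched in a few lines; the helper name and the scoring step are illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: casting multiple-choice QA as textual entailment.
# Each (question, candidate answer) pair is concatenated into a hypothesis;
# an entailment model would then score each hypothesis against a premise.

def build_hypotheses(question, candidates):
    """Turn a question and its candidate answers into entailment hypotheses."""
    return [f"{question} {answer}" for answer in candidates]

premise = "Water boils at 100 degrees Celsius at sea level."
hypotheses = build_hypotheses(
    "At sea level, water boils at", ["100 degrees Celsius", "50 degrees Celsius"]
)
# A (hypothetical) entailment model scores each (premise, hypothesis) pair,
# and the highest-scoring candidate is returned as the answer.
```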
1 code implementation • COLING 2022 • Xiao Ding, Bowen Chen, Li Du, Bing Qin, Ting Liu
To fill the gap, we propose CogBERT, a framework that induces fine-grained cognitive features from cognitive data and incorporates them into BERT by adaptively adjusting their weights for different NLP tasks.
1 code implementation • 16 Dec 2022 • Kai Xiong, Xiao Ding, Zhongyang Li, Li Du, Bing Qin, Yi Zheng, Baoxing Huai
Causal chain reasoning (CCR) is an essential ability for many decision-making AI systems, which requires the model to build reliable causal chains by connecting causal pairs.
no code implementations • 21 Aug 2022 • Tingting Wu, Xiao Ding, Hao Zhang, Jinglong Gao, Li Du, Bing Qin, Ting Liu
To mitigate this issue, curriculum learning has been proposed to improve model performance and generalization by ordering training samples in a meaningful (e.g., easy-to-hard) sequence.
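The easy-to-hard ordering at the heart of curriculum learning can be sketched minimally; the difficulty proxy used here (sentence length) is an assumption for illustration, where real curricula typically use model loss or annotation-based hardness scores.

```python
# Minimal curriculum-learning sketch (not the paper's method):
# rank training samples by a difficulty proxy and feed them easy-to-hard.

def curriculum_order(samples, difficulty):
    """Sort samples from easy to hard under a given difficulty function."""
    return sorted(samples, key=difficulty)

samples = ["a cat sat on the mat", "a cat", "cat"]
# Sentence length stands in for difficulty in this toy example.
ordered = curriculum_order(samples, difficulty=len)
# → ["cat", "a cat", "a cat sat on the mat"]
```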
no code implementations • 14 Aug 2022 • Bowen Chen, Xiao Ding, Li Du, Bing Qin, Ting Liu
Given a task, humans learn from easy to hard, whereas models learn in a random order.
no code implementations • Findings (ACL) 2022 • Li Du, Xiao Ding, Yue Zhang, Kai Xiong, Ting Liu, Bing Qin
To this end, we incorporate an additional structured variable into BERT to learn to predict the event connections in the training process.
1 code implementation • ACL 2022 • Li Du, Xiao Ding, Kai Xiong, Ting Liu, Bing Qin
Understanding causality is of vital importance for various Natural Language Processing (NLP) applications.
no code implementations • 21 Jan 2022 • Feng Ren, Xiao Ding, Min Zheng, Mikhail Korzinkin, Xin Cai, Wei Zhu, Alexey Mantsyzov, Alex Aliper, Vladimir Aladinskiy, Zhongying Cao, Shanshan Kong, Xi Long, Bonnie Hei Man Liu, Yingtao Liu, Vladimir Naumov, Anastasia Shneyderman, Ivan V. Ozerov, Ju Wang, Frank W. Pun, Alan Aspuru-Guzik, Michael Levitt, Alex Zhavoronkov
The AlphaFold computer program predicted protein structures for the whole human genome, which is considered a remarkable breakthrough in both artificial intelligence (AI) applications and structural biology.
1 code implementation • ACL 2021 • Li Du, Xiao Ding, Ting Liu, Bing Qin
Abductive reasoning aims at inferring the most plausible explanation for observed events, and plays a critical role in various NLP applications, such as reading comprehension and question answering.
1 code implementation • ACL 2021 • Li Du, Xiao Ding, Kai Xiong, Ting Liu, Bing Qin
ExCAR first acquires additional evidence information from a large-scale causal event graph as logical rules for causal reasoning.
no code implementations • 21 Jul 2021 • Zhongyang Li, Xiao Ding, Ting Liu, J. Edward Hu, Benjamin Van Durme
We present a conditional text generation framework that posits sentential expressions of possible causes and effects.
no code implementations • 21 Jul 2021 • Zhongyang Li, Xiao Ding, Kuo Liao, Bing Qin, Ting Liu
Recent work has shown success in incorporating pre-trained models like BERT to improve NLP systems.
no code implementations • SEMEVAL 2020 • Xiao Ding, Dingkui Hao, Yuewei Zhang, Kuo Liao, Zhongyang Li, Bing Qin, Ting Liu
In this task, we focus on detecting causation, especially counterfactuals, in text.
no code implementations • IJCNLP 2019 • Li Du, Xiao Ding, Ting Liu, Zhongyang Li
Understanding events and event-centered commonsense reasoning is crucial for natural language processing (NLP).
1 code implementation • IJCNLP 2019 • Xiao Ding, Kuo Liao, Ting Liu, Zhongyang Li, Junwen Duan
Prior work has proposed effective methods to learn event representations that can capture syntactic and semantic information over text corpora, demonstrating their effectiveness for downstream tasks such as script event prediction.
no code implementations • 18 Jul 2019 • Xiao Ding, Zhongyang Li, Ting Liu, Kuo Liao
The evolution and development of events follow basic principles that cause events to happen in sequence.
1 code implementation • 17 May 2019 • Zhongyang Li, Xiao Ding, Ting Liu
In this study, we investigate a transferable BERT (TransBERT) training framework, which can transfer not only general language knowledge from large-scale unlabeled data but also specific kinds of knowledge from various semantically related supervised tasks, for a target task.
no code implementations • COLING 2018 • Zhongyang Li, Xiao Ding, Ting Liu
In this paper, we propose using an adversarial-training-augmented Seq2Seq model to generate reasonable and diverse story endings given a story context.
1 code implementation • COLING 2018 • Junwen Duan, Yue Zhang, Xiao Ding, Ching-Yun Chang, Ting Liu
The model uses a target-sensitive representation of the news abstract to weigh sentences in the news content, so as to select and combine the most informative sentences for market modeling.
no code implementations • NAACL 2018 • Junwen Duan, Xiao Ding, Ting Liu
To address the above issues, we propose a reinforcement learning based approach, which automatically induces target-specific sentence representations over tree structures.
1 code implementation • 14 May 2018 • Zhongyang Li, Xiao Ding, Ting Liu
Script event prediction requires a model to predict the subsequent event given an existing event context.
no code implementations • COLING 2016 • Xiao Ding, Yue Zhang, Ting Liu, Junwen Duan
Representing structured events as vectors in continuous space offers a new way for defining dense features for natural language processing (NLP) applications.
no code implementations • 10 Oct 2016 • Xiaofei Sun, Jiang Guo, Xiao Ding, Ting Liu
This paper investigates the problem of network embedding, which aims at learning low-dimensional vector representation of nodes in networks.
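The network embedding objective can be sketched with a toy update rule; the node names, dimensions, and single gradient-style step below are illustrative assumptions, not the paper's training procedure.

```python
# Toy sketch of network embedding (illustrative, not the paper's method):
# represent each node by a low-dimensional vector and nudge connected
# nodes' vectors toward each other, so linked nodes end up closer.

import random

random.seed(0)
nodes = ["a", "b", "c"]
edges = [("a", "b")]
dim = 2

# Random initialization of one low-dimensional vector per node.
emb = {n: [random.uniform(-1.0, 1.0) for _ in range(dim)] for n in nodes}

def pull_together(u, v, lr=0.1):
    """Move the embeddings of two connected nodes toward each other."""
    for i in range(dim):
        delta = emb[v][i] - emb[u][i]
        emb[u][i] += lr * delta
        emb[v][i] -= lr * delta

for u, v in edges:
    pull_together(u, v)
```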