no code implementations • WMT (EMNLP) 2020 • Xiangpeng Wei, Ping Guo, Yunpeng Li, Xingsheng Zhang, Luxi Xing, Yue Hu
In this paper, we introduce the systems submitted by IIE for the WMT20 shared task on German-French news translation.
no code implementations • COLING 2022 • Yuqiang Xie, Yue Hu, Yunpeng Li, Guanqun Bi, Luxi Xing, Wei Peng
Inspired by psychology theories, we introduce global psychological state chains, which include the needs and emotions of the protagonists, to help a story generation system create more controllable and well-planned stories.
1 code implementation • COLING 2022 • Yuqiang Xie, Yue Hu, Wei Peng, Guanqun Bi, Luxi Xing
Motivations, emotions, and actions are interrelated, essential factors in human activities.
1 code implementation • 24 Jun 2022 • Wei Peng, Yue Hu, Luxi Xing, Yuqiang Xie, Yajing Sun
Specifically, CFO-Net designs a feedback memory module, including a strategy pool and a feedback pool, to obtain emotion-aware strategy representations.
1 code implementation • 7 May 2022 • Wei Peng, Yue Hu, Yuqiang Xie, Luxi Xing, Yajing Sun
In this paper, we propose a novel cognitive framework of individual interaction.
1 code implementation • 27 Apr 2022 • Wei Peng, Yue Hu, Luxi Xing, Yuqiang Xie, Yajing Sun, Yunpeng Li
Emotional support conversation aims at reducing the emotional distress of the help-seeker, which is a new and challenging task.
1 code implementation • 7 Mar 2022 • Dingkun Long, Qiong Gao, Kuan Zou, Guangwei Xu, Pengjun Xie, Ruijie Guo, Jian Xu, Guanjun Jiang, Luxi Xing, Ping Yang
We find that the performance of retrieval models trained on general-domain datasets inevitably degrades on specific domains.
1 code implementation • 18 Feb 2022 • Yuqiang Xie, Yue Hu, Luxi Xing, Yunpeng Li, Wei Peng, Ping Guo
To address these two issues, we propose a novel Contrastive Learning framework for Story Ending Generation (CLSEG), which has two steps: multi-aspect sampling and story-specific contrastive learning.
no code implementations • 14 Feb 2022 • Wei Peng, Yue Hu, Luxi Xing, Yuqiang Xie, Xingsheng Zhang, Yajing Sun
Intention, emotion, and action are important elements in human activities.
no code implementations • 16 Jul 2021 • Yajing Sun, Yue Hu, Luxi Xing, Yuqiang Xie, Xiangpeng Wei
End-to-end neural dialogue systems suffer from generating inconsistent and repetitive responses.
no code implementations • 4 Jul 2021 • Luxi Xing, Yue Hu, Jing Yu, Yuqiang Xie, Wei Peng
It is common to utilize external knowledge to help machines answer questions that require background commonsense, but unrestricted knowledge can introduce noisy and misleading information.
no code implementations • 8 Mar 2021 • Wei Peng, Yue Hu, Jing Yu, Luxi Xing, Yuqiang Xie, Zihao Zhu, Yajing Sun
Most existing systems use a simple classifier to determine answerability implicitly, without explicitly modeling the mutual interaction between the question and the passage, which leads to poor performance on unanswerable questions.
no code implementations • SEMEVAL 2021 • Yuqiang Xie, Luxi Xing, Wei Peng, Yue Hu
This paper introduces our systems for all three subtasks of SemEval-2021 Task 4: Reading Comprehension of Abstract Meaning.
no code implementations • COLING 2020 • Wei Peng, Yue Hu, Luxi Xing, Yuqiang Xie, Jing Yu, Yajing Sun, Xiangpeng Wei
We propose a novel Bi-directional Cognitive Knowledge Framework (BCKF) for reading comprehension from the perspective of complementary learning systems theory.
no code implementations • EMNLP 2020 • Xiangpeng Wei, Heng Yu, Yue Hu, Rongxiang Weng, Luxi Xing, Weihua Luo
As a sequence-to-sequence generation task, neural machine translation (NMT) naturally contains intrinsic uncertainty, where a single sentence in one language has multiple valid counterparts in the other.
no code implementations • ICLR 2021 • Xiangpeng Wei, Rongxiang Weng, Yue Hu, Luxi Xing, Heng Yu, Weihua Luo
Recent studies have demonstrated the overwhelming advantage of cross-lingual pre-trained models (PTMs), such as multilingual BERT and XLM, on cross-lingual NLP tasks.
no code implementations • SEMEVAL 2020 • Luxi Xing, Yuqiang Xie, Yue Hu, Wei Peng
This paper introduces our systems for the first two subtasks of SemEval-2020 Task 4: Commonsense Validation and Explanation.
no code implementations • CONLL 2019 • Xiangpeng Wei, Yue Hu, Luxi Xing, Li Gao
In this paper, we alleviate the local optimality of back-translation by learning a policy (an encoder-decoder defined by its parameters) with future rewards under a reinforcement learning framework, aiming to optimize global word predictions for unsupervised neural machine translation.