1 code implementation • ACL 2021 • Xuancheng Huang, Jingfang Xu, Maosong Sun, Yang Liu
Although directly finetuning pretrained models on MSG tasks by concatenating multiple sources into a single long sequence is regarded as a simple way to transfer pretrained models to MSG tasks, we conjecture that direct finetuning leads to catastrophic forgetting and that relying solely on pretrained self-attention layers to capture cross-source information is insufficient.
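The concatenation baseline described above can be sketched generically (a hypothetical illustration, not the authors' code): the multiple sources are joined into one long input sequence with separator tokens before being fed to a pretrained encoder-decoder. The separator string and function name below are assumptions for illustration.

```python
# Minimal sketch of the concatenation baseline for multi-source
# generation (MSG): join several source texts into a single long
# sequence with separator tokens, then finetune a pretrained model
# on the result. The "<sep>" token is illustrative, not the paper's.

SEP = "<sep>"

def concatenate_sources(sources):
    """Join multiple source texts into one input sequence."""
    return f" {SEP} ".join(s.strip() for s in sources)

inputs = concatenate_sources([
    "source one : the cat sat on the mat .",
    "source two : a cat is sitting on the mat .",
])
print(inputs)
```

The conjectured weakness of this baseline is that the pretrained self-attention layers were never trained to distinguish which tokens come from which source, so cross-source interactions must be learned from scratch during finetuning.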
1 code implementation • 16 Jan 2021 • Bingning Wang, Ting Yao, WeiPeng Chen, Jingfang Xu, Xiaochuan Wang
In compositional question answering, a system must assemble several pieces of supporting evidence from the document to generate the final answer, which is more difficult than sentence-level or phrase-level QA.
1 code implementation • 14 Jul 2020 • Xuancheng Huang, Jiacheng Zhang, Zhixing Tan, Derek F. Wong, Huanbo Luan, Jingfang Xu, Maosong Sun, Yang Liu
System combination is an important technique for combining the hypotheses of different machine translation systems to improve translation performance.
1 code implementation • 22 Jun 2020 • Bingning Wang, Ting Yao, Qi Zhang, Jingfang Xu, Xiaochuan Wang
The release of ReCO consists of 300k questions, which to our knowledge makes it the largest dataset in Chinese reading comprehension.
1 code implementation • ACL 2020 • Yilin Niu, Fangkai Jiao, Mantong Zhou, Ting Yao, Jingfang Xu, Minlie Huang
Neural models have achieved great success on machine reading comprehension (MRC); many of them consist of two components: an evidence extractor and an answer predictor.
no code implementations • 26 Nov 2019 • Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Yang Liu
The lack of alignment in NMT models leads to three problems: it is hard to (1) interpret the translation process, (2) impose lexical constraints, and (3) impose structural constraints.
2 code implementations • IJCNLP 2019 • Xuancheng Huang, Yang Liu, Huanbo Luan, Jingfang Xu, Maosong Sun
To better identify translation errors, our method learns the representations of source sentences and system outputs in an interactive way.
no code implementations • ACL 2019 • Yining Wang, Long Zhou, Jiajun Zhang, Feifei Zhai, Jingfang Xu, Chengqing Zong
We verify our methods on various translation scenarios, including one-to-many, many-to-many and zero-shot.
1 code implementation • ACL 2017 • Jiacheng Zhang, Yang Liu, Huanbo Luan, Jingfang Xu, Maosong Sun
Although neural machine translation has made significant progress recently, how to integrate multiple overlapping, arbitrary prior knowledge sources remains a challenge.
3 code implementations • EMNLP 2018 • Jiacheng Zhang, Huanbo Luan, Maosong Sun, Feifei Zhai, Jingfang Xu, Min Zhang, Yang Liu
Although the Transformer translation model (Vaswani et al., 2017) has achieved state-of-the-art performance in a variety of translation tasks, how to use document-level context to deal with discourse phenomena that are problematic for the Transformer remains a challenge.
no code implementations • EMNLP 2018 • Yining Wang, Jiajun Zhang, Feifei Zhai, Jingfang Xu, Chengqing Zong
However, previous studies show that one-to-many translation based on this framework cannot perform on par with the individually trained models.
2 code implementations • 26th ACM International Conference on Information and Knowledge Management (CIKM '17) 2017 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Jingfang Xu, Xueqi Cheng
This paper concerns a deep learning approach to relevance ranking in information retrieval (IR).
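The relevance-ranking setting above can be illustrated with a toy sketch (not the paper's model, which learns deep query-document matching representations): represent the query and each document as vectors, score each pair by cosine similarity, and sort documents by score. The vectors here are stand-ins for learned embeddings.

```python
# Toy sketch of relevance ranking in IR: score each (query, document)
# pair by the cosine similarity of their vector representations, then
# return documents ordered by descending relevance. Real deep ranking
# models learn these representations; the vectors below are fixed
# stand-ins for illustration only.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def rank(query_vec, doc_vecs):
    """Return document indices sorted by descending relevance score."""
    scores = [cosine(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: -scores[i])

# The document pointing closest to the query direction ranks first.
print(rank([1.0, 0.0], [[0.0, 1.0], [0.9, 0.1], [0.5, 0.5]]))  # → [1, 2, 0]
```

Deep ranking models replace the fixed vectors and the cosine function with learned networks, but the interface — score pairs, then sort — is the same.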
1 code implementation • 9 Jun 2017 • Qiao Qian, Minlie Huang, Haizhou Zhao, Jingfang Xu, Xiaoyan Zhu
Endowing a chatbot with personality or an identity is quite challenging but critical to deliver more realistic and natural conversations.