1 code implementation • 25 Apr 2022 • Yunchang Zhu, Liang Pang, Yanyan Lan, Huawei Shen, Xueqi Cheng
Ideally, if a pseudo-relevance feedback (PRF) model can distinguish between irrelevant and relevant information in the feedback, the more feedback documents there are, the better the revised query will be.
no code implementations • 6 Apr 2022 • Shicheng Xu, Liang Pang, Huawei Shen, Xueqi Cheng
This is because end-to-end supervised learning on a task-specific dataset makes the model overemphasize data sample bias and task-specific signals instead of the essential matching signals, which ruins the model's generalization to different tasks.
no code implementations • NeurIPS 2021 • Ruibin Xiong, Yimeng Chen, Liang Pang, Xueqi Cheng, Yanyan Lan
Ensemble-based debiasing methods have been shown effective in mitigating the reliance of classifiers on specific dataset bias, by exploiting the output of a bias-only model to adjust the learning target.
1 code implementation • EMNLP 2021 • Fei Xiao, Liang Pang, Yanyan Lan, Yan Wang, Huawei Shen, Xueqi Cheng
The proposed transductive learning approach is general and effective for the task of unsupervised style transfer, and we will apply it to the other two typical methods in the future.
1 code implementation • EMNLP 2021 • Yunchang Zhu, Liang Pang, Yanyan Lan, Huawei Shen, Xueqi Cheng
Information seeking is an essential step for open-domain question answering to efficiently gather evidence from a large corpus.
Ranked #2 on Question Answering on HotpotQA
no code implementations • 16 Aug 2021 • Lijuan Chen, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
We further extend these constraints to the semantic settings, which are shown to be better satisfied by all the deep text matching models.
no code implementations • 12 Aug 2021 • Lin Bo, Liang Pang, Gang Wang, Jun Xu, Xiuqiang He, Ji-Rong Wen
Experimental results based on three publicly available benchmarks showed that in both implementations, Pre-Rank outperformed the underlying ranking models and achieved state-of-the-art performance.
1 code implementation • 2 Apr 2021 • Changying Hao, Liang Pang, Yanyan Lan, Yan Wang, Jiafeng Guo, Xueqi Cheng
In the sketch stage, a skeleton is extracted from the original ending by removing words that conflict with the counterfactual condition.
1 code implementation • 16 Jan 2021 • Liang Pang, Yanyan Lan, Xueqi Cheng
However, these models designed for short texts cannot well address the long-form text matching problem: many contexts in long-form texts cannot be directly aligned with each other, and it is difficult for existing models to capture the key matching signals from such noisy data.
1 code implementation • COLING 2020 • Bin Jiang, Wanyue Zhou, Jingxu Yang, Chao Yang, Shihan Wang, Liang Pang
However, generating personalized responses is still a challenging task, since predefined persona information is often insufficiently leveraged.
no code implementations • COLING 2020 • Bin Jiang, Jing Hou, Wanyue Zhou, Chao Yang, Shihan Wang, Liang Pang
Aspect-based sentiment analysis (ABSA) aims to determine the sentiment polarity of each specific aspect in a given sentence.
1 code implementation • CVPR 2021 • Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
Following this definition, a new dataset named TRANCE is constructed on the basis of CLEVR, including three levels of settings, i.e., Basic (single-step transformation), Event (multi-step transformation), and View (multi-step transformation with variant views).
1 code implementation • EMNLP 2020 • Weijie Yu, Chen Xu, Jun Xu, Liang Pang, Xiaopeng Gao, Xiaozhao Wang, Ji-Rong Wen
Four popular text matching methods have been exploited in the paper.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Wanqing Cui, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
This paper proposes a novel approach to learn commonsense from images, instead of limited raw texts or costly constructed knowledge bases, for the commonsense reasoning problem in NLP.
no code implementations • 27 Sep 2020 • Hainan Zhang, Yanyan Lan, Liang Pang, Hongshen Chen, Zhuoye Ding, Dawei Yin
Therefore, an ideal dialogue generation model should be able to capture the topic information of each context, detect the relevant context, and produce appropriate responses accordingly.
no code implementations • 13 Aug 2020 • Changying Hao, Liang Pang, Yanyan Lan, Fei Sun, Jiafeng Guo, Xueqi Cheng
To tackle this problem, we propose a Ranking Enhanced Dialogue generation framework in this paper.
no code implementations • 1 Jun 2020 • Linfang Hou, Liang Pang, Xin Hong, Yanyan Lan, Zhi-Ming Ma, Dawei Yin
Robust Reinforcement Learning aims to find the optimal policy with some extent of robustness to environmental dynamics.
1 code implementation • 22 May 2020 • Yunchang Zhu, Liang Pang, Yanyan Lan, Xueqi Cheng
To fill this gap, we switch to a ranking perspective that sorts the hypotheses in order of their plausibilities.
2 code implementations • 12 Dec 2019 • Liang Pang, Jun Xu, Qingyao Ai, Yanyan Lan, Xueqi Cheng, Ji-Rong Wen
In learning-to-rank for information retrieval, a ranking model is automatically learned from the data and then utilized to rank the sets of retrieved documents.
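The paper surveys ranking models broadly; as a hedged illustration of how such a model can be "automatically learned from the data", here is a RankNet-style pairwise logistic loss (the function name and example scores are my own, not taken from the paper):

```python
import numpy as np

def pairwise_rank_loss(score_pos, score_neg):
    """RankNet-style pairwise logistic loss: the loss grows whenever a
    less relevant document is scored above a more relevant one, so
    minimizing it over document pairs trains the ranking model."""
    return np.log1p(np.exp(-(score_pos - score_neg)))

# The loss shrinks as the relevant document pulls ahead of the
# irrelevant one, and grows when the order is inverted.
good = pairwise_rank_loss(5.0, 0.0)   # correct order, wide margin
tied = pairwise_rank_loss(0.0, 0.0)   # no preference yet
bad = pairwise_rank_loss(0.0, 5.0)    # inverted order
```

A gradient-based learner would minimize this loss summed over labeled document pairs for each query.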
2 code implementations • ACL 2019 • Hainan Zhang, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
Then, the self-attention mechanism is utilized to update both the context and masked response representation.
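As a toy illustration of such an update (the projection-free simplification, names, and shapes are my assumptions, not the paper's architecture), a self-attention pass rewrites each position as a weighted mixture of all positions:

```python
import numpy as np

def self_attention(X):
    """Plain single-head self-attention with no learned projections:
    each position is updated to a softmax-weighted mixture of all
    positions, so every representation absorbs sequence-wide context."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                    # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ X                               # updated representations

rng = np.random.default_rng(0)
context = rng.standard_normal((10, 16))   # 10 tokens, 16-dim states
updated = self_attention(context)         # same shape as the input
```

In the paper's setting, the same kind of update would be applied to both the context and the masked response representations.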
no code implementations • 16 Mar 2019 • Jiafeng Guo, Yixing Fan, Liang Pang, Liu Yang, Qingyao Ai, Hamed Zamani, Chen Wu, W. Bruce Croft, Xueqi Cheng
Ranking models lie at the heart of research on information retrieval (IR).
no code implementations • 12 Jan 2019 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Lixin Su, Xueqi Cheng
However, the performance of such models is not as good as that on the RC task.
no code implementations • 18 Dec 2018 • Peng Peng, Liang Pang, Yufeng Yuan, Chao Gao
We show in the experiments that Pommerman is a perfect environment for studying continual learning, and the agent can improve its performance by continually learning new skills without forgetting the old ones.
1 code implementation • 22 Nov 2017 • Liang Pang, Yanyan Lan, Jun Xu, Jiafeng Guo, Xueqi Cheng
The main idea is to represent the weight matrix of the locally connected layer as the product of the kernel and the smoother, where the kernel is shared over different local receptive fields, and the smoother is for determining the importance and relations of different local receptive fields.
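A minimal numpy sketch of this factorization, with toy shapes and names of my own choosing (not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

n_fields = 6      # number of local receptive fields
field_dim = 4     # input size of each receptive field

# Shared kernel: one set of parameters reused by every receptive field.
kernel = rng.standard_normal(field_dim)

# Smoother: one weight per receptive field, encoding its importance
# relative to the other fields.
smoother = rng.standard_normal(n_fields)

# Weight matrix of the locally connected layer as the product of
# kernel and smoother: W[i] = smoother[i] * kernel for field i.
W = np.outer(smoother, kernel)                  # shape (n_fields, field_dim)

x = rng.standard_normal((n_fields, field_dim))  # one input per field
out = np.einsum("ij,ij->i", W, x)               # per-field local responses
```

The point of the factorization is parameter sharing: instead of `n_fields * field_dim` free weights, the layer learns `field_dim + n_fields` of them.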
2 code implementations • CIKM 2017 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Jingfang Xu, Xueqi Cheng
This paper concerns a deep learning approach to relevance ranking in information retrieval (IR).
no code implementations • 24 Jul 2017 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng
Therefore, it is necessary to identify the difference between automatically learned features by deep IR models and hand-crafted features used in traditional learning to rank approaches.
1 code implementation • 23 Jul 2017 • Yixing Fan, Liang Pang, Jianpeng Hou, Jiafeng Guo, Yanyan Lan, Xueqi Cheng
In recent years, deep neural models have been widely adopted for text matching tasks, such as question answering and information retrieval, showing improved performance as compared with previous methods.
1 code implementation • 15 Jun 2016 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xueqi Cheng
Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it.
1 code implementation • 15 Apr 2016 • Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, Xueqi Cheng
In this paper, we propose to view the generation of the global interaction between two texts as a recursive process, i.e., the interaction of two texts at each position is a composition of the interactions between their prefixes as well as the word-level interaction at the current position.
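A toy sketch of that recursion, using a simple additive tanh cell in place of the model's spatial RNN cell (the function name, the inclusion-exclusion style update, and the example values are my assumptions):

```python
import numpy as np

def recursive_interaction(s):
    """Compose a global interaction map h from word-level similarities
    s (len1 x len2). Each position aggregates the interactions of its
    left, top, and diagonal prefixes plus the current word-pair signal;
    a toy additive stand-in for a learned spatial RNN cell."""
    n, m = s.shape
    h = np.zeros((n + 1, m + 1))          # padded with an empty-prefix border
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # prefix interactions (inclusion-exclusion over left/top)
            # plus the word-level interaction at the current position
            h[i, j] = np.tanh(h[i - 1, j] + h[i, j - 1]
                              - h[i - 1, j - 1] + s[i - 1, j - 1])
    return h[1:, 1:]

sim = np.array([[1.0, 0.2],
                [0.1, 0.9]])              # toy word-level similarities
h = recursive_interaction(sim)
# h[-1, -1] carries the interaction of the two full texts
```

The bottom-right cell accumulates signal from every prefix pair, which is what makes it usable as the global matching representation.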
7 code implementations • 20 Feb 2016 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, Xueqi Cheng
An effective way is to extract meaningful matching patterns from words, phrases, and sentences to produce the matching score.
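As a hedged sketch of one common first step in this family of models, a word-by-word matching matrix can be built from word embeddings; cosine similarity is used here for illustration, and the shapes and interaction function are my assumptions rather than necessarily the paper's choice:

```python
import numpy as np

def matching_matrix(emb1, emb2):
    """Word-by-word cosine-similarity matrix between two texts. The
    resulting grid can serve as the 'image' on which convolutional
    layers pick up matching patterns at the word, phrase, and
    sentence level."""
    a = emb1 / np.linalg.norm(emb1, axis=1, keepdims=True)
    b = emb2 / np.linalg.norm(emb2, axis=1, keepdims=True)
    return a @ b.T                     # shape (len1, len2)

rng = np.random.default_rng(0)
t1 = rng.standard_normal((5, 8))       # 5 words, 8-dim embeddings
t2 = rng.standard_normal((7, 8))       # 7 words
M = matching_matrix(t1, t2)            # shape (5, 7)
```

A scoring network would then reduce `M` (e.g. with convolutions and pooling) to a single matching score.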
1 code implementation • 26 Nov 2015 • Shengxian Wan, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, Xueqi Cheng
Our model has several advantages: (1) by using Bi-LSTM, rich context of the whole sentence is leveraged to capture the contextualized local information in each positional sentence representation; (2) by matching with multiple positional sentence representations, the model can flexibly aggregate different important contextualized local information in a sentence to support the matching; (3) experiments on different tasks such as question answering and sentence completion demonstrate the superiority of our model.
no code implementations • 27 Aug 2014 • Yuyu Zhang, Liang Pang, Lei Shi, Bin Wang
This paper describes the solution of Bazinga Team for Tmall Recommendation Prize 2014.
no code implementations • 29 Nov 2013 • Xudong Liu, Bing Xu, Yuyu Zhang, Qiang Yan, Liang Pang, Qiang Li, Hanxiao Sun, Bin Wang
The ICDM Challenge 2013 is to apply machine learning to the problem of hotel ranking, aiming to maximize purchases according to given hotel characteristics, location attractiveness of hotels, user's aggregated purchase history and competitive online travel agency information for each potential hotel choice.