no code implementations • 25 Aug 2020 • Lixin Su, Jiafeng Guo, Ruqing Zhang, Yixing Fan, Yanyan Lan, Xue-Qi Cheng
To tackle this challenge, in this work we introduce the Continual Domain Adaptation (CDA) task for machine reading comprehension (MRC).
1 code implementation • 25 Aug 2020 • Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Xue-Qi Cheng
To address this new task, we propose a novel Contrastive Generation model, CtrsGen for short, which generates the intent description by contrasting the relevant documents with the irrelevant documents given a query.
no code implementations • 13 Aug 2020 • Changying Hao, Liang Pang, Yanyan Lan, Fei Sun, Jiafeng Guo, Xue-Qi Cheng
To tackle this problem, in this paper we propose a Ranking-Enhanced Dialogue generation framework.
no code implementations • 27 Jul 2020 • Bingbing Xu, Jun-Jie Huang, Liang Hou, Hua-Wei Shen, Jinhua Gao, Xue-Qi Cheng
Graph neural networks (GNNs) achieve remarkable success in graph-based semi-supervised node classification, leveraging the information from neighboring nodes to improve the representation learning of the target node.
1 code implementation • 27 Jul 2020 • Bingbing Xu, Hua-Wei Shen, Qi Cao, Keting Cen, Xue-Qi Cheng
Graph convolutional networks gain remarkable success in semi-supervised learning on graph structured data.
2 code implementations • 19 Jul 2020 • Shuchang Tao, Hua-Wei Shen, Qi Cao, Liang Hou, Xue-Qi Cheng
Despite achieving strong performance on the semi-supervised node classification task, graph neural networks (GNNs) are vulnerable to adversarial attacks, similar to other deep learning models.
no code implementations • ICML 2020 • Jianing Li, Yanyan Lan, Jiafeng Guo, Xue-Qi Cheng
We prove that under certain conditions, a linear combination of quality and diversity constitutes a divergence metric between the generated distribution and the real distribution.
no code implementations • ACL 2020 • Saiping Guan, Xiaolong Jin, Jiafeng Guo, Yuanzhuo Wang, Xue-Qi Cheng
It aims to infer an unknown element in a partial fact consisting of the primary triple coupled with any number of its auxiliary description(s).
no code implementations • 21 Jun 2020 • Zizhen Wang, Yixing Fan, Jiafeng Guo, Liu Yang, Ruqing Zhang, Yanyan Lan, Xue-Qi Cheng, Hui Jiang, Xiaozhao Wang
However, it has long been a challenge to properly measure the similarity between two questions due to the inherent variation of natural language, i.e., there can be different ways to ask the same question, and different questions may share similar expressions.
1 code implementation • 22 May 2020 • Yunchang Zhu, Liang Pang, Yanyan Lan, Xue-Qi Cheng
To fill this gap, we switch to a ranking perspective that sorts the hypotheses in order of their plausibility.
1 code implementation • CIKM 2019 • Bing-Jie Sun, Hua-Wei Shen, Jinhua Gao, Wentao Ouyang, Xue-Qi Cheng
Latent factor models for community detection aim to find a distributed and generally low-dimensional representation, or coding, that captures the structural regularity of the network and reflects the community membership of nodes.
2 code implementations • 12 Dec 2019 • Liang Pang, Jun Xu, Qingyao Ai, Yanyan Lan, Xue-Qi Cheng, Ji-Rong Wen
In learning-to-rank for information retrieval, a ranking model is automatically learned from the data and then utilized to rank the sets of retrieved documents.
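The learning-to-rank setup described above can be illustrated with a minimal pairwise objective; the hinge loss below is a generic stand-in for illustration, not the specific model studied in the paper:

```python
def pairwise_hinge_loss(scores, labels, margin=1.0):
    """Hinge loss over all document pairs where one is more relevant.

    scores: model scores for the documents retrieved for one query.
    labels: graded relevance judgments for the same documents.
    """
    loss, pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if labels[i] > labels[j]:  # doc i should rank above doc j
                loss += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)

# A correctly ordered list incurs no loss once the margin is satisfied.
print(pairwise_hinge_loss([3.0, 1.0], [1, 0]))  # 0.0
print(pairwise_hinge_loss([1.0, 3.0], [1, 0]))  # 3.0: mis-ordered pair penalized
```

Minimizing such a loss over training queries yields the ranking model that is then applied to rank the retrieved documents.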
no code implementations • IJCNLP 2019 • Haoran Yan, Xiaolong Jin, Xiangbin Meng, Jiafeng Guo, Xue-Qi Cheng
Syntactic relations are broadly used in many NLP tasks.
no code implementations • 4 Sep 2019 • Mahammad Humayoo, Xue-Qi Cheng
The reason stems from the fact that the ordered regularization can reject irrelevant variables and yield an accurate estimate of the parameters.
no code implementations • 9 Jul 2019 • Hainan Zhang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xue-Qi Cheng
Chinese input recommendation plays an important role in reducing the human effort of typing Chinese words, especially in mobile applications.
2 code implementations • ACL 2019 • Hainan Zhang, Yanyan Lan, Liang Pang, Jiafeng Guo, Xue-Qi Cheng
Then, the self-attention mechanism is utilized to update both the context and masked response representation.
1 code implementation • 26 Jun 2019 • Junjie Huang, Hua-Wei Shen, Liang Hou, Xue-Qi Cheng
We evaluate the proposed SiGAT method by applying it to the signed link prediction task.
Ranked #1 on Link Sign Prediction on Slashdot
1 code implementation • 21 Jun 2019 • Qi Cao, Hua-Wei Shen, Jinhua Gao, Bingzheng Wei, Xue-Qi Cheng
In this paper, we consider the problem of network-aware popularity prediction, leveraging both early adopters and the structure of the social network.
no code implementations • 20 Jun 2019 • Keting Cen, Hua-Wei Shen, Jinhua Gao, Qi Cao, Bingbing Xu, Xue-Qi Cheng
In this paper, we address attributed network embedding from a novel perspective, i.e., learning a node context representation for each node by modeling its attributed local subgraph.
1 code implementation • ACL 2019 • Jinhua Zhu, Fei Gao, Lijun Wu, Yingce Xia, Tao Qin, Wengang Zhou, Xue-Qi Cheng, Tie-Yan Liu
While data augmentation is an important trick to boost the accuracy of deep learning methods in computer vision tasks, its study in natural language tasks is still very limited.
no code implementations • 24 May 2019 • Lixin Su, Jiafeng Guo, Yixing Fan, Yanyan Lan, Xue-Qi Cheng
Web question answering (QA) has become an indispensable component in modern search systems, which can significantly improve users' search experience by providing direct answers to their information needs.
no code implementations • 24 May 2019 • Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Xue-Qi Cheng
To generate a sound outline, an ideal OG model should be able to capture three levels of coherence, namely the coherence between context paragraphs, that between a section and its heading, and that between context headings.
1 code implementation • 24 May 2019 • Jiafeng Guo, Yixing Fan, Xiang Ji, Xue-Qi Cheng
Text matching is the core problem in many natural language processing (NLP) tasks, such as information retrieval, question answering, and conversation.
1 code implementation • ICLR 2019 • Bingbing Xu, Hua-Wei Shen, Qi Cao, Yunqi Qiu, Xue-Qi Cheng
We present the graph wavelet neural network (GWNN), a novel graph convolutional neural network (CNN), which leverages the graph wavelet transform to address the shortcomings of previous spectral graph CNN methods that depend on the graph Fourier transform.
Ranked #51 on Node Classification on Pubmed
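The graph Fourier transform that GWNN seeks to replace can be sketched in plain NumPy on a toy graph (illustration only; the paper's point is that the wavelet alternative is sparser and more localized):

```python
import numpy as np

# Toy 4-node path graph given by its adjacency matrix.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                            # unnormalized graph Laplacian

# Graph Fourier basis: eigenvectors of the Laplacian.
eigvals, U = np.linalg.eigh(L)

x = np.array([1.0, 2.0, 3.0, 4.0])   # a signal on the nodes
x_hat = U.T @ x                      # graph Fourier transform
x_rec = U @ x_hat                    # inverse transform
print(np.allclose(x_rec, x))         # True: the basis is orthonormal
```

Spectral graph CNNs filter `x_hat` before transforming back; the eigendecomposition above is exactly the expensive, non-localized step the wavelet transform avoids.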
no code implementations • 16 Mar 2019 • Jiafeng Guo, Yixing Fan, Liang Pang, Liu Yang, Qingyao Ai, Hamed Zamani, Chen Wu, W. Bruce Croft, Xue-Qi Cheng
Ranking models lie at the heart of research on information retrieval (IR).
no code implementations • 12 Jan 2019 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Lixin Su, Xue-Qi Cheng
However, the performance of such models falls short of that achieved on the RC task.
no code implementations • 30 Oct 2018 • Mahammad Humayoo, Xue-Qi Cheng
One reason for the instability of off-policy learning is the discrepancy between the target ($\pi$) and behavior ($b$) policy distributions.
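This discrepancy is conventionally measured by per-step importance sampling ratios, sketched below (a standard off-policy construction, shown for illustration rather than as the paper's specific method):

```python
import numpy as np

def importance_weights(pi_probs, b_probs):
    """Per-step importance sampling ratios pi(a|s) / b(a|s).

    Large ratios indicate a big target/behavior mismatch, which is
    exactly the source of instability discussed above.
    """
    return np.asarray(pi_probs) / np.asarray(b_probs)

# Probabilities the two policies assign to the actions actually taken.
rho = importance_weights([0.9, 0.5], [0.3, 0.5])
print(rho)          # [3. 1.]
print(rho.prod())   # trajectory-level ratio: 3.0
```

When the product of these ratios over a trajectory explodes or vanishes, value estimates become high-variance, motivating regularization of the discrepancy.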
1 code implementation • EMNLP 2018 • Shaobo Liu, Rui Cheng, Xiaoming Yu, Xue-Qi Cheng
Meanwhile, dynamic memory network (DMN) has demonstrated promising capability in capturing contextual information and has been applied successfully to various tasks.
no code implementations • ACL 2018 • Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, Xue-Qi Cheng
In conversation, a general response (e.g., "I don't know") could correspond to a large variety of input utterances.
no code implementations • ACL 2018 • Yue Zhao, Xiaolong Jin, Yuanzhuo Wang, Xue-Qi Cheng
Document-level information is very important for event detection, even at the sentence level.
no code implementations • ACL 2018 • Hainan Zhang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xue-Qi Cheng
In this paper, we propose two tailored optimization criteria for Seq2Seq in different conversation scenarios, i.e., maximizing the generated likelihood for the specific-requirement scenario, and the conditional value-at-risk for the diverse-requirement scenario.
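How a conditional value-at-risk (CVaR) criterion can be computed is sketched below on plain numbers; this is the generic empirical CVaR (mean of the worst alpha-fraction of outcomes), and the paper's exact objective may differ:

```python
import numpy as np

def cvar(values, alpha=0.25):
    """Empirical conditional value-at-risk: the mean of the worst
    alpha-fraction of outcomes (here, low scores are the bad tail)."""
    v = np.sort(np.asarray(values, dtype=float))   # ascending: worst first
    k = max(1, int(np.ceil(alpha * len(v))))       # size of the tail
    return v[:k].mean()

scores = [0.9, 0.1, 0.8, 0.7]
print(cvar(scores, alpha=0.25))  # 0.1  (mean of the single worst score)
print(cvar(scores, alpha=1.0))   # 0.625 (alpha=1 recovers the plain mean)
```

Optimizing the tail mean rather than the average pushes the model to avoid its worst responses, which is what makes a CVaR-style criterion suit the diverse-requirement scenario.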
no code implementations • NAACL 2018 • Fei Gao, Lijun Wu, Li Zhao, Tao Qin, Xue-Qi Cheng, Tie-Yan Liu
Recurrent neural networks have achieved state-of-the-art results in many artificial intelligence tasks, such as language modeling, neural machine translation, speech recognition and so on.
2 code implementations • SIGIR 2018 • Yixing Fan, Jiafeng Guo, Yanyan Lan, Jun Xu, ChengXiang Zhai, Xue-Qi Cheng
The local matching layer focuses on producing a set of local relevance signals by modeling the semantic matching between a query and each passage of a document.
1 code implementation • 29 Apr 2018 • Yadi Lao, Jun Xu, Yanyan Lan, Jiafeng Guo, Sheng Gao, Xue-Qi Cheng
Inspired by the success and methodology of AlphaGo Zero, MM-Tag formalizes the problem of sequence tagging with a Monte Carlo tree search (MCTS) enhanced Markov decision process (MDP) model, in which the time steps correspond to the positions of words in a sentence from left to right, and each action corresponds to assigning a tag to a word.
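The MDP formulation can be sketched as a minimal left-to-right tagging loop. The toy scoring function below stands in for the learned policy/value networks (an assumption for illustration), and the MCTS search itself is omitted, leaving a greedy one-step decode:

```python
# State: the sentence plus the tags assigned so far.
# Action at each time step: assign a tag to the next word.

TAGS = ["NOUN", "VERB", "OTHER"]

def score(state, tag):
    """Toy stand-in for a policy network's action score (hypothetical)."""
    word = state["words"][len(state["tags"])]   # word at the current position
    if word.endswith("s") and tag == "NOUN":
        return 2.0
    return 1.0 if tag == "NOUN" else 0.5

def tag_sentence(words):
    state = {"words": words, "tags": []}
    while len(state["tags"]) < len(words):      # one MDP step per position
        best = max(TAGS, key=lambda t: score(state, t))
        state["tags"].append(best)              # take the greedy action
    return state["tags"]

print(tag_sentence(["dogs", "bark"]))  # ['NOUN', 'NOUN'] under the toy scorer
```

In MM-Tag proper, each action would be chosen by an MCTS lookahead over future tagging decisions rather than by this one-step argmax.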
no code implementations • 22 Apr 2018 • Guoxin Cui, Jun Xu, Wei Zeng, Yanyan Lan, Jiafeng Guo, Xue-Qi Cheng
One of the most significant bottlenecks in training large-scale machine learning models on a parameter server (PS) is the communication overhead, since model gradients must be frequently exchanged between workers and servers during the training iterations.
1 code implementation • 22 Nov 2017 • Liang Pang, Yanyan Lan, Jun Xu, Jiafeng Guo, Xue-Qi Cheng
The main idea is to represent the weight matrix of the locally connected layer as the product of a kernel and a smoother, where the kernel is shared over different local receptive fields, and the smoother determines the importance of and relations among the different local receptive fields.
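One plausible reading of this factorization (an illustrative assumption, not the paper's exact parameterization) is a shared kernel scaled by a per-field smoother weight, which shrinks the parameter count from `n_fields * k` to `n_fields + k`:

```python
import numpy as np

rng = np.random.default_rng(0)
n_fields, k = 5, 3                         # local receptive fields, kernel size

kernel = rng.normal(size=k)                # shared across receptive fields
smoother = rng.normal(size=(n_fields, 1))  # per-field importance weights

# Weight matrix of the locally connected layer: outer-product structure.
W = smoother * kernel                      # shape (n_fields, k)

print(W.shape)                                      # (5, 3)
print(np.allclose(W[2], smoother[2, 0] * kernel))   # True: each row is a
                                                    # scaled copy of the kernel
```

The shared kernel gives the translation-invariance of a convolution, while the smoother restores per-location flexibility that a plain convolution lacks.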
no code implementations • 29 Oct 2017 • Denghui Zhang, Pengshan Cai, Yantao Jia, Manling Li, Yuanzhuo Wang, Xue-Qi Cheng
Fine-grained entity typing aims to assign types arranged in a hierarchical structure to entity mentions in free text.
2 code implementations • CIKM 2017 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Jingfang Xu, Xue-Qi Cheng
This paper concerns a deep learning approach to relevance ranking in information retrieval (IR).
no code implementations • 24 Jul 2017 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xue-Qi Cheng
Therefore, it is necessary to identify the difference between automatically learned features by deep IR models and hand-crafted features used in traditional learning to rank approaches.
1 code implementation • 23 Jul 2017 • Yixing Fan, Liang Pang, Jianpeng Hou, Jiafeng Guo, Yanyan Lan, Xue-Qi Cheng
In recent years, deep neural models have been widely adopted for text matching tasks, such as question answering and information retrieval, showing improved performance compared with previous methods.
no code implementations • 18 Jul 2017 • Ruqing Zhang, Jiafeng Guo, Yanyan Lan, Jun Xu, Xue-Qi Cheng
Representing texts as fixed-length vectors is central to many language processing tasks.
1 code implementation • 30 Mar 2017 • Denghui Zhang, Manling Li, Yantao Jia, Yuanzhuo Wang, Xue-Qi Cheng
Knowledge graph embedding aims to embed entities and relations of knowledge graphs into low-dimensional vector spaces.
Ranked #1 on Link Prediction on WN18 (filtered)
no code implementations • 14 Jan 2017 • Yongqing Wang, Shenghua Liu, Hua-Wei Shen, Xue-Qi Cheng
Indeed, in marked temporal dynamics, the time and the mark of the next event are highly dependent on each other, requiring a method that could simultaneously predict both of them.
1 code implementation • 15 Jun 2016 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Xue-Qi Cheng
Although ad-hoc retrieval can also be formalized as a text matching task, few deep models have been tested on it.
1 code implementation • 15 Apr 2016 • Shengxian Wan, Yanyan Lan, Jun Xu, Jiafeng Guo, Liang Pang, Xue-Qi Cheng
In this paper, we propose to view the generation of the global interaction between two texts as a recursive process: the interaction of the two texts at each position is a composition of the interactions between their prefixes and the word-level interaction at the current position.
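This recursion can be sketched as a dynamic program over positions; a simple mean stands in for the learned composition function (an assumption for illustration):

```python
import numpy as np

def recursive_interaction(s1, s2, sim):
    """Global interaction of two token sequences, built recursively:
    each cell combines the three prefix interactions with the
    word-level interaction at the current position."""
    n, m = len(s1), len(s2)
    h = np.zeros((n + 1, m + 1))          # h[i][j]: interaction of prefixes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            h[i][j] = 0.25 * (h[i-1][j] + h[i][j-1] + h[i-1][j-1]
                              + sim(s1[i-1], s2[j-1]))
    return h[n][m]                        # interaction of the full texts

exact = lambda a, b: 1.0 if a == b else 0.0
print(recursive_interaction(["a", "b"], ["a", "b"], exact) >
      recursive_interaction(["a", "b"], ["c", "d"], exact))  # True
```

In the actual model, the fixed averaging above would be replaced by a learned recurrent composition over the three prefix states and the current word-pair signal.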
no code implementations • 24 Mar 2016 • Fei Sun, Jiafeng Guo, Yanyan Lan, Jun Xu, Xue-Qi Cheng
Recent work has shown that distributed word representations are good at capturing linguistic regularities in language.
7 code implementations • 20 Feb 2016 • Liang Pang, Yanyan Lan, Jiafeng Guo, Jun Xu, Shengxian Wan, Xue-Qi Cheng
An effective way is to extract meaningful matching patterns from words, phrases, and sentences to produce the matching score.
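The word-level matching patterns can be pictured as a similarity "image": entry (i, j) is the similarity between word i of one text and word j of the other. The sketch below builds such a matrix from cosine similarities (an illustrative choice; hierarchical convolutions over the matrix, which would extract phrase- and sentence-level patterns, are omitted):

```python
import numpy as np

def matching_matrix(emb1, emb2):
    """Cosine-similarity matching matrix between two embedded texts.

    emb1: (len1, dim) word embeddings of text one.
    emb2: (len2, dim) word embeddings of text two.
    """
    a = emb1 / np.linalg.norm(emb1, axis=1, keepdims=True)
    b = emb2 / np.linalg.norm(emb2, axis=1, keepdims=True)
    return a @ b.T                        # shape (len1, len2)

rng = np.random.default_rng(0)
m = matching_matrix(rng.normal(size=(4, 8)), rng.normal(size=(5, 8)))
print(m.shape)                            # (4, 5)
print(bool(np.all(np.abs(m) <= 1 + 1e-9)))  # True: cosine lies in [-1, 1]
```

Treating this matrix as an image is what lets standard convolutional machinery produce a matching score from word-, phrase-, and sentence-level patterns.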
no code implementations • 4 Dec 2015 • Yantao Jia, Yuanzhuo Wang, Hailun Lin, Xiaolong Jin, Xue-Qi Cheng
Knowledge graph embedding aims to represent entities and relations in a large-scale knowledge graph as elements in a continuous vector space.
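A translation-style score is the classic instance of this idea, sketched below purely to illustrate the embedding setup (the paper's own model may score triples differently):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE-style plausibility score ||h + r - t||: smaller means the
    triple (head, relation, tail) fits the embedding space better."""
    return np.linalg.norm(h + r - t)

# Toy embeddings where (head, relation, tail) satisfies h + r = t exactly.
h = np.array([1.0, 0.0])
r = np.array([0.0, 1.0])
t = np.array([1.0, 1.0])
print(transe_score(h, r, t))                      # 0.0: a plausible triple
print(transe_score(h, r, np.array([5.0, 5.0])))   # larger: implausible tail
```

Training drives observed triples toward low scores and corrupted triples toward high ones, yielding vectors on which downstream tasks such as link prediction operate.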
1 code implementation • 26 Nov 2015 • Shengxian Wan, Yanyan Lan, Jiafeng Guo, Jun Xu, Liang Pang, Xue-Qi Cheng
Our model has several advantages: (1) by using Bi-LSTM, rich context of the whole sentence is leveraged to capture the contextualized local information in each positional sentence representation; (2) by matching against multiple positional sentence representations, the model can flexibly aggregate the important contextualized local information in a sentence to support the matching; (3) experiments on different tasks, such as question answering and sentence completion, demonstrate the superiority of our model.
no code implementations • 17 Feb 2014 • Suqi Cheng, Hua-Wei Shen, Junming Huang, Wei Chen, Xue-Qi Cheng
Early methods mainly fall into two paradigms, each with its own benefits and drawbacks: (1) greedy algorithms, which select seed nodes one by one, give guaranteed accuracy by accurately approximating influence spread, but at high computational cost; (2) heuristic algorithms, which estimate influence spread using efficient heuristics, have low computational cost but unstable accuracy.
Subjects: Social and Information Networks; Data Structures and Algorithms • ACM classes: F.2.2; D.2.8
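The greedy paradigm can be sketched end to end: estimate spread by Monte Carlo simulation of a diffusion model, then repeatedly add the node with the largest marginal gain. The independent cascade model and the toy graph below are illustrative assumptions:

```python
import random

def simulate_spread(graph, seeds, p=0.1, trials=200, rng=None):
    """Monte Carlo estimate of influence spread under the independent
    cascade model: each newly active node activates each inactive
    neighbor once with probability p."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nb in graph.get(node, []):
                if nb not in active and rng.random() < p:
                    active.add(nb)
                    frontier.append(nb)
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, **kw):
    """Greedy paradigm: repeatedly add the node with the largest
    estimated marginal gain in spread."""
    seeds = set()
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: simulate_spread(graph, seeds | {n}, **kw))
        seeds.add(best)
    return seeds

star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(greedy_seeds(star, 1, p=0.3))  # {0}: the hub has the largest spread
```

The repeated Monte Carlo estimation inside the argmax is exactly the high computational cost attributed to the greedy paradigm, while heuristic methods replace `simulate_spread` with a cheap proxy.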
no code implementations • 26 Sep 2013 • Shuzi Niu, Yanyan Lan, Jiafeng Guo, Xue-Qi Cheng
Traditional rank aggregation methods are deterministic, and can be categorized into explicit and implicit methods depending on whether rank information is explicitly or implicitly utilized.
no code implementations • 19 Dec 2012 • Suqi Cheng, Hua-Wei Shen, Junming Huang, Guoqing Zhang, Xue-Qi Cheng
We point out that the essential reason for this dilemma is a surprising fact: submodularity, a key requirement on the objective function for a greedy algorithm to approximate the optimum, is not guaranteed by all conventional greedy algorithms in the influence maximization literature.
Subjects: Social and Information Networks; Data Structures and Algorithms; Physics and Society • ACM classes: F.2.2; D.2.8