no code implementations • COLING 2022 • Kun Zhang, Yunqi Qiu, Yuanzhuo Wang, Long Bai, Wei Li, Xuhui Jiang, HuaWei Shen, Xueqi Cheng
Complex question generation over knowledge bases (KB) aims to generate natural language questions involving multiple KB relations or functional constraints.
no code implementations • COLING 2022 • Kailin Zhao, Xiaolong Jin, Saiping Guan, Jiafeng Guo, Xueqi Cheng
The meta learner requires good generalization ability so as to adapt quickly to new tasks.
no code implementations • 24 Nov 2023 • Yige Yuan, Bingbing Xu, Liang Hou, Fei Sun, HuaWei Shen, Xueqi Cheng
To address this, we propose a novel energy-based perspective, enhancing the model's perception of target data distributions without requiring access to training data or processes.
no code implementations • 23 Nov 2023 • Shicheng Xu, Danyang Hou, Liang Pang, Jingcheng Deng, Jun Xu, HuaWei Shen, Xueqi Cheng
This invisible relevance bias is prevalent across retrieval models with varying training data and architectures.
no code implementations • 13 Nov 2023 • Junkai Zhou, Liang Pang, HuaWei Shen, Xueqi Cheng
The emergence of large language models (LLMs) further improves the capabilities of open-domain dialogue systems and can generate fluent, coherent, and diverse responses.
no code implementations • 6 Nov 2023 • Yinqiong Cai, Yixing Fan, Keping Bi, Jiafeng Guo, Wei Chen, Ruqing Zhang, Xueqi Cheng
The first-stage retrieval aims to retrieve a subset of candidate documents from a huge collection both effectively and efficiently.
no code implementations • 6 Nov 2023 • Yucan Guo, Zixuan Li, Xiaolong Jin, Yantao Liu, Yutao Zeng, Wenxuan Liu, Xiang Li, Pan Yang, Long Bai, Jiafeng Guo, Xueqi Cheng
Therefore, in this paper, we propose a universal retrieval-augmented code generation framework based on LLMs, called Code4UIE, for IE tasks.
no code implementations • 3 Nov 2023 • Shicheng Xu, Liang Pang, Jiangnan Li, Mo Yu, Fandong Meng, HuaWei Shen, Xueqi Cheng, Jie Zhou
Readers usually only give an abstract and vague description as the query based on their own understanding, summaries, or speculations of the plot, which requires the retrieval model to have a strong ability to estimate the abstract semantic associations between the query and candidate plots.
no code implementations • 22 Oct 2023 • Yantao Liu, Zixuan Li, Xiaolong Jin, Long Bai, Saiping Guan, Jiafeng Guo, Xueqi Cheng
As a common kind of method for this task, semantic parsing-based methods first convert natural language questions to logical forms (e.g., SPARQL queries) and then execute them on knowledge bases to get answers.
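The parse-then-execute pipeline can be sketched with a toy triple store; the parser, KB, and question template below are hypothetical illustrations, not the systems discussed here:

```python
# Toy knowledge base as (subject, relation, object) triples.
KB = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
}

def parse(question):
    """Hypothetical template-based parser: question -> (?x, relation, object)."""
    if question.startswith("What is the capital of "):
        country = question[len("What is the capital of "):].rstrip("?")
        return ("?x", "capital_of", country)
    raise ValueError("unsupported question template")

def execute(pattern, kb):
    """Match the triple pattern against the KB, binding the ?x variable."""
    s, r, o = pattern
    if s == "?x":
        return [ks for ks, kr, ko in kb if kr == r and ko == o]
    return [ko for ks, kr, ko in kb if kr == r and ks == s]

print(execute(parse("What is the capital of France?"), KB))  # ['Paris']
```

Real systems replace the template parser with a learned model emitting full SPARQL, but the two-stage shape is the same.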
1 code implementation • 18 Oct 2023 • Hengran Zhang, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, Xueqi Cheng
We argue that, rather than relevance, for FV we need to focus on the utility that a claim verifier derives from the retrieved evidence.
no code implementations • 18 Oct 2023 • Lulu Yu, Keping Bi, Jiafeng Guo, Xueqi Cheng
The Chinese Academy of Sciences Information Retrieval team (CIR) has participated in the NTCIR-17 ULTRE-2 task.
no code implementations • 16 Oct 2023 • Jingcheng Deng, Liang Pang, HuaWei Shen, Xueqi Cheng
It encodes the text corpus into a latent space, capturing current and future information from both source and target text.
1 code implementation • 14 Oct 2023 • Guoxin Chen, Yongqing Wang, Fangda Guo, Qinglang Guo, Jiangli Shao, HuaWei Shen, Xueqi Cheng
Most existing methods that address out-of-distribution (OOD) generalization for node classification on graphs primarily focus on a specific type of data biases, such as label selection bias or structural bias.
3 code implementations • 9 Oct 2023 • Zezhi Shao, Fei Wang, Yongjun Xu, Wei Wei, Chengqing Yu, Zhao Zhang, Di Yao, Guangyin Jin, Xin Cao, Gao Cong, Christian S. Jensen, Xueqi Cheng
Moreover, based on the proposed BasicTS and rich heterogeneous MTS datasets, we conduct an exhaustive and reproducible performance and efficiency comparison of popular models, providing insights for researchers in selecting and designing MTS forecasting models.
1 code implementation • 6 Oct 2023 • Yu Wang, Tong Zhao, Yuying Zhao, Yunchao Liu, Xueqi Cheng, Neil Shah, Tyler Derr
Despite the widespread belief that low-degree nodes exhibit poorer LP performance, our empirical findings provide nuances to this viewpoint and prompt us to propose a better metric, Topological Concentration (TC), based on the intersection of the local subgraph of each node with the ones of its neighbors.
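One illustrative proxy for such an intersection-based metric (not the paper's exact Topological Concentration formula) is the average Jaccard overlap between a node's neighborhood and those of its neighbors:

```python
def topological_concentration(adj, v):
    """Illustrative proxy for Topological Concentration: average Jaccard
    overlap between v's neighbor set and each neighbor's neighbor set.
    Captures the idea of intersecting local subgraphs only."""
    nv = adj[v]
    if not nv:
        return 0.0
    scores = []
    for u in nv:
        union = nv | adj[u]
        scores.append(len(nv & adj[u]) / len(union) if union else 0.0)
    return sum(scores) / len(scores)

# Triangle: every pair of neighbors is also connected -> high concentration.
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
# Star center: leaves share no other neighbors -> zero concentration.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(topological_concentration(tri, 0) > topological_concentration(star, 0))  # True
```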
1 code implementation • 1 Oct 2023 • Shiyu Ni, Keping Bi, Jiafeng Guo, Xueqi Cheng
In this paper, we aim to conduct a systematic comparative study of training objectives that differ in three properties: whether they are permutation-invariant, whether they predict sequentially, and whether they can control the count of output facets.
no code implementations • 22 Sep 2023 • Zhilei Hu, Zixuan Li, Daozhu Xu, Long Bai, Cheng Jin, Xiaolong Jin, Jiafeng Guo, Xueqi Cheng
To comprehensively understand their intrinsic semantics, in this paper, we obtain prototype representations for each type of event relation and propose a Prototype-Enhanced Matching (ProtoEM) framework for the joint extraction of multiple kinds of event relations.
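The prototype-matching idea can be sketched as a prototypical-network-style classifier; the embeddings, relation labels, and distance choice below are illustrative, not the actual ProtoEM architecture:

```python
def prototype(examples):
    """Prototype of a relation type: the mean of its example embeddings
    (a common prototypical-network construction)."""
    dim = len(examples[0])
    return [sum(e[i] for e in examples) / len(examples) for i in range(dim)]

def nearest_prototype(x, protos):
    """Classify an event-pair embedding by its closest prototype
    (squared Euclidean distance)."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(protos, key=lambda name: d2(x, protos[name]))

protos = {
    "causal": prototype([[1.0, 0.0], [0.9, 0.1]]),
    "temporal": prototype([[0.0, 1.0], [0.1, 0.9]]),
}
print(nearest_prototype([0.8, 0.2], protos))  # causal
```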
no code implementations • 22 Sep 2023 • Weicheng Ren, Zixuan Li, Xiaolong Jin, Long Bai, Miao Su, Yantao Liu, Saiping Guan, Jiafeng Guo, Xueqi Cheng
Specifically, PerNee first recognizes the triggers of both inner and outer events and then recognizes the PEs by classifying the relation type between trigger pairs.
no code implementations • 5 Sep 2023 • Kaike Zhang, Qi Cao, Fei Sun, Yunfan Wu, Shuchang Tao, HuaWei Shen, Xueqi Cheng
With the rapid growth of information, recommender systems have become integral for providing personalized suggestions and overcoming information overload.
1 code implementation • 31 Aug 2023 • Yi Zhang, Yuying Zhao, Zhaoqing Li, Xueqi Cheng, Yu Wang, Olivera Kotevska, Philip S. Yu, Tyler Derr
Despite this progress, there is a lack of a comprehensive overview of the attacks and the techniques for preserving privacy in the graph domain.
no code implementations • 29 Aug 2023 • Jiangui Chen, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Wei Chen, Yixing Fan, Xueqi Cheng
We put forward a novel Continual-LEarner for generatiVE Retrieval (CLEVER) model and make two major contributions to continual learning for GR: (i) To encode new documents into docids with low computational cost, we present Incremental Product Quantization, which updates a partial quantization codebook according to two adaptive thresholds; and (ii) To memorize new documents for querying without forgetting previous knowledge, we propose a memory-augmented learning mechanism, to form meaningful connections between old and new documents.
1 code implementation • 24 Aug 2023 • Lu Chen, Ruqing Zhang, Wei Huang, Wei Chen, Jiafeng Guo, Xueqi Cheng
The key idea is to reformulate the Variational Auto-encoder (VAE) to fit the joint distribution of the document and summary variables from the training corpus.
1 code implementation • 22 Aug 2023 • Yinqiong Cai, Keping Bi, Yixing Fan, Jiafeng Guo, Wei Chen, Xueqi Cheng
First-stage retrieval is a critical task that aims to retrieve relevant document candidates from a large-scale collection.
no code implementations • 19 Aug 2023 • Yu-An Liu, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Wei Chen, Yixing Fan, Xueqi Cheng
The AREA task is meant to trick DR models into retrieving a target document that is outside the initial set of candidate documents retrieved by the DR model in response to a query.
no code implementations • 18 Aug 2023 • Wendong Bi, Xueqi Cheng, Bingbing Xu, Xiaoqian Sun, Li Xu, HuaWei Shen
Transfer learning has been a feasible way to transfer knowledge from high-quality external data of source domains to limited data of target domains, which follows a domain-level knowledge transfer to learn a shared posterior distribution.
1 code implementation • 21 Jul 2023 • Boshen Shi, Yongqing Wang, Fangda Guo, Jiangli Shao, HuaWei Shen, Xueqi Cheng
Overall, OpenGDA provides a user-friendly, scalable and reproducible benchmark for evaluating graph domain adaptation models.
no code implementations • 10 Jul 2023 • Yuying Zhao, Yu Wang, Yunchao Liu, Xueqi Cheng, Charu Aggarwal, Tyler Derr
Additionally, motivated by the concepts of user-level and item-level fairness, we broaden the understanding of diversity to encompass not only the item level but also the user level.
no code implementations • 22 Jun 2023 • Yu-An Liu, Ruqing Zhang, Jiafeng Guo, Wei Chen, Xueqi Cheng
Recently, we have witnessed generative retrieval increasingly gaining attention in the information retrieval (IR) field, which retrieves documents by directly generating their identifiers.
no code implementations • 25 May 2023 • Shuchang Tao, Qi Cao, HuaWei Shen, Yunfan Wu, Bingbing Xu, Xueqi Cheng
Through modeling and analyzing the causal relationships in graph adversarial attacks, we design two invariance objectives to learn the causal features.
no code implementations • 25 May 2023 • Yige Yuan, Bingbing Xu, Bo Lin, Liang Hou, Fei Sun, HuaWei Shen, Xueqi Cheng
The generalization of neural networks is a central challenge in machine learning, especially concerning the performance under distributions that differ from training ones.
1 code implementation • 24 May 2023 • Kangxi Wu, Liang Pang, HuaWei Shen, Xueqi Cheng, Tat-Seng Chua
By jointly analyzing the proxy perplexities of LLMs, we can determine the source of the generated text.
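A minimal sketch of perplexity-based source attribution (the attribution rule and toy scores are assumptions for illustration, not the paper's exact method):

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token log-probabilities: exp(-mean log p)."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def attribute_source(proxy_logprobs):
    """Hypothetical attribution rule: predict as source the candidate LLM
    under which the text has the lowest proxy perplexity."""
    ppls = {name: perplexity(lps) for name, lps in proxy_logprobs.items()}
    return min(ppls, key=ppls.get)

# Toy per-token log-probs the same text receives under two candidate models.
scores = {
    "model_a": [-0.2, -0.1, -0.3],   # text is likely under model_a
    "model_b": [-2.0, -1.5, -2.5],
}
print(attribute_source(scores))  # model_a
```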
no code implementations • 24 May 2023 • Yubao Tang, Ruqing Zhang, Jiafeng Guo, Jiangui Chen, Zuowei Zhu, Shuaiqiang Wang, Dawei Yin, Xueqi Cheng
Specifically, (1) we assign each document an Elaborative Description based on the query generation technique, which is more meaningful than a string of integers in the original DSI; and (2) for the associations between a document and its identifier, we take inspiration from Rehearsal Strategies in human learning.
1 code implementation • 22 May 2023 • Hanxing Ding, Liang Pang, Zihao Wei, HuaWei Shen, Xueqi Cheng, Tat-Seng Chua
Multi-aspect controllable text generation aims to generate fluent sentences that possess multiple desired attributes simultaneously.
no code implementations • 22 May 2023 • Zhilei Hu, Zixuan Li, Xiaolong Jin, Long Bai, Saiping Guan, Jiafeng Guo, Xueqi Cheng
This is a very challenging task, because causal relations are usually expressed by implicit associations between events.
no code implementations • 18 May 2023 • Shicheng Xu, Liang Pang, HuaWei Shen, Xueqi Cheng
Dense retrieval has shown promise in the first-stage retrieval process when trained on in-domain labeled datasets.
1 code implementation • 18 May 2023 • Junkai Zhou, Liang Pang, HuaWei Shen, Xueqi Cheng
Language models trained on large-scale corpora can generate remarkably fluent results in open-domain dialogue.
no code implementations • 10 May 2023 • Jiyao Wei, Saiping Guan, Xiaolong Jin, Jiafeng Guo, Xueqi Cheng
Link prediction on n-ary facts is to predict a missing element in an n-ary fact.
1 code implementation • 9 May 2023 • YuanHao Liu, Qi Cao, HuaWei Shen, Yunfan Wu, Shuchang Tao, Xueqi Cheng
In this paper, we propose a new criterion for popularity debiasing, i.e., in an unbiased recommender system, both popular and unpopular items should receive Interactions Proportional to the number of users who Like them, namely the IPL criterion.
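The IPL criterion can be sketched as a proportionality check over per-item interaction/like ratios; the deviation measure below is an illustrative choice, not necessarily the paper's metric:

```python
def ipl_deviation(interactions, likes):
    """Sketch of checking the IPL criterion: each item's interaction count
    should be proportional to the number of users who like it. Returns the
    largest deviation of per-item interaction/like ratios from their mean
    (0 means perfectly proportional)."""
    ratios = [i / l for i, l in zip(interactions, likes)]
    mean = sum(ratios) / len(ratios)
    return max(abs(r - mean) for r in ratios)

# Unbiased: popular and unpopular items get interactions in proportion.
print(ipl_deviation([100, 10], [50, 5]))   # 0.0
# Popularity-biased: the popular item is over-exposed.
print(ipl_deviation([190, 10], [50, 5]))   # > 0
```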
no code implementations • 3 May 2023 • Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
In this paper, we propose a new visual reasoning task, called Visual Transformation Telling (VTT).
1 code implementation • 2 May 2023 • Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
Such state-driven visual reasoning has limitations in reflecting the ability to infer the dynamics between different states, which has been shown to be equally important for human cognition in Piaget's theory.
1 code implementation • 28 Apr 2023 • Shicheng Xu, Liang Pang, HuaWei Shen, Xueqi Cheng, Tat-Seng Chua
Second, IR verifies the answer at each node of CoQ: when IR has high confidence, it corrects answers that are inconsistent with the retrieved information, which improves credibility.
1 code implementation • 28 Apr 2023 • Jiangui Chen, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yiqun Liu, Yixing Fan, Xueqi Cheng
Learning task-specific retrievers that return relevant contexts at an appropriate level of semantic granularity, such as a document retriever, passage retriever, sentence retriever, and entity retriever, may help to achieve better performance on the end-to-end task.
1 code implementation • 28 Apr 2023 • Yu-An Liu, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Wei Chen, Yixing Fan, Xueqi Cheng
In this paper, we focus on a more general type of perturbation and introduce the topic-oriented adversarial ranking attack task against NRMs, which aims to find an imperceptible perturbation that can promote a target document in ranking for a group of queries with the same topic.
1 code implementation • 16 Feb 2023 • Shuchang Tao, HuaWei Shen, Qi Cao, Yunfan Wu, Liang Hou, Xueqi Cheng
In this paper, we propose and formulate graph adversarial immunization, i.e., vaccinating part of the graph structure to improve the certifiable robustness of the graph against any admissible adversarial attack.
1 code implementation • 5 Feb 2023 • JunJie Huang, Qi Cao, Ruobing Xie, Shaoliang Zhang, Feng Xia, HuaWei Shen, Xueqi Cheng
To reduce the influence of data sparsity, Graph Contrastive Learning (GCL) is adopted in GNN-based CF methods for enhancing performance.
1 code implementation • 2 Feb 2023 • Wendong Bi, Bingbing Xu, Xiaoqian Sun, Li Xu, HuaWei Shen, Xueqi Cheng
To combat the above challenges, we propose Knowledge Transferable Graph Neural Network (KT-GNN), which models distribution shifts during message passing and representation learning by transferring knowledge from vocal nodes to silent nodes.
1 code implementation • 31 Jan 2023 • Wendong Bi, Bingbing Xu, Xiaoqian Sun, Zidong Wang, HuaWei Shen, Xueqi Cheng
However, most nodes in the tribe-style graph lack attributes, making it difficult to directly adopt existing graph learning methods (e.g., Graph Neural Networks (GNNs)).
no code implementations • 29 Jan 2023 • Danyang Hou, Liang Pang, Yanyan Lan, HuaWei Shen, Xueqi Cheng
In this paper, we focus on improving two problems of the two-stage method: (1) moment prediction bias: the predicted moments for most queries come from the top retrieved videos, ignoring the possibility that the target moment is in the bottom retrieved videos, which is caused by the inconsistency of Shared Normalization during training and inference.
no code implementations • 10 Jan 2023 • Yunchang Zhu, Liang Pang, Kangxi Wu, Yanyan Lan, HuaWei Shen, Xueqi Cheng
Comparative loss is essentially a ranking loss on top of the task-specific losses of the full and ablated models, with the expectation that the task-specific loss of the full model is minimal.
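A hinge-style instance of this idea might look as follows (the margin and exact ranking form are illustrative assumptions):

```python
def comparative_loss(full_loss, ablated_losses, margin=0.0):
    """Sketch of a comparative (ranking) loss on top of task-specific
    losses: penalize whenever the full model's task loss is not smaller
    than an ablated model's task loss."""
    return sum(max(0.0, full_loss - l + margin) for l in ablated_losses)

# Full model already best among its ablations: no comparative penalty.
print(comparative_loss(0.4, [0.9, 0.7]))  # 0.0
# Full model worse than one ablated variant: positive penalty.
print(comparative_loss(0.8, [0.9, 0.5]))  # > 0
```

In training, this term would be added to the task-specific loss so the full model is pushed to outperform every ablated variant.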
1 code implementation • 16 Dec 2022 • Long Bai, Saiping Guan, Zixuan Li, Jiafeng Guo, Xiaolong Jin, Xueqi Cheng
Fundamentally, it is based on the proposed rich event description, which enriches the existing ones with three kinds of important information, namely, the senses of verbs, extra semantic roles, and types of participants.
no code implementations • 1 Dec 2022 • Shicheng Xu, Liang Pang, HuaWei Shen, Xueqi Cheng
Different needs correspond to different IR tasks such as document retrieval, open-domain question answering, retrieval-based dialogue, etc., while they share the same schema to estimate the relationship between texts.
no code implementations • 20 Nov 2022 • Yige Yuan, Bingbing Xu, HuaWei Shen, Qi Cao, Keting Cen, Wen Zheng, Xueqi Cheng
Guided by the bound, we design a GCL framework named InfoAdv with enhanced generalization ability, which jointly optimizes the generalization metric and InfoMax to strike the right balance between pretext task fitting and the generalization ability on downstream tasks.
1 code implementation • 9 Nov 2022 • Wenxiang Sun, Yixing Fan, Jiafeng Guo, Ruqing Zhang, Xueqi Cheng
Since each entity often contains rich visual and textual information in KBs, we thus propose three different sub-tasks, i.e., visual to visual entity linking (V2VEL), visual to textual entity linking (V2TEL), and visual to visual-textual entity linking (V2VTEL).
no code implementations • 28 Oct 2022 • Sihao Yu, Fei Sun, Jiafeng Guo, Ruqing Zhang, Xueqi Cheng
However, such a strategy typically leads to a loss in model performance, which poses the challenge of increasing unlearning efficiency while maintaining acceptable performance.
1 code implementation • 19 Oct 2022 • Kaike Zhang, Qi Cao, Gaolin Fang, Bingbing Xu, Hongjian Zou, HuaWei Shen, Xueqi Cheng
Unsupervised representation learning for dynamic graphs has attracted a lot of research attention in recent years.
no code implementations • 18 Oct 2022 • Zixuan Li, Zhongni Hou, Saiping Guan, Xiaolong Jin, Weihua Peng, Long Bai, Yajuan Lyu, Wei Li, Jiafeng Guo, Xueqi Cheng
This is actually a matching task between a query and candidate entities based on their historical structures, which reflect behavioral trends of the entities at different timestamps.
1 code implementation • 14 Sep 2022 • Chen Wu, Ruqing Zhang, Jiafeng Guo, Wei Chen, Yixing Fan, Maarten de Rijke, Xueqi Cheng
A ranking model is said to be Certified Top-$K$ Robust on a ranked list when it is guaranteed to keep documents that are out of the top $K$ away from the top $K$ under any attack.
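This definition suggests a simple certification check, assuming per-document worst-case score bounds under any admissible attack are available (the interfaces below are hypothetical):

```python
def is_certified_topk_robust(scores, lower, upper, k):
    """Sketch of a certification check: given each document's clean score
    and its worst-case lower/upper score bounds under any admissible
    attack, the top-k set provably cannot admit an outside document if
    every top-k document's lower bound exceeds every outsider's upper
    bound."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    topk, rest = order[:k], order[k:]
    return min(lower[i] for i in topk) > max(upper[j] for j in rest)

# Certified: outsider's best attacked score (0.5) stays below top-2 worst case.
print(is_certified_topk_robust([0.9, 0.8, 0.3], [0.7, 0.6, 0.2], [1.0, 0.9, 0.5], k=2))  # True
# Not certified: the outsider could overtake document 1 under attack.
print(is_certified_topk_robust([0.9, 0.8, 0.3], [0.7, 0.4, 0.2], [1.0, 0.9, 0.6], k=2))  # False
```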
no code implementations • 12 Sep 2022 • Yinqiong Cai, Jiafeng Guo, Yixing Fan, Qingyao Ai, Ruqing Zhang, Xueqi Cheng
When sampling top-ranked results (excluding the labeled positives) as negatives from a stronger retriever, the performance of the learned NRM becomes even worse.
no code implementations • 21 Aug 2022 • Xinyu Ma, Ruqing Zhang, Jiafeng Guo, Yixing Fan, Xueqi Cheng
Empirical results show that our method can significantly outperform the state-of-the-art autoencoder-based language models and other pre-trained models for dense retrieval.
no code implementations • 21 Aug 2022 • Xinyu Ma, Jiafeng Guo, Ruqing Zhang, Yixing Fan, Xueqi Cheng
Unlike the promising results in NLP, we find that these methods cannot achieve comparable performance to full fine-tuning at both stages when updating less than 1% of the original model parameters.
1 code implementation • 16 Aug 2022 • Jiangui Chen, Ruqing Zhang, Jiafeng Guo, Yiqun Liu, Yixing Fan, Xueqi Cheng
We show that a strong generative retrieval model can be learned with a set of adequately designed pre-training tasks, and be adopted to improve a variety of downstream KILT tasks with further fine-tuning.
1 code implementation • 3 Aug 2022 • Shuchang Tao, Qi Cao, HuaWei Shen, Yunfan Wu, Liang Hou, Fei Sun, Xueqi Cheng
In this paper, we first propose and define camouflage as distribution similarity between ego networks of injected nodes and normal nodes.
no code implementations • 4 Jul 2022 • Houquan Zhou, Shenghua Liu, Danai Koutra, HuaWei Shen, Xueqi Cheng
Recent works try to improve scalability via graph summarization -- i.e., they learn embeddings on a smaller summary graph, and then restore the node embeddings of the original graph.
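The restore step can be sketched as nodes inheriting their supernode's embedding; real methods typically refine this further, and all names below are illustrative:

```python
def restore_embeddings(summary_emb, membership):
    """Sketch of restoring node embeddings from a summary graph: each
    original node inherits the embedding learned for its supernode."""
    return {node: summary_emb[s] for node, s in membership.items()}

# Two supernodes summarizing a four-node graph (assignments illustrative).
summary_emb = {"S0": [1.0, 0.0], "S1": [0.0, 1.0]}
membership = {0: "S0", 1: "S0", 2: "S1", 3: "S1"}
print(restore_embeddings(summary_emb, membership)[1])  # [1.0, 0.0]
```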
no code implementations • 27 Jun 2022 • Yan Jiang, Jinhua Gao, HuaWei Shen, Xueqi Cheng
The main challenge of this task is two-fold: few-shot learning resulting from the varying targets, and the lack of contextual information about the targets.
1 code implementation • 31 May 2022 • Liang Hou, Qi Cao, Yige Yuan, Songtao Zhao, Chongyang Ma, Siyuan Pan, Pengfei Wan, Zhongyuan Wang, HuaWei Shen, Xueqi Cheng
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
1 code implementation • 25 Apr 2022 • Yunchang Zhu, Liang Pang, Yanyan Lan, HuaWei Shen, Xueqi Cheng
Ideally, if a PRF model can distinguish between irrelevant and relevant information in the feedback, the more feedback documents there are, the better the revised query will be.
1 code implementation • 22 Apr 2022 • Xinyu Ma, Jiafeng Guo, Ruqing Zhang, Yixing Fan, Xueqi Cheng
Therefore, in this work, we propose to drop out the decoder and introduce a novel contrastive span prediction task to pre-train the encoder alone.
no code implementations • 18 Apr 2022 • Quan Ding, Shenghua Liu, Bin Zhou, HuaWei Shen, Xueqi Cheng
Given a multivariate big time series, can we detect anomalies as soon as they occur?
1 code implementation • 12 Apr 2022 • Jiangui Chen, Ruqing Zhang, Jiafeng Guo, Yixing Fan, Xueqi Cheng
This classical approach has clear drawbacks as follows: i) a large document index as well as a complicated search process is required, leading to considerable memory and computational overhead; ii) independent scoring paradigms fail to capture the interactions among documents and sentences in ranking; iii) a fixed number of sentences are selected to form the final evidence set.
1 code implementation • 6 Apr 2022 • Shicheng Xu, Liang Pang, HuaWei Shen, Xueqi Cheng
In the generalization stage, the matching model explores the essential matching signals by being trained on diverse matching tasks.
no code implementations • 4 Apr 2022 • Chen Wu, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, Xueqi Cheng
We focus on the decision-based black-box attack setting, where the attackers cannot directly get access to the model information, but can only query the target model to obtain the rank positions of the partial retrieved list.
no code implementations • 22 Mar 2022 • Zhaohui Wang, Qi Cao, HuaWei Shen, Bingbing Xu, Xueqi Cheng
The expressive power of message passing GNNs is upper-bounded by the Weisfeiler-Lehman (WL) test.
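The WL test itself is easy to sketch: 1-WL color refinement cannot distinguish, for example, a 6-cycle from two disjoint triangles, so neither can a message passing GNN:

```python
def wl_signature(adj, rounds=3):
    """1-Weisfeiler-Lehman color refinement: iteratively hash each node's
    color together with the multiset of its neighbors' colors; the final
    multiset of colors is a graph signature. Message passing GNNs cannot
    distinguish graphs with equal 1-WL signatures."""
    colors = {v: len(adj[v]) for v in adj}  # initialize with degrees
    for _ in range(rounds):
        colors = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
                  for v in adj}
    return sorted(colors.values())

# A 6-cycle and two disjoint triangles are both 2-regular, so 1-WL
# assigns every node the same color in both graphs at every round.
c6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_c3 = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
print(wl_signature(c6) == wl_signature(two_c3))  # True
```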
1 code implementation • ACL 2022 • Zixuan Li, Saiping Guan, Xiaolong Jin, Weihua Peng, Yajuan Lyu, Yong Zhu, Long Bai, Wei Li, Jiafeng Guo, Xueqi Cheng
Furthermore, these models are all trained offline, and thus cannot adapt well to subsequent changes in evolutional patterns.
no code implementations • CVPR 2022 • Sihao Yu, Jiafeng Guo, Ruqing Zhang, Yixing Fan, Zizhen Wang, Xueqi Cheng
By reducing the weights of the majority classes, such instances would become more difficult to learn and consequently hurt the overall performance.
1 code implementation • 31 Dec 2021 • Saiping Guan, Xueqi Cheng, Long Bai, Fujun Zhang, Zixuan Li, Yutao Zeng, Xiaolong Jin, Jiafeng Guo
Besides entity-centric knowledge, usually organized as a Knowledge Graph (KG), events are also an essential kind of knowledge in the world, which has triggered the emergence of event-centric knowledge representation forms like the Event KG (EKG).
no code implementations • 29 Sep 2021 • Jingwei Liu, Yi Gu, Shentong Mo, Zhun Sun, Shumin Han, Jiafeng Guo, Xueqi Cheng
In self-supervised learning frameworks, deep networks are optimized to align different views of an instance that contain similar visual semantic information.
1 code implementation • EMNLP 2021 • Long Bai, Saiping Guan, Jiafeng Guo, Zixuan Li, Xiaolong Jin, Xueqi Cheng
In this paper, we propose a Transformer-based model, called MCPredictor, which integrates deep event-level and script-level information for script event prediction.
1 code implementation • EMNLP 2021 • Fei Xiao, Liang Pang, Yanyan Lan, Yan Wang, HuaWei Shen, Xueqi Cheng
The proposed transductive learning approach is general and effective for the task of unsupervised style transfer, and we will apply it to the other two typical methods in the future.
1 code implementation • EMNLP 2021 • Yunchang Zhu, Liang Pang, Yanyan Lan, HuaWei Shen, Xueqi Cheng
Information seeking is an essential step for open-domain question answering to efficiently gather evidence from a large corpus.
Ranked #3 on Question Answering on HotpotQA
1 code implementation • 30 Aug 2021 • Shuchang Tao, Qi Cao, HuaWei Shen, JunJie Huang, Yunfan Wu, Xueqi Cheng
In this paper, we focus on an extremely limited scenario of single node injection evasion attack, i.e., the attacker is only allowed to inject one single node during the test phase to hurt the GNN's performance.
1 code implementation • 22 Aug 2021 • JunJie Huang, HuaWei Shen, Qi Cao, Shuchang Tao, Xueqi Cheng
Signed bipartite networks differ from classical signed networks in that they contain two different node sets and signed links between the two node sets.
no code implementations • 16 Aug 2021 • Lijuan Chen, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
We further extend these constraints to the semantic settings, which are shown to be better satisfied for all the deep text matching models.
2 code implementations • 11 Aug 2021 • Jiangui Chen, Ruqing Zhang, Jiafeng Guo, Yixing Fan, Xueqi Cheng
A possible solution to this dilemma is a new approach known as federated learning, which is a privacy-preserving machine learning technique over distributed datasets.
no code implementations • 11 Aug 2021 • Chen Wu, Ruqing Zhang, Jiafeng Guo, Yixing Fan, Xueqi Cheng
So we raise the question in this work: Are neural ranking models robust?
2 code implementations • 21 Jul 2021 • Liang Hou, Qi Cao, HuaWei Shen, Siyuan Pan, Xiaoshuang Li, Xueqi Cheng
Specifically, the proposed auxiliary discriminative classifier becomes generator-aware by recognizing the class-labels of the real data and the generated data discriminatively.
Ranked #1 on Conditional Image Generation on Tiny ImageNet
no code implementations • 18 Jul 2021 • Yinqiong Cai, Yixing Fan, Jiafeng Guo, Ruqing Zhang, Yanyan Lan, Xueqi Cheng
However, these methods often lose the discriminative power of term-based methods, thus introducing noise during retrieval and hurting recall performance.
1 code implementation • 12 Jul 2021 • Yunfan Wu, Qi Cao, HuaWei Shen, Shuchang Tao, Xueqi Cheng
INMO generates the inductive embeddings for users (items) by characterizing their interactions with some template items (template users), instead of employing an embedding lookup table.
2 code implementations • NeurIPS 2021 • Liang Hou, HuaWei Shen, Qi Cao, Xueqi Cheng
Recently, transformation-based self-supervised learning has been applied to generative adversarial networks (GANs) to mitigate catastrophic forgetting in the discriminator by introducing a stationary learning environment.
no code implementations • ACL 2021 • Zixuan Li, Xiaolong Jin, Saiping Guan, Wei Li, Jiafeng Guo, Yuanzhuo Wang, Xueqi Cheng
Specifically, at the clue searching stage, CluSTeR learns a beam search policy via reinforcement learning (RL) to induce multiple clues from historical facts.
1 code implementation • 21 Apr 2021 • Zixuan Li, Xiaolong Jin, Wei Li, Saiping Guan, Jiafeng Guo, HuaWei Shen, Yuanzhuo Wang, Xueqi Cheng
To capture these properties effectively and efficiently, we propose a novel Recurrent Evolution network based on Graph Convolution Network (GCN), called RE-GCN, which learns the evolutional representations of entities and relations at each timestamp by modeling the KG sequence recurrently.
1 code implementation • 21 Apr 2021 • Saiping Guan, Xiaolong Jin, Jiafeng Guo, Yuanzhuo Wang, Xueqi Cheng
However, they mainly focus on link prediction on binary relational data, where facts are usually represented as triples in the form of (head entity, relation, tail entity).
1 code implementation • 20 Apr 2021 • Xinyu Ma, Jiafeng Guo, Ruqing Zhang, Yixing Fan, Yingyan Li, Xueqi Cheng
The basic idea of PROP is to construct the representative words prediction (ROP) task for pre-training, inspired by the query likelihood model.
no code implementations • 19 Apr 2021 • Jiangli Shao, Yongqing Wang, Hao Gao, HuaWei Shen, Yangyang Li, Xueqi Cheng
However, encouraged by online services, users would also post asymmetric information across networks, such as geo-locations and texts.
1 code implementation • 2 Apr 2021 • Changying Hao, Liang Pang, Yanyan Lan, Yan Wang, Jiafeng Guo, Xueqi Cheng
In the sketch stage, a skeleton is extracted from the original ending by removing words that conflict with the counterfactual condition.
no code implementations • 19 Mar 2021 • Hao Gao, Yongqing Wang, Shanshan Lyu, HuaWei Shen, Xueqi Cheng
However, the low quality of observed user data confuses the judgment on anchor links, resulting in the matching collision problem in practice.
1 code implementation • 8 Mar 2021 • Jiafeng Guo, Yinqiong Cai, Yixing Fan, Fei Sun, Ruqing Zhang, Xueqi Cheng
We believe it is the right time to survey current status, learn from existing methods, and gain some insights for future development.
no code implementations • 1 Mar 2021 • Yixing Fan, Jiafeng Guo, Xinyu Ma, Ruqing Zhang, Yanyan Lan, Xueqi Cheng
We employ 16 linguistic tasks to probe a unified retrieval model over these three retrieval tasks to answer this question.
no code implementations • 25 Feb 2021 • Chen Wu, Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Xueqi Cheng
One is the widely adopted metric such as F1 which acts as a balanced objective, and the other is the best F1 under some minimal recall constraint which represents a typical objective in professional search.
1 code implementation • 16 Jan 2021 • Liang Pang, Yanyan Lan, Xueqi Cheng
However, these models designed for short texts cannot well address the long-form text matching problem, because many contexts in long-form texts cannot be directly aligned with each other, and it is difficult for existing models to capture the key matching signals from such noisy data.
no code implementations • 15 Jan 2021 • Fabin Shi, Nathan Aden, Shengda Huang, Neil Johnson, Xiaoqian Sun, Jinhua Gao, Li Xu, HuaWei Shen, Xueqi Cheng, Chaoming Song
Understanding the emergence of universal features such as the stylized facts in markets is a long-standing challenge that has drawn much attention from economists and physicists.
1 code implementation • 7 Jan 2021 • JunJie Huang, HuaWei Shen, Liang Hou, Xueqi Cheng
Guided by related sociological theories, we propose a novel Signed Directed Graph Neural Networks model named SDGNN to learn node embeddings for signed directed networks.
no code implementations • 1 Jan 2021 • Xu Bingbing, HuaWei Shen, Qi Cao, YuanHao Liu, Keting Cen, Xueqi Cheng
For a target node, diverse sampling offers it diverse neighborhoods, i.e., rooted sub-graphs, and the representation of the target node is finally obtained by aggregating the representations of the diverse neighborhoods, each obtained using any GNN model.
1 code implementation • 10 Dec 2020 • Liang Hou, Zehuan Yuan, Lei Huang, HuaWei Shen, Xueqi Cheng, Changhu Wang
In particular, for real-time generation tasks, different devices require generators of different sizes due to varying computing power.
1 code implementation • 3 Dec 2020 • Jiabao Zhang, Shenghua Liu, Wenting Hou, Siddharth Bhatia, HuaWei Shen, Wenjian Yu, Xueqi Cheng
Therefore, we propose a fast streaming algorithm, AugSplicing, which can detect the top dense blocks by incrementally splicing the previous detection with the incoming ones in new tuples, avoiding re-runs over all the history data at every tracking time step.
no code implementations • COLING 2020 • Yutao Zeng, Xiaolong Jin, Saiping Guan, Jiafeng Guo, Xueqi Cheng
To resolve event coreference, existing methods usually calculate the similarities between event mentions and between specific kinds of event arguments.
1 code implementation • CVPR 2021 • Xin Hong, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
Following this definition, a new dataset namely TRANCE is constructed on the basis of CLEVR, including three levels of settings, i.e., Basic (single-step transformation), Event (multi-step transformation), and View (multi-step transformation with variant views).
1 code implementation • 20 Oct 2020 • Xinyu Ma, Jiafeng Guo, Ruqing Zhang, Yixing Fan, Xiang Ji, Xueqi Cheng
Recently pre-trained language representation models such as BERT have shown great success when fine-tuned on downstream tasks including information retrieval (IR).
no code implementations • 19 Oct 2020 • Houquan Zhou, Shenghua Liu, Kyuhan Lee, Kijung Shin, HuaWei Shen, Xueqi Cheng
As a solution, graph summarization, which aims to find a compact representation that preserves the important properties of a given graph, has received much attention, and numerous algorithms have been developed for it.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Wanqing Cui, Yanyan Lan, Liang Pang, Jiafeng Guo, Xueqi Cheng
This paper proposes a novel approach to learn commonsense from images, instead of limited raw texts or costly constructed knowledge bases, for the commonsense reasoning problem in NLP.
1 code implementation • CIKM 2017 • Qi Cao, HuaWei Shen, Keting Cen, Wentao Ouyang, Xueqi Cheng
In this paper, we propose DeepHawkes to combat the defects of existing methods, leveraging end-to-end deep learning to make an analogy to interpretable factors of the Hawkes process, a widely-used generative process for modeling information cascades.
no code implementations • NeurIPS 2012 • Yanyan Lan, Jiafeng Guo, Xueqi Cheng, Tie-Yan Liu
This paper is concerned with the statistical consistency of ranking methods.