1 code implementation • 25 Apr 2022 • Yunchang Zhu, Liang Pang, Yanyan Lan, HuaWei Shen, Xueqi Cheng
Ideally, if a PRF model can distinguish between irrelevant and relevant information in the feedback, the more feedback documents there are, the better the revised query will be.
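The paper's own PRF model is not shown here, but the idea of revising a query with (pseudo-)relevant feedback documents can be illustrated with the classic Rocchio update, in which the query vector is moved toward the centroid of the feedback documents. This is a minimal sketch of that standard technique, not the paper's method; the weights `alpha` and `beta` are illustrative choices.

```python
import numpy as np

def rocchio(query_vec, feedback_docs, alpha=1.0, beta=0.75):
    """Classic Rocchio pseudo-relevance feedback: move the query
    vector toward the centroid of the assumed-relevant feedback
    documents, so terms common in the feedback get boosted."""
    centroid = np.mean(feedback_docs, axis=0)
    return alpha * query_vec + beta * centroid

# Toy term-vector example: query mentions only term 0, feedback
# documents mention terms 1 and 2.
q = np.array([1.0, 0.0, 0.0])
docs = np.array([[0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
revised = rocchio(q, docs)  # mixes the query with feedback terms
```

Note that Rocchio weights all feedback documents equally; the point of the entry above is precisely that a model able to down-weight irrelevant feedback should benefit from more feedback documents rather than be hurt by them.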
no code implementations • 18 Apr 2022 • Quan Ding, Shenghua Liu, Bin Zhou, HuaWei Shen, Xueqi Cheng
Given a multivariate big time series, can we detect anomalies as soon as they occur?
no code implementations • 6 Apr 2022 • Shicheng Xu, Liang Pang, HuaWei Shen, Xueqi Cheng
This is because end-to-end supervised learning on a task-specific dataset makes the model overemphasize data sample bias and task-specific signals instead of the essential matching signals, which ruins the model's generalization to different tasks.
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, HuaWei Shen, HUI ZHANG, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan YAO, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, LiWei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks becomes a popular paradigm.
no code implementations • 22 Mar 2022 • Zhaohui Wang, Qi Cao, HuaWei Shen, Bingbing Xu, Xueqi Cheng
The expressive power of message passing GNNs is upper-bounded by the Weisfeiler-Lehman (WL) test.
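The 1-WL test referenced above is easy to make concrete: it iteratively recolors each node by hashing its own color together with the multiset of its neighbors' colors, and declares two graphs possibly isomorphic if the final color histograms match. A minimal sketch (my own illustration, not code from the paper):

```python
from collections import Counter

def wl_colors(adj, rounds=3):
    """1-Weisfeiler-Lehman color refinement on an adjacency list.
    Returns the histogram of final node colors; graphs with different
    histograms are certainly non-isomorphic."""
    n = len(adj)
    colors = [0] * n  # start from a uniform coloring
    for _ in range(rounds):
        # New color = (own color, sorted multiset of neighbor colors)
        signatures = [
            (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in range(n)
        ]
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures)))}
        colors = [relabel[sig] for sig in signatures]
    return Counter(colors)

# A 6-cycle and two disjoint triangles: both 2-regular, so 1-WL
# (and hence any message passing GNN) cannot tell them apart.
hexagon = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
assert wl_colors(hexagon) == wl_colors(two_triangles)
```

The hexagon-versus-triangles pair is the standard example of this expressivity ceiling: every node in both graphs has degree 2, so refinement never splits the coloring.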
1 code implementation • EMNLP 2021 • Fei Xiao, Liang Pang, Yanyan Lan, Yan Wang, HuaWei Shen, Xueqi Cheng
The proposed transductive learning approach is general and effective for the task of unsupervised style transfer, and we will apply it to the other two typical methods in the future.
1 code implementation • EMNLP 2021 • Yunchang Zhu, Liang Pang, Yanyan Lan, HuaWei Shen, Xueqi Cheng
Information seeking is an essential step for open-domain question answering to efficiently gather evidence from a large corpus.
Ranked #2 on Question Answering on HotpotQA
1 code implementation • 30 Aug 2021 • Shuchang Tao, Qi Cao, HuaWei Shen, JunJie Huang, Yunfan Wu, Xueqi Cheng
In this paper, we focus on an extremely limited scenario of single node injection evasion attack, i.e., the attacker is only allowed to inject one single node during the test phase to hurt the GNN's performance.
1 code implementation • 22 Aug 2021 • JunJie Huang, HuaWei Shen, Qi Cao, Shuchang Tao, Xueqi Cheng
Signed bipartite networks differ from classical signed networks in that they contain two distinct node sets, with signed links running between the two sets.
1 code implementation • 21 Jul 2021 • Liang Hou, Qi Cao, HuaWei Shen, Siyuan Pan, Xiaoshuang Li, Xueqi Cheng
Specifically, the proposed auxiliary \textit{discriminative} classifier becomes generator-aware by recognizing the labels of the real data and the generated data \textit{discriminatively}.
1 code implementation • 12 Jul 2021 • Yunfan Wu, Qi Cao, HuaWei Shen, Shuchang Tao, Xueqi Cheng
INMO generates inductive embeddings for users (items) by characterizing their interactions with a set of template items (template users), instead of employing an embedding lookup table.
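The template-based idea above can be sketched in a few lines: a user's embedding is built from the learnable vectors of the template items it has interacted with, so a brand-new user needs no lookup-table entry. This is a hedged illustration of the general mechanism only; the function name, the averaging, and the shapes are my assumptions, not INMO's exact formulation.

```python
import numpy as np

def inductive_user_embedding(interactions, template_vecs):
    """Embed a user from its binary interaction row over the template
    items: average the vectors of the template items the user has
    interacted with (no per-user lookup table required)."""
    mask = interactions.astype(float)   # 1 where the user interacted
    emb = mask @ template_vecs          # sum of interacted template vectors
    count = mask.sum()
    return emb / max(count, 1.0)        # average keeps the scale stable

rng = np.random.default_rng(0)
template_vecs = rng.normal(size=(4, 8))   # 4 template items, dim 8
new_user = np.array([1, 0, 1, 0])         # interacted with items 0 and 2
emb = inductive_user_embedding(new_user, template_vecs)
```

Because the embedding depends only on observed interactions and shared template vectors, it extends to users unseen at training time, which is the inductive property the entry highlights.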
1 code implementation • NeurIPS 2021 • Liang Hou, HuaWei Shen, Qi Cao, Xueqi Cheng
Recently, transformation-based self-supervised learning has been applied to generative adversarial networks (GANs) to mitigate catastrophic forgetting in the discriminator by introducing a stationary learning environment.
1 code implementation • 21 Apr 2021 • Zixuan Li, Xiaolong Jin, Wei Li, Saiping Guan, Jiafeng Guo, HuaWei Shen, Yuanzhuo Wang, Xueqi Cheng
To capture these properties effectively and efficiently, we propose a novel Recurrent Evolution network based on Graph Convolution Network (GCN), called RE-GCN, which learns the evolutional representations of entities and relations at each timestamp by modeling the KG sequence recurrently.
no code implementations • 19 Apr 2021 • Jiangli Shao, Yongqing Wang, Hao Gao, HuaWei Shen, Yangyang Li, Xueqi Cheng
However, encouraged by online services, users would also post asymmetric information across networks, such as geo-locations and texts.
no code implementations • 19 Mar 2021 • Hao Gao, Yongqing Wang, Shanshan Lyu, HuaWei Shen, Xueqi Cheng
However, the low quality of observed user data confounds the identification of anchor links, resulting in the matching collision problem in practice.
no code implementations • 15 Jan 2021 • Fabin Shi, Nathan Aden, Shengda Huang, Neil Johnson, Xiaoqian Sun, Jinhua Gao, Li Xu, HuaWei Shen, Xueqi Cheng, Chaoming Song
Understanding the emergence of universal features such as the stylized facts in markets is a long-standing challenge that has drawn much attention from economists and physicists.
1 code implementation • 7 Jan 2021 • JunJie Huang, HuaWei Shen, Liang Hou, Xueqi Cheng
Guided by related sociological theories, we propose a novel Signed Directed Graph Neural Networks model named SDGNN to learn node embeddings for signed directed networks.
1 code implementation • 4 Jan 2021 • Deyu Bo, Xiao Wang, Chuan Shi, HuaWei Shen
For a deeper understanding, we theoretically analyze the roles of low-frequency signals and high-frequency signals on learning node representations, which further explains why FAGCN can perform well on different types of networks.
no code implementations • 1 Jan 2021 • Xu Bingbing, HuaWei Shen, Qi Cao, YuanHao Liu, Keting Cen, Xueqi Cheng
For a target node, diverse sampling offers diverse neighborhoods, i.e., rooted sub-graphs, and the representation of the target node is finally obtained by aggregating the representations of the diverse neighborhoods produced by any GNN model.
1 code implementation • 21 Dec 2020 • Chao Yang, Su Feng, Dongsheng Li, HuaWei Shen, Guoqing Wang, Bin Jiang
Many works concentrate on how to reduce language bias, which makes models answer questions while ignoring visual content and language context.
no code implementations • 20 Dec 2020 • Chao Yang, Guoqing Wang, Dongsheng Li, HuaWei Shen, Su Feng, Bin Jiang
Referring expression comprehension (REC) aims to find the location in a given image that a phrase refers to.
1 code implementation • 10 Dec 2020 • Liang Hou, Zehuan Yuan, Lei Huang, HuaWei Shen, Xueqi Cheng, Changhu Wang
In particular, for real-time generation tasks, different devices require generators of different sizes due to varying computing power.
1 code implementation • 3 Dec 2020 • Jiabao Zhang, Shenghua Liu, Wenting Hou, Siddharth Bhatia, HuaWei Shen, Wenjian Yu, Xueqi Cheng
Therefore, we propose a fast streaming algorithm, AugSplicing, which can detect the top dense blocks by incrementally splicing the previous detection with the incoming ones in new tuples, avoiding re-runs over all the history data at every tracking time step.
no code implementations • 19 Oct 2020 • Houquan Zhou, Shenghua Liu, Kyuhan Lee, Kijung Shin, HuaWei Shen, Xueqi Cheng
As a solution, graph summarization, which aims to find a compact representation that preserves the important properties of a given graph, has received much attention, and numerous algorithms have been developed for it.
Social and Information Networks
1 code implementation • CIKM 2017 • Qi Cao, HuaWei Shen, Keting Cen, Wentao Ouyang, Xueqi Cheng
In this paper, we propose DeepHawkes to combat the defects of existing methods, leveraging end-to-end deep learning to make an analogy to the interpretable factors of the Hawkes process, a widely-used generative process for modeling information cascades.
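The Hawkes process invoked above is self-exciting: each past event temporarily raises the rate of future events, mirroring how each reshare in a cascade can trigger further reshares. A minimal sketch of the standard univariate intensity with an exponential kernel (the parameter values are illustrative, and this is the textbook process, not DeepHawkes itself):

```python
import math

def hawkes_intensity(t, events, mu=0.2, alpha=0.8, beta=1.0):
    """Intensity of a univariate Hawkes process with exponential kernel:
    lambda(t) = mu + alpha * sum_{t_i < t} beta * exp(-beta * (t - t_i)).
    mu is the base rate; each past event adds a decaying excitation."""
    excitation = sum(
        beta * math.exp(-beta * (t - t_i)) for t_i in events if t_i < t
    )
    return mu + alpha * excitation

events = [0.0, 0.5, 0.9]          # reshare times observed so far
rate_now = hawkes_intensity(1.0, events)
```

The interpretable factors DeepHawkes draws its analogy to correspond to pieces of this formula: the influence of each participant (the jump size), the time decay of that influence (the kernel), and the self-exciting accumulation over the cascade.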
1 code implementation • 1 May 2017 • Yongqing Wang, HuaWei Shen, Shenghua Liu, Jinhua Gao, Xueqi Cheng
However, for cascade prediction, each cascade generally corresponds to a diffusion tree, causing cross-dependence within cascades: one sharing behavior could be triggered by its non-immediate predecessor in the memory chain.