no code implementations • 3 Mar 2023 • Xingxuan Zhang, Renzhe Xu, Han Yu, Hao Zou, Peng Cui
Yet the current definition of flatness discussed in SAM and its follow-ups is limited to zeroth-order flatness (i.e., the worst-case loss within a perturbation radius).
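Zeroth-order flatness as used by SAM-style methods can be sketched in a few lines of plain Python. The quadratic loss, its gradient, and the radius `rho` below are illustrative assumptions, and the inner maximization is approximated (as in SAM) by a single normalized ascent step:

```python
import math

def loss(w):
    # Illustrative quadratic loss; a real model's training loss goes here.
    return 0.5 * sum(x * x for x in w)

def grad(w):
    return list(w)  # analytic gradient of the quadratic above

def worst_case_loss(w, rho=0.05):
    """Zeroth-order flatness: loss after a single ascent step of radius
    rho along the normalized gradient (SAM's inner-max approximation)."""
    g = grad(w)
    norm = math.sqrt(sum(x * x for x in g)) + 1e-12
    w_adv = [wi + rho * gi / norm for wi, gi in zip(w, g)]
    return loss(w_adv)

w = [1.0, -2.0, 0.5]
sharpness = worst_case_loss(w) - loss(w)  # non-negative near a minimum
```

The gap `worst_case_loss(w) - loss(w)` is exactly the sharpness that SAM-style objectives penalize.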
no code implementations • 10 Feb 2023 • Peng Cui, Yang Yue, Zhijie Deng, Jun Zhu
Deep neural networks (DNNs) have achieved remarkable success in a variety of computer vision tasks, where massive labeled images are routinely required for model optimization.
1 code implementation • 24 Jan 2023 • Xiao Zhou, Yong Lin, Renjie Pi, Weizhong Zhang, Renzhe Xu, Peng Cui, Tong Zhang
The overfitting issue is addressed by considering a bilevel formulation to search for the sample reweighting, in which the generalization complexity depends on the search space of sample weights instead of the model size.
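A minimal sketch of the inner fit in such a bilevel sample-reweighting scheme (the toy data and function names are our own assumptions, not the paper's code): for squared loss, the inner minimizer is just the weighted mean, so the outer search over the space of sample weights, rather than over model parameters, controls what the model converges to.

```python
def weighted_fit(ys, weights):
    """Minimizer of sum_i w_i * (y_i - m)^2 over m: the weighted mean.
    Searching over sample weights (the bilevel outer problem) changes
    what this inner fit converges to."""
    total = sum(weights)
    return sum(w * y for w, y in zip(ys, weights)) / total

ys = [1.0, 1.0, 10.0]                    # last sample is an outlier
uniform = weighted_fit(ys, [1, 1, 1])    # plain ERM fit
down = weighted_fit(ys, [1, 1, 0.1])     # outlier downweighted
```

Downweighting the third sample pulls the fit from 4.0 toward the majority of the data, illustrating how the weight search space, not the model size, governs the fit.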
no code implementations • 2 Dec 2022 • Han Yu, Peng Cui, Yue He, Zheyan Shen, Yong Lin, Renzhe Xu, Xingxuan Zhang
The problem of covariate-shift generalization has attracted intensive research attention.
no code implementations • 3 Nov 2022 • Lipeng Gu, Xuefeng Yan, Peng Cui, Lina Gong, Haoran Xie, Fu Lee Wang, Jin Qin, Mingqiang Wei
There is a trend to fuse multi-modal information for 3D object detection (3OD).
no code implementations • 28 Oct 2022 • Ming Tong, Yongzhen Wang, Peng Cui, Xuefeng Yan, Mingqiang Wei
Semi-UFormer can effectively leverage both real-world hazy images and their uncertainty guidance information.
1 code implementation • 23 Oct 2022 • Zhijie Deng, Jiaxin Shi, Hao Zhang, Peng Cui, Cewu Lu, Jun Zhu
In this paper, we introduce a scalable method for learning structured, adaptive-length deep representations.
1 code implementation • 15 Oct 2022 • Renzhe Xu, Xingxuan Zhang, Bo Li, Yafeng Zhang, Xiaolong Chen, Peng Cui
In this paper, we assume that each consumer can purchase multiple products at will.
no code implementations • 7 Jun 2022 • Jiashuo Liu, Jiayun Wu, Jie Peng, Zheyan Shen, Bo Li, Peng Cui
We reformulate the invariant learning problem under latent heterogeneity into a relaxed form that pursues the distributional invariance, based on which we propose our novel Distributionally Invariant Learning (DIL) framework as well as two implementations named DIL-MMD and DIL-KL.
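The distributional-invariance penalty in a DIL-MMD-style implementation is typically a maximum mean discrepancy between representations from different latent environments. Here is a minimal pure-Python sketch with an RBF kernel; the biased estimator and the `gamma` value are our own assumptions:

```python
import math

def rbf(x, y, gamma=1.0):
    # RBF kernel between two feature vectors.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(X, Y, gamma=1.0):
    """Biased squared MMD between sample sets X and Y. A DIL-MMD-style
    penalty uses such a term to push representations of different
    latent environments toward the same distribution."""
    kxx = sum(rbf(a, b, gamma) for a in X for b in X) / (len(X) ** 2)
    kyy = sum(rbf(a, b, gamma) for a in Y for b in Y) / (len(Y) ** 2)
    kxy = sum(rbf(a, b, gamma) for a in X for b in Y) / (len(X) * len(Y))
    return kxx + kyy - 2 * kxy

same = mmd2([[0.0], [0.1]], [[0.0], [0.1]])  # ~0 for identical samples
far = mmd2([[0.0], [0.1]], [[3.0], [3.1]])   # large for shifted samples
```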
no code implementations • 20 May 2022 • Bingzhe Wu, Jintang Li, Junchi Yu, Yatao Bian, Hengtong Zhang, Chaochao Chen, Chengbin Hou, Guoji Fu, Liang Chen, Tingyang Xu, Yu Rong, Xiaolin Zheng, Junzhou Huang, Ran He, Baoyuan Wu, Guangyu Sun, Peng Cui, Zibin Zheng, Zhe Liu, Peilin Zhao
Deep graph learning has achieved remarkable progress in both business and scientific areas, ranging from finance and e-commerce to drug and advanced-material discovery.
2 code implementations • 17 Apr 2022 • Xingxuan Zhang, Yue He, Renzhe Xu, Han Yu, Zheyan Shen, Peng Cui
Most current evaluation methods for domain generalization (DG) adopt the leave-one-out strategy as a compromise on the limited number of domains.
no code implementations • 27 Mar 2022 • Xingxuan Zhang, Zekai Xu, Renzhe Xu, Jiashuo Liu, Peng Cui, Weitao Wan, Chong Sun, Chen Li
Despite the striking performance achieved by modern detectors when training and test data are sampled from the same or similar distribution, the generalization ability of detectors under unknown distribution shifts remains hardly studied.
no code implementations • 26 Mar 2022 • Sha Yuan, Hanyu Zhao, Shuai Zhao, Jiahong Leng, Yangxiao Liang, Xiaozhi Wang, Jifan Yu, Xin Lv, Zhou Shao, Jiaao He, Yankai Lin, Xu Han, Zhenghao Liu, Ning Ding, Yongming Rao, Yizhao Gao, Liang Zhang, Ming Ding, Cong Fang, Yisen Wang, Mingsheng Long, Jing Zhang, Yinpeng Dong, Tianyu Pang, Peng Cui, Lingxiao Huang, Zheng Liang, HuaWei Shen, HUI ZHANG, Quanshi Zhang, Qingxiu Dong, Zhixing Tan, Mingxuan Wang, Shuo Wang, Long Zhou, Haoran Li, Junwei Bao, Yingwei Pan, Weinan Zhang, Zhou Yu, Rui Yan, Chence Shi, Minghao Xu, Zuobai Zhang, Guoqiang Wang, Xiang Pan, Mengjie Li, Xiaoyu Chu, Zijun Yao, Fangwei Zhu, Shulin Cao, Weicheng Xue, Zixuan Ma, Zhengyan Zhang, Shengding Hu, Yujia Qin, Chaojun Xiao, Zheni Zeng, Ganqu Cui, Weize Chen, Weilin Zhao, Yuan YAO, Peng Li, Wenzhao Zheng, Wenliang Zhao, Ziyi Wang, Borui Zhang, Nanyi Fei, Anwen Hu, Zenan Ling, Haoyang Li, Boxi Cao, Xianpei Han, Weidong Zhan, Baobao Chang, Hao Sun, Jiawen Deng, Chujie Zheng, Juanzi Li, Lei Hou, Xigang Cao, Jidong Zhai, Zhiyuan Liu, Maosong Sun, Jiwen Lu, Zhiwu Lu, Qin Jin, Ruihua Song, Ji-Rong Wen, Zhouchen Lin, LiWei Wang, Hang Su, Jun Zhu, Zhifang Sui, Jiajun Zhang, Yang Liu, Xiaodong He, Minlie Huang, Jian Tang, Jie Tang
With the rapid development of deep learning, training Big Models (BMs) for multiple downstream tasks has become a popular paradigm.
1 code implementation • 11 Mar 2022 • Yong Lin, Shengyu Zhu, Lu Tan, Peng Cui
When data are divided into distinct environments according to the heterogeneity, recent invariant learning methods have proposed to learn robust and invariant models based on this environment partition.
1 code implementation • 9 Feb 2022 • Renzhe Xu, Xingxuan Zhang, Peng Cui, Bo Li, Zheyan Shen, Jiazheng Xu
Personalized pricing is a business strategy to charge different prices to individual consumers based on their characteristics and behaviors.
1 code implementation • 8 Feb 2022 • Yue He, Zimu Wang, Peng Cui, Hao Zou, Yafeng Zhang, Qiang Cui, Yong Jiang
Despite the tremendous development of recommender systems driven by recent advances in machine learning, current recommender systems remain vulnerable to distribution shifts of users and items in realistic scenarios, leading to sharp performance declines in testing environments.
no code implementations • 5 Feb 2022 • Xiangmeng Wang, Qian Li, Dianer Yu, Peng Cui, Zhichao Wang, Guandong Xu
Traditional recommendation models trained on observational interaction data have generated large impacts in a wide range of applications, but they face bias problems that obscure users' true intent and thus deteriorate recommendation effectiveness.
no code implementations • 23 Dec 2021 • Ziwei Zhang, Xin Wang, Zeyang Zhang, Peng Cui, Wenwu Zhu
Based on the experimental results, we advocate that TinvNN should be considered a new starting point and an essential baseline for further studies of transformation-invariant geometric deep learning.
no code implementations • NeurIPS 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. testing data.
no code implementations • 20 Nov 2021 • Shaohua Fan, Xiao Wang, Chuan Shi, Peng Cui, Bai Wang
Graph Neural Networks (GNNs) are proposed without considering the agnostic distribution shifts between training and testing graphs, which degrades the generalization ability of GNNs in Out-Of-Distribution (OOD) settings.
1 code implementation • 3 Nov 2021 • Renzhe Xu, Xingxuan Zhang, Zheyan Shen, Tong Zhang, Peng Cui
Afterward, we prove that under ideal conditions, independence-driven importance weighting algorithms could identify the variables in this set.
no code implementations • 3 Nov 2021 • Ke Tu, Peng Cui, Daixin Wang, Zhiqiang Zhang, Jun Zhou, Yuan Qi, Wenwu Zhu
Knowledge graphs are generally incorporated into recommender systems to improve overall performance.
1 code implementation • 24 Oct 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
The ability to generalize under distributional shifts is essential to reliable machine learning, while models optimized with empirical risk minimization usually fail on non-i.i.d. testing data.
no code implementations • 22 Oct 2021 • Peng Cui, Dongyao Hu, Le Hu
Question answering (QA) is a high-level ability of natural language processing.
no code implementations • Findings (EMNLP) 2021 • Peng Cui, Le Hu
A critical point of multi-document summarization (MDS) is to learn the relations among various documents.
no code implementations • 31 Aug 2021 • Zheyan Shen, Jiashuo Liu, Yue He, Xingxuan Zhang, Renzhe Xu, Han Yu, Peng Cui
Classic machine learning methods are built on the i.i.d. assumption that training and testing data are independent and identically distributed.
no code implementations • CVPR 2022 • Xingxuan Zhang, Linjun Zhou, Renzhe Xu, Peng Cui, Zheyan Shen, Haoxin Liu
Domain generalization (DG) aims to help models trained on a set of source domains generalize better on unseen target domains.
no code implementations • 30 Jun 2021 • Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li
In this paper, we propose a novel Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set and conduct differentiated robustness optimization, where covariates are differentiated according to the stability of their correlations with the target.
no code implementations • 30 Jun 2021 • Yang Li, Yadan Luo, Zheng Zhang, Shazia W. Sadiq, Peng Cui
It aims at suggesting the next POI to a user in spatial and temporal context, which is a practical yet challenging task in various applications.
1 code implementation • NAACL 2021 • Peng Cui, Le Hu
Neural-based summarization models suffer from the length limitation of the text encoder.
no code implementations • 26 May 2021 • Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Xin Wang, Wenwu Zhu, Junzhou Huang
We investigate the theoretical connections between graph signal processing and graph embedding models and formulate the graph embedding model as a general graph signal process with a corresponding graph filter.
1 code implementation • 9 May 2021 • Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen
In this paper, we propose Heterogeneous Risk Minimization (HRM) framework to achieve joint learning of latent heterogeneity among the data and invariant relationship, which leads to stable prediction despite distributional shifts.
2 code implementations • CVPR 2021 • Xingxuan Zhang, Peng Cui, Renzhe Xu, Linjun Zhou, Yue He, Zheyan Shen
Approaches based on deep neural networks have achieved striking performance when testing data and training data share similar distribution, but can significantly fail otherwise.
Ranked #18 on Domain Generalization on VLCS
no code implementations • 7 Apr 2021 • Kai Wang, Zhene Zou, Qilin Deng, Runze Wu, Jianrong Tao, Changjie Fan, Liang Chen, Peng Cui
As a part of the value function, free from the sparse and high-variance reward signals, a high-capacity reward-independent world model is trained to simulate complex environmental dynamics under a certain goal.
Model-based Reinforcement Learning
Recommendation Systems
no code implementations • 28 Mar 2021 • Peng Cui, Zhijie Deng, WenBo Hu, Jun Zhu
It is critical yet challenging for deep learning models to properly characterize uncertainty that is pervasive in real-world environments.
no code implementations • 7 Feb 2021 • Ruobing Xie, Qi Liu, Shukai Liu, Ziwei Zhang, Peng Cui, Bo Zhang, Leyu Lin
In this paper, we propose a novel Heterogeneous graph neural network framework for diversified recommendation (GraphDR) in matching to improve both recommendation accuracy and diversity.
no code implementations • 28 Jan 2021 • Meiqi Zhu, Xiao Wang, Chuan Shi, Houye Ji, Peng Cui
Graph Neural Networks (GNNs) have received considerable attention on graph-structured data learning for a wide variety of tasks.
no code implementations • 1 Jan 2021 • Xingxuan Zhang, Peng Cui, Renzhe Xu, Yue He, Linjun Zhou, Zheyan Shen
We propose to address this problem by removing the dependencies between features via reweighting training samples, which results in a more balanced distribution and helps deep models get rid of spurious correlations and, in turn, concentrate more on the true connection between features and labels.
no code implementations • NeurIPS 2020 • Hao Zou, Peng Cui, Bo Li, Zheyan Shen, Jianxin Ma, Hongxia Yang, Yue He
Estimating counterfactual outcome of different treatments from observational data is an important problem to assist decision making in a variety of fields.
1 code implementation • 23 Oct 2020 • Hande Dong, Jiawei Chen, Fuli Feng, Xiangnan He, Shuxian Bi, Zhaolin Ding, Peng Cui
The original design of Graph Convolution Network (GCN) couples feature transformation and neighborhood aggregation for node representation learning.
no code implementations • 15 Oct 2020 • Peng Cui, Le Hu, Yuanchao Liu
We then employ a novel gated graph attention network to encode the constructed graph for sentence matching.
no code implementations • COLING 2020 • Peng Cui, Le Hu, Yuanchao Liu
They also often ignore the effect of topical information on capturing important contents.
1 code implementation • 5 Sep 2020 • Ziwei Zhang, Chenhao Niu, Peng Cui, Jian Pei, Bo Zhang, Wenwu Zhu
Graph neural networks (GNNs) are emerging machine learning models on graphs.
no code implementations • CVPR 2022 • Linjun Zhou, Peng Cui, Yinan Jiang, Shiqiang Yang
In this paper, we propose a novel setting of transferable black-box attack: attackers may use external information from a pre-trained model with available network parameters, however, different from previous studies, no additional training data is permitted to further change or tune the pre-trained model.
1 code implementation • 23 Aug 2020 • Jianxin Ma, Chang Zhou, Hongxia Yang, Peng Cui, Xin Wang, Wenwu Zhu
There exist two challenges: i) reconstructing a future sequence containing many behaviors is exponentially harder than reconstructing a single next behavior, which can lead to difficulty in convergence, and ii) the sequence of all future behaviors can involve many intentions, not all of which may be predictable from the sequence of earlier behaviors.
1 code implementation • SIGKDD International Conference on Knowledge Discovery & Data Mining 2020 • Linxia Gong, Xiaochuan Feng, Dezhi Ye, Hao Li, Runze Wu, Jianrong Tao, Changjie Fan, Peng Cui
OptMatch contains an offline learning stage and an online planning stage.
no code implementations • 5 Jul 2020 • Xiao Wang, Meiqi Zhu, Deyu Bo, Peng Cui, Chuan Shi, Jian Pei
We tackle the challenge and propose Adaptive Multi-channel Graph Convolutional Networks for semi-supervised classification (AM-GCN).
1 code implementation • 18 Jun 2020 • Renzhe Xu, Peng Cui, Kun Kuang, Bo Li, Linjun Zhou, Zheyan Shen, Wei Cui
In practice, there frequently exist a certain set of variables we term as fair variables, which are pre-decision covariates such as users' choices.
no code implementations • NeurIPS 2020 • Peng Cui, Wen-Bo Hu, Jun Zhu
Accurate quantification of uncertainty is crucial for real-world applications of machine learning.
no code implementations • 9 Jun 2020 • Kun Kuang, Bo Li, Peng Cui, Yue Liu, Jianrong Tao, Yueting Zhuang, Fei Wu
To address this problem, assuming that the relationships between causal variables and the response variable are invariant across data, we propose a conditional independence test based algorithm to separate those causal variables with a seed variable as a prior, and adopt them for stable prediction.
no code implementations • 8 Jun 2020 • Ziwei Zhang, Peng Cui, Jian Pei, Xin Wang, Wenwu Zhu
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
no code implementations • 8 Jun 2020 • Jiashuo Liu, Zheyan Shen, Peng Cui, Linjun Zhou, Kun Kuang, Bo Li, Yishi Lin
Machine learning algorithms with empirical risk minimization are vulnerable under distributional shifts due to the greedy adoption of all the correlations found in training data.
1 code implementation • 2020 • Mingdong Ou, Peng Cui, Jian Pei, Ziwei Zhang, Wenwu Zhu
In particular, we develop a novel graph embedding algorithm, High-Order Proximity preserved Embedding (HOPE for short), which is scalable to preserve high-order proximities of large scale graphs and capable of capturing the asymmetric transitivity.
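One high-order, asymmetric proximity that HOPE can preserve is the Katz proximity; below is a minimal sketch of its truncated power series in plain Python. HOPE itself then factorizes this matrix with a generalized SVD to obtain source and target embeddings, which is omitted here; the decay `beta` and truncation depth are illustrative assumptions.

```python
def katz_proximity(A, beta=0.1, terms=20):
    """Katz proximity S = sum_{k>=1} (beta*A)^k, computed by a truncated
    power series. Its asymmetry (S[i][j] != S[j][i] on directed graphs)
    is what lets HOPE capture asymmetric transitivity."""
    n = len(A)

    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    P = [[beta * A[i][j] for j in range(n)] for i in range(n)]  # beta * A
    S = [row[:] for row in P]       # running sum, starts at (beta*A)^1
    term = [row[:] for row in P]    # current power of beta*A
    for _ in range(terms - 1):
        term = matmul(term, P)
        S = [[S[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return S

# Directed path 0 -> 1 -> 2: proximity is asymmetric and transitive.
A = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]
S = katz_proximity(A)
```

On this toy graph, `S[0][1]` is positive while `S[1][0]` is zero, and `S[0][2]` is positive purely through the two-hop path, illustrating the asymmetric transitivity the snippet above refers to.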
no code implementations • CVPR 2020 • Linjun Zhou, Peng Cui, Xu Jia, Shiqiang Yang, Qi Tian
Few-shot learning has attracted intensive research attention in recent years.
1 code implementation • 28 Feb 2020 • Daixin Wang, Jianbin Lin, Peng Cui, Quanhui Jia, Zhen Wang, Yanming Fang, Quan Yu, Jun Zhou, Shuang Yang, Yuan Qi
Additionally, in the network only very few of the users are labelled, which also poses a great challenge for utilizing only labeled data to achieve satisfactory performance on fraud detection.
2 code implementations • 5 Feb 2020 • Deyu Bo, Xiao Wang, Chuan Shi, Meiqi Zhu, Emiao Lu, Peng Cui
The strength of deep clustering methods is to extract useful representations from the data itself, rather than from the structure of the data, which receives scarce attention in representation learning.
no code implementations • 31 Jan 2020 • Kun Kuang, Ruoxuan Xiong, Peng Cui, Susan Athey, Bo Li
Then, these weights are used in the weighted regression to improve the accuracy of estimating the effect of each variable, thus helping to improve the stability of prediction across unknown test data.
no code implementations • 2 Jan 2020 • Wenwu Zhu, Xin Wang, Peng Cui
Mining graph data has become a popular research topic in computer science and has been widely studied in both academia and industry, given the increasing amount of network data in recent years.
no code implementations • 28 Nov 2019 • Zheyan Shen, Peng Cui, Tong Zhang, Kun Kuang
We consider the problem of learning linear prediction models with model misspecification bias.
1 code implementation • 20 Nov 2019 • Guanglin Niu, Yongfei Zhang, Bo Li, Peng Cui, Si Liu, Jingyang Li, Xiaowei Zhang
Representation learning on a knowledge graph (KG) is to embed entities and relations of a KG into low-dimensional continuous vector spaces.
no code implementations • NeurIPS 2019 • Jianxin Ma, Chang Zhou, Peng Cui, Hongxia Yang, Wenwu Zhu
Our approach achieves macro disentanglement by inferring the high-level concepts associated with user intentions (e.g., to buy a shirt or a cellphone), while capturing the preference of a user regarding the different concepts separately.
1 code implementation • 4 Aug 2019 • Heng Chang, Yu Rong, Tingyang Xu, Wenbing Huang, Honglei Zhang, Peng Cui, Wenwu Zhu, Junzhou Huang
To this end, we begin by investigating the theoretical connections between graph signal processing and graph embedding models in a principled way and formulate the graph embedding model as a general graph signal process with corresponding graph filter.
no code implementations • 7 Jun 2019 • Yue He, Zheyan Shen, Peng Cui
The experimental results demonstrate that NICO can well support the training of a ConvNet model from scratch, and a batch balancing module can help ConvNets perform better in Non-I.I.D.
2 code implementations • WWW 2019 2019 • Xiao Wang, Houye Ji, Chuan Shi, Bai Wang, Peng Cui, P. Yu, Yanfang Ye
With the learned importance from both node-level and semantic-level attention, the importance of node and meta-path can be fully considered.
Ranked #1 on Heterogeneous Node Classification on DBLP (PACT) 14k
Social and Information Networks
no code implementations • 1 Jan 2019 • Shengze Yu, Xin Wang, Wenwu Zhu, Peng Cui, Jingdong Wang
However, there remain two unsolved challenges: i) there exist inconsistencies in cross-platform association due to platform-specific disparity, and ii) data from distinct platforms may have different semantic granularities.
1 code implementation • 11 Dec 2018 • Ziwei Zhang, Peng Cui, Wenwu Zhu
Recently, substantial research efforts have been devoted to applying deep learning methods to graphs, resulting in beneficial advances in graph analysis techniques.
1 code implementation • 25 Sep 2018 • Yadan Luo, Zi Huang, Yang Li, Fumin Shen, Yang Yang, Peng Cui
Hashing techniques are in great demand for a wide range of real-world applications such as image retrieval and network compression.
no code implementations • 16 Jun 2018 • Kun Kuang, Ruoxuan Xiong, Peng Cui, Susan Athey, Bo Li
In this paper, we propose a novel Deep Global Balancing Regression (DGBR) algorithm to jointly optimize a deep auto-encoder model for feature selection and a global balancing model for stable prediction across unknown environments.
2 code implementations • 7 May 2018 • Ziwei Zhang, Peng Cui, Haoyang Li, Xiao Wang, Wenwu Zhu
Network embedding, which learns low-dimensional vector representation for nodes in the network, has attracted considerable research attention recently.
1 code implementation • 28 Nov 2017 • Ke Tu, Peng Cui, Xiao Wang, Fei Wang, Wenwu Zhu
These hyper-networks pose great challenges to existing network embedding methods when the hyperedges are indecomposable, that is to say, any subset of nodes in a hyperedge cannot form another hyperedge.
Social and Information Networks
1 code implementation • 27 Nov 2017 • Ziwei Zhang, Peng Cui, Jian Pei, Xiao Wang, Wenwu Zhu
By setting a maximum tolerated error as a threshold, we can trigger SVD restart automatically when the margin exceeds this threshold. We prove that the time complexity of our method is linear with respect to the number of local dynamic changes, and our method is general across different types of dynamic networks.
Social and Information Networks
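The restart policy described in the entry above can be sketched as a simple accumulator loop; `error_of` is a hypothetical stand-in for the paper's per-change error margin bound, and the tolerance value is illustrative:

```python
def process_changes(changes, max_tolerated_error=1.0):
    """Sketch of a threshold-triggered restart policy: accumulate a
    per-change error margin and trigger a full SVD recomputation only
    when the margin exceeds the maximum tolerated error."""
    restarts = 0
    margin = 0.0
    for change in changes:
        margin += error_of(change)        # update accumulated error bound
        if margin > max_tolerated_error:  # threshold exceeded
            restarts += 1                 # (recompute the SVD from scratch here)
            margin = 0.0                  # fresh factorization: reset the bound
    return restarts

def error_of(change):
    # Hypothetical fixed margin contributed by each local dynamic change.
    return 0.3

n_restarts = process_changes(range(10))
```

Because each full recomputation resets the accumulated margin, the total number of restarts scales with the accumulated dynamic change rather than with the number of timestamps, which is the point of the linear-time claim above.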
no code implementations • 23 Nov 2017 • Peng Cui, Xiao Wang, Jian Pei, Wenwu Zhu
Network embedding assigns nodes in a network to low-dimensional representations and effectively preserves the network structure.
Social and Information Networks
no code implementations • CVPR 2019 • Linjun Zhou, Peng Cui, Shiqiang Yang, Wenwu Zhu, Qi Tian
We then propose an out-of-sample embedding method to learn the embedding of a new class represented by a few samples through its visual analogy with base classes and derive the classification parameters for the new class.
no code implementations • 22 Aug 2017 • Zheyan Shen, Peng Cui, Kun Kuang, Bo Li, Peixuan Chen
However, this ideal assumption is often violated in real applications, where selection bias may arise between training and testing process.
2 code implementations • AAAI 2017 • Xiao Wang, Peng Cui, Jing Wang, Jian Pei, Wenwu Zhu, Shiqiang Yang
While previous network embedding methods primarily preserve the microscopic structure, such as the first- and second-order proximities of nodes, the mesoscopic community structure, which is one of the most prominent features of networks, is largely ignored.
no code implementations • 27 May 2015 • Linyun Yu, Peng Cui, Fei Wang, Chaoming Song, Shiqiang Yang
As cascades are typical dynamic processes, it is always interesting and important to predict the cascade size at any time, or to predict the time when a cascade will reach a certain size (e.g., a threshold for outbreak).
Social and Information Networks
Physics and Society