
no code implementations • 28 Apr 2022 • Binghui Xie, Chenhan Jin, Kaiwen Zhou, James Cheng, Wei Meng

Stochastic variance reduced methods have shown strong performance in solving finite-sum problems.
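The finite-sum setting these methods target can be made concrete with a small sketch. The SVRG-style loop below is illustrative only, assuming a toy least-squares objective; it is not the algorithm of this particular paper.

```python
import numpy as np

def svrg(A, b, lr=0.01, epochs=50, rng=np.random.default_rng(0)):
    """Minimal SVRG-style sketch for the finite-sum least-squares problem
    min_x (1/n) * sum_i (a_i @ x - b_i)^2 (illustrative toy setup)."""
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        # Snapshot point and its full gradient, recomputed once per epoch.
        x_snap = x.copy()
        full_grad = 2 * A.T @ (A @ x_snap - b) / n
        for _ in range(n):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient: per-sample gradient at x,
            # corrected by the snapshot's per-sample gradient.
            g_i = 2 * A[i] * (A[i] @ x - b[i])
            g_snap = 2 * A[i] * (A[i] @ x_snap - b[i])
            x = x - lr * (g_i - g_snap + full_grad)
    return x
```

The correction term `g_i - g_snap + full_grad` is unbiased and its variance shrinks as the iterate approaches the snapshot, which is what enables the strong performance on finite sums.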

1 code implementation • ICLR 2022 • Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, James Cheng

Recently, Graph Injection Attack (GIA) has emerged as a practical attack scenario on Graph Neural Networks (GNNs), where the adversary merely injects a few malicious nodes instead of modifying existing nodes or edges, i.e., Graph Modification Attack (GMA).

no code implementations • 11 Feb 2022 • Yongqiang Chen, Yonggang Zhang, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, James Cheng

Despite recent developments in using the invariance principle from causality to enable out-of-distribution (OOD) generalization on Euclidean data, e.g., images, studies on graph data are limited.

no code implementations • 30 Sep 2021 • Kaiwen Zhou, Anthony Man-Cho So, James Cheng

We show that stochastic acceleration can be achieved under the perturbed iterate framework (Mania et al., 2017) in asynchronous lock-free optimization, which leads to the optimal incremental gradient complexity for finite-sum objectives.

no code implementations • 29 Sep 2021 • Ruize Gao, Jiongxiao Wang, Kaiwen Zhou, Feng Liu, Binghui Xie, Gang Niu, Bo Han, James Cheng

AutoAttack (AA) has been the most reliable method to evaluate adversarial robustness when considerable computational resources are available.

no code implementations • 30 Jun 2021 • Ruize Gao, Feng Liu, Kaiwen Zhou, Gang Niu, Bo Han, James Cheng

However, when tested on attacks different from the given attack simulated in training, the robustness may drop significantly (e.g., even worse than no reweighting).

no code implementations • Proceedings of the 2021 International Conference on Management of Data 2021 • Yidi Wu, Yuntao Gui, Tatiana Jin, James Cheng, Xiao Yan, Peiqi Yin, Yufei Cai, Bo Tang, Fan Yu

Graph neural networks (GNNs) have achieved remarkable performance in many graph analytics tasks such as node classification, link prediction and graph clustering.

no code implementations • NeurIPS 2021 • Kaiwen Zhou, Lai Tian, Anthony Man-Cho So, James Cheng

In convex optimization, the problem of finding near-stationary points has not been adequately studied yet, unlike other optimality measures such as the function value.

no code implementations • Proceedings of the Sixteenth European Conference on Computer Systems 2021 • Yidi Wu, Kaihao Ma, Zhenkun Cai, Tatiana Jin, Boyang Li, Chenguang Zheng, James Cheng, Fan Yu

Graph neural networks (GNNs) have achieved breakthrough performance in graph analytics such as node classification, link prediction and graph clustering.

1 code implementation • Proceedings of the Sixteenth European Conference on Computer Systems 2021 • Zhenkun Cai, Xiao Yan, Yidi Wu, Kaihao Ma, James Cheng, Fan Yu

Graph neural networks (GNNs) have gained increasing popularity in many areas such as e-commerce, social networks and bio-informatics.

no code implementations • IEEE Transactions on Parallel and Distributed Systems 2021 • Yidi Wu, Kaihao Ma, Xiao Yan, Zhi Liu, Zhenkun Cai, Yuzhen Huang, James Cheng, Han Yuan, Fan Yu

We study how to support elasticity, that is, the ability to dynamically adjust the parallelism (i.e., the number of GPUs), for deep neural network (DNN) training in a GPU cluster.

no code implementations • 27 Jan 2021 • Kaili Ma, Haochen Yang, Han Yang, Tatiana Jin, Pengfei Chen, Yongqiang Chen, Barakeel Fanseu Kamhoua, James Cheng

Graph representation learning is an important task with applications in various areas such as online social networks, e-commerce networks, WWW, and semantic webs.

no code implementations • 27 Oct 2020 • Yitong Meng, Jie Liu, Xiao Yan, James Cheng

When a new user just signs up on a website, we usually have no information about them, i.e., no interactions with items, no user profile, and no social links with other users.

1 code implementation • 4 Sep 2020 • Han Yang, Kaili Ma, James Cheng

The graph Laplacian regularization term is usually used in semi-supervised representation learning to provide graph structure information for a model $f(X)$.
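The regularizer in question can be written as $\mathrm{tr}(F^\top L F)$ with $L = D - W$; a minimal sketch (with hypothetical names, not the paper's code) makes the edge-smoothness reading explicit:

```python
import numpy as np

def laplacian_reg(W, F):
    """Graph Laplacian regularizer tr(F^T L F) with L = D - W.
    Equals 0.5 * sum_ij W_ij * ||F_i - F_j||^2, so it penalizes
    representations that differ across strongly connected node pairs."""
    D = np.diag(W.sum(axis=1))   # degree matrix
    L = D - W                    # combinatorial graph Laplacian
    return np.trace(F.T @ L @ F)
```

Here `W` is a symmetric adjacency/weight matrix and `F` stacks the model outputs $f(X)$ row-per-node.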

1 code implementation • 8 Jun 2020 • Guoji Fu, Yifan Hou, Jian Zhang, Kaili Ma, Barakeel Fanseu Kamhoua, James Cheng

This paper aims to provide a theoretical framework to understand GNNs, specifically, spectral graph convolutional networks and graph attention networks, from graph signal denoising perspectives.

1 code implementation • NeurIPS 2020 • Kaiwen Zhou, Anthony Man-Cho So, James Cheng

Specifically, instead of tackling the original objective directly, we construct a shifted objective function that has the same minimizer as the original objective and encodes both the smoothness and strong convexity of the original objective in an interpolation condition.

1 code implementation • ICLR 2020 • Yifan Hou, Jian Zhang, James Cheng, Kaili Ma, Richard T. B. Ma, Hongzhi Chen, Ming-Chang Yang

Graph neural networks (GNNs) have been widely used for representation learning on graph data.

1 code implementation • 16 Apr 2020 • Zhenkun Cai, Kaihao Ma, Xiao Yan, Yidi Wu, Yuzhen Huang, James Cheng, Teng Su, Fan Yu

A good parallelization strategy can significantly improve the efficiency or reduce the cost for the distributed training of deep neural networks (DNNs).

1 code implementation • 18 Feb 2020 • Han Yang, Xiao Yan, Xinyan Dai, Yongqiang Chen, James Cheng

In this paper, we propose self-enhanced GNN (SEG), which improves the quality of the input data using the outputs of existing GNN models for better performance on semi-supervised node classification.

2 code implementations • 31 Jan 2020 • Xinyan Dai, Xiao Yan, Kaiwen Zhou, Yuxuan Wang, Han Yang, James Cheng

Edit-distance-based string similarity search has many applications such as spell correction, data de-duplication, and sequence alignment.
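The edit distance underlying such search is the classic Levenshtein distance; a standard dynamic-programming sketch (not the paper's indexed search algorithm) is:

```python
def edit_distance(s, t):
    """Levenshtein distance via dynamic programming with a rolling row:
    minimum number of insertions, deletions, and substitutions turning s into t."""
    m, n = len(s), len(t)
    dp = list(range(n + 1))  # row for the empty prefix of s
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                      # delete s[i-1]
                        dp[j - 1] + 1,                  # insert t[j-1]
                        prev + (s[i - 1] != t[j - 1]))  # substitute (or match)
            prev = cur
    return dp[n]
```

For example, `edit_distance("kitten", "sitting")` is 3 (two substitutions and one insertion). Similarity search engines build indexes to avoid running this O(mn) computation against every string.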

2 code implementations • 12 Nov 2019 • Xinyan Dai, Xiao Yan, Kelvin K. W. Ng, Jie Liu, James Cheng

In this paper, we present a new angle to analyze the quantization error, which decomposes the quantization error into norm error and direction error.
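One natural reading of such a norm/direction split can be sketched as follows; the exact definitions here are assumptions for illustration, not necessarily those used in the paper:

```python
import numpy as np

def decompose_quantization_error(x, x_hat):
    """Hedged sketch: split the discrepancy between a vector x and its
    quantized version x_hat into a norm component (magnitude mismatch)
    and a direction component (angular mismatch, as 1 - cosine similarity)."""
    norm_error = abs(np.linalg.norm(x) - np.linalg.norm(x_hat))
    cos = x @ x_hat / (np.linalg.norm(x) * np.linalg.norm(x_hat))
    direction_error = 1.0 - cos
    return norm_error, direction_error
```

A quantizer that preserves direction but rescales a vector shows up entirely in the first component, and vice versa.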

1 code implementation • 12 Nov 2019 • Xinyan Dai, Xiao Yan, Kaiwen Zhou, Han Yang, Kelvin K. W. Ng, James Cheng, Yu Fan

In particular, at the high compression ratio end, HSQ provides a low per-iteration communication cost of $O(\log d)$, which is favorable for federated learning.

no code implementations • 30 Sep 2019 • Jie Liu, Xiao Yan, Xinyan Dai, Zhirong Li, James Cheng, Ming-Chang Yang

Then we explain the good performance of ip-NSW as matching the norm bias of the MIPS problem: large-norm items have large in-degrees in the ip-NSW proximity graph, and a walk on the graph spends the majority of its computation on these items, thus effectively avoiding unnecessary computation on small-norm items.

no code implementations • 25 Sep 2019 • Kaiwen Zhou, Yanghua Jin, Qinghua Ding, James Cheng

Stochastic Gradient Descent (SGD) with Nesterov's momentum is a widely used optimizer in deep learning, which is observed to have excellent generalization performance.
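One common formulation of the optimizer in question evaluates the gradient at a look-ahead point; the toy sketch below uses a deterministic gradient standing in for the stochastic one:

```python
import numpy as np

def sgd_nesterov(grad, theta, lr=0.01, momentum=0.9, steps=100):
    """Sketch of SGD with Nesterov's momentum (one common formulation):
    the gradient is evaluated at the look-ahead point theta + mu * v."""
    v = np.zeros_like(theta)
    for _ in range(steps):
        g = grad(theta + momentum * v)  # look-ahead gradient
        v = momentum * v - lr * g       # velocity update
        theta = theta + v
    return theta
```

On a simple quadratic the look-ahead correction damps the oscillations that plain heavy-ball momentum would exhibit.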

no code implementations • 10 Sep 2019 • Yitong Meng, Xinyan Dai, Xiao Yan, James Cheng, Weiwen Liu, Benben Liao, Jun Guo, Guangyong Chen

Collaborative filtering, a widely-used recommendation technique, predicts a user's preference by aggregating the ratings from similar users.
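The aggregation described can be sketched as user-based collaborative filtering with cosine similarity; names and the zero-means-missing convention below are assumptions for illustration:

```python
import numpy as np

def predict_rating(R, user, item):
    """Hedged sketch of user-based collaborative filtering: predict
    R[user, item] as a similarity-weighted average of other users'
    ratings for that item (zeros in R denote missing ratings)."""
    sims = []
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue
        # Cosine similarity between the two users' rating vectors.
        num = R[user] @ R[other]
        den = np.linalg.norm(R[user]) * np.linalg.norm(R[other])
        sims.append((num / den, R[other, item]))
    if not sims:
        return 0.0
    total = sum(s for s, _ in sims)
    return sum(s * r for s, r in sims) / total
```

Users whose rating vectors point in a similar direction contribute more weight to the prediction.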

no code implementations • Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining 2019 • Yifan Hou, Hongzhi Chen, Changji Li, James Cheng, Ming-Chang Yang

Representation learning on graphs, also called graph embedding, has demonstrated its significant impact on a series of machine learning applications such as classification, prediction and recommendation.

1 code implementation • 22 Oct 2018 • Xiao Yan, Xinyan Dai, Jie Liu, Kaiwen Zhou, James Cheng

Recently, locality sensitive hashing (LSH) was shown to be effective for MIPS and several algorithms including $L_2$-ALSH, Sign-ALSH and Simple-LSH have been proposed.
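The Simple-LSH reduction mentioned here rests on an asymmetric transform; a sketch (with hypothetical function names) of the standard construction:

```python
import numpy as np

def simple_lsh_transform(X, Q):
    """Sketch of the Simple-LSH asymmetric transform (Neyshabur & Srebro):
    after scaling items to norm <= 1, append sqrt(1 - ||x||^2) to each item
    and 0 to each (normalized) query. Transformed items are unit vectors
    and inner products are preserved up to a constant, so MIPS reduces to
    angular similarity search."""
    scale = np.linalg.norm(X, axis=1).max()
    Xs = X / scale
    aug = np.sqrt(np.maximum(0.0, 1.0 - np.sum(Xs**2, axis=1, keepdims=True)))
    P_X = np.hstack([Xs, aug])  # unit-norm items
    Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
    P_Q = np.hstack([Qn, np.zeros((Q.shape[0], 1))])
    return P_X, P_Q
```

Because the extra coordinate of every query is zero, `P_X @ P_Q.T` equals the original inner products divided by the scale, so the MIPS ranking is unchanged while classical angular LSH becomes applicable.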

no code implementations • 11 Oct 2018 • Fanhua Shang, James Cheng, Yuanyuan Liu, Zhi-Quan Luo, Zhouchen Lin

The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven to be effective priors for many applications such as background modeling, photometric stereo, and image alignment.

no code implementations • 7 Oct 2018 • Fanhua Shang, Licheng Jiao, Kaiwen Zhou, James Cheng, Yan Ren, Yufei Jin

This paper proposes an accelerated proximal stochastic variance reduced gradient (ASVRG) method, in which we design a simple and effective momentum acceleration trick.

1 code implementation • NeurIPS 2018 • Xiao Yan, Jinfeng Li, Xinyan Dai, Hongzhi Chen, James Cheng

Neyshabur and Srebro proposed Simple-LSH, which is the state-of-the-art hashing method for maximum inner product search (MIPS) with performance guarantee.

no code implementations • ICML 2018 • Kaiwen Zhou, Fanhua Shang, James Cheng

Recent years have witnessed exciting progress in the study of stochastic variance reduced gradient methods (e.g., SVRG, SAGA), their accelerated variants (e.g., Katyusha), and their extensions in many different settings (e.g., online, sparse, asynchronous, distributed).

no code implementations • 28 Feb 2018 • Fanhua Shang, Yuanyuan Liu, James Cheng

The Schatten quasi-norm was introduced to bridge the gap between the trace norm and rank function.

no code implementations • 26 Feb 2018 • Fanhua Shang, Yuanyuan Liu, Kaiwen Zhou, James Cheng, Kelvin K. W. Ng, Yuichi Yoshida

In order to make sufficient decrease for stochastic optimization, we design a new sufficient decrease criterion, which yields sufficient decrease versions of stochastic variance reduction algorithms such as SVRG-SD and SAGA-SD as a byproduct.

1 code implementation • 26 Feb 2018 • Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, DaCheng Tao, Licheng Jiao

In this paper, we propose a simple variant of the original SVRG, called variance reduced stochastic gradient descent (VR-SGD).

no code implementations • NeurIPS 2017 • Yuanyuan Liu, Fanhua Shang, James Cheng, Hong Cheng, Licheng Jiao

In this paper, we propose an accelerated first-order method for geodesically convex optimization, which generalizes Nesterov's standard accelerated method from Euclidean space to nonlinear Riemannian space.

no code implementations • 11 Jul 2017 • Yuanyuan Liu, Fanhua Shang, James Cheng

Besides having a low per-iteration complexity as existing stochastic ADMM methods, ASVRG-ADMM improves the convergence rate on general convex problems from O(1/T) to O(1/T^2).

no code implementations • 23 Mar 2017 • Fanhua Shang, Yuanyuan Liu, James Cheng, Jiacheng Zhuo

Recently, research on accelerated stochastic gradient descent methods (e.g., SVRG) has made exciting progress (e.g., linear convergence for strongly convex problems).

no code implementations • 20 Mar 2017 • Fanhua Shang, Yuanyuan Liu, James Cheng, Kelvin Kai Wing Ng, Yuichi Yoshida

In order to make sufficient decrease for stochastic optimization, we design a new sufficient decrease criterion, which yields sufficient decrease versions of variance reduction algorithms such as SVRG-SD and SAGA-SD as a byproduct.

no code implementations • 4 Jun 2016 • Fanhua Shang, Yuanyuan Liu, James Cheng

In this paper, we first define two tractable Schatten quasi-norms, i.e., the Frobenius/nuclear hybrid and bi-nuclear quasi-norms, and then prove that they are in essence the Schatten-2/3 and 1/2 quasi-norms, respectively, which lead to the design of very efficient algorithms that only need to update two much smaller factor matrices.

no code implementations • 2 Jun 2016 • Fanhua Shang, Yuanyuan Liu, James Cheng

In this paper, we rigorously prove that for any p, p1, p2>0 satisfying 1/p=1/p1+1/p2, the Schatten-p quasi-norm of any matrix is equivalent to minimizing the product of the Schatten-p1 norm (or quasi-norm) and Schatten-p2 norm (or quasi-norm) of its two factor matrices.
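The stated equivalence can be transcribed in symbols. With $\|X\|_{S_p} = \big(\sum_i \sigma_i^p(X)\big)^{1/p}$ denoting the Schatten-$p$ (quasi-)norm, the sentence above reads roughly as follows (a transcription for orientation, eliding the paper's exact conditions on the factor sizes):

```latex
\|X\|_{S_p} \;=\; \min_{X = U V^{\top}} \; \|U\|_{S_{p_1}} \, \|V\|_{S_{p_2}},
\qquad \text{whenever } \frac{1}{p} = \frac{1}{p_1} + \frac{1}{p_2},\quad p, p_1, p_2 > 0 .
```

This is what lets a nonconvex Schatten quasi-norm objective be replaced by a product of norms of two much smaller factor matrices.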

no code implementations • 26 Dec 2015 • Fanhua Shang, James Cheng, Hong Cheng

We first derive the equivalence relation between the Schatten p-norm (0<p<\infty) of a low multi-linear rank tensor and that of its core tensor.

no code implementations • NeurIPS 2014 • Yuanyuan Liu, Fanhua Shang, Wei Fan, James Cheng, Hong Cheng

Then the Schatten 1-norm of the core tensor is used to replace that of the whole tensor, which leads to a much smaller-scale matrix SNM problem.

no code implementations • 3 Sep 2014 • Fanhua Shang, Yuanyuan Liu, Hanghang Tong, James Cheng, Hong Cheng

In this paper, we propose a scalable, provable structured low-rank matrix factorization method to recover low-rank and sparse matrices from missing and grossly corrupted data, i.e., robust matrix completion (RMC) problems, or incomplete and grossly corrupted measurements, i.e., compressive principal component pursuit (CPCP) problems.

no code implementations • 5 Jul 2014 • Fanhua Shang, Yuanyuan Liu, James Cheng

To address these problems, we first propose a parallel trace norm regularized tensor decomposition method, and formulate it as a convex optimization problem.

no code implementations • 24 Feb 2014 • Linhong Zhu, Aram Galstyan, James Cheng, Kristina Lerman

We further investigate the evolution of user-level sentiments and latent feature vectors in an online framework and devise an efficient online algorithm to sequentially update the clustering of tweets, users and features with newly arrived data.

4 code implementations • 30 May 2012 • Jia Wang, James Cheng

We first improve the existing in-memory algorithm for computing k-truss in networks of moderate size.
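The in-memory computation being improved can be sketched as support-based edge peeling; the k-truss is the maximal subgraph in which every edge participates in at least k-2 triangles. This is a minimal illustration, not the paper's optimized algorithm:

```python
from collections import defaultdict

def k_truss(edge_list, k):
    """Sketch of in-memory k-truss computation by support peeling:
    repeatedly delete any edge contained in fewer than k-2 triangles.
    The surviving edges form the k-truss."""
    adj = defaultdict(set)
    for u, v in edge_list:
        adj[u].add(v)
        adj[v].add(u)
    edges = {frozenset((u, v)) for u, v in edge_list}
    changed = True
    while changed:
        changed = False
        for e in list(edges):
            u, v = tuple(e)
            # Triangle support of edge (u, v) = common neighbors of u and v.
            if len(adj[u] & adj[v]) < k - 2:
                edges.discard(e)
                adj[u].discard(v)
                adj[v].discard(u)
                changed = True
    return edges
```

For example, the complete graph on four vertices is a 4-truss (every edge sits in two triangles) but contains no 5-truss.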


Papers With Code is a free resource with all data licensed under CC-BY-SA.