no code implementations • 28 Apr 2022 • Binghui Xie, Chenhan Jin, Kaiwen Zhou, James Cheng, Wei Meng
Stochastic variance reduced methods have shown strong performance in solving finite-sum problems.
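As a hedged illustration of the finite-sum setting such methods target, here is a minimal Python sketch of plain SVRG on a least-squares objective; the problem, step size, and epoch length are illustrative assumptions, not the configuration studied in this work.

```python
import numpy as np

def svrg_least_squares(A, b, step=0.01, epochs=20, inner=None, seed=0):
    """Minimal SVRG sketch for the finite-sum objective
    f(x) = (1/n) * sum_i (a_i^T x - b_i)^2 / 2 (illustrative only)."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    inner = inner or n            # inner-loop length (illustrative choice)
    x = np.zeros(d)
    for _ in range(epochs):
        snapshot = x.copy()
        # Full gradient at the snapshot point.
        full_grad = A.T @ (A @ snapshot - b) / n
        for _ in range(inner):
            i = rng.integers(n)
            gi_x = A[i] * (A[i] @ x - b[i])            # component gradient at x
            gi_snap = A[i] * (A[i] @ snapshot - b[i])  # component gradient at snapshot
            # Variance-reduced estimator: unbiased, variance shrinks near the optimum.
            x = x - step * (gi_x - gi_snap + full_grad)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 10))
    b = A @ rng.standard_normal(10)
    x_hat = svrg_least_squares(A, b)
    print(np.linalg.norm(A @ x_hat - b))  # residual should be small
```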
no code implementations • 28 Mar 2022 • Kaiwen Zhou, Xin Eric Wang
Data privacy is a central problem for embodied agents that can perceive the environment, communicate with humans, and act in the real world.
no code implementations • 30 Sep 2021 • Kaiwen Zhou, Anthony Man-Cho So, James Cheng
We show that stochastic acceleration can be achieved under the perturbed iterate framework (Mania et al., 2017) in asynchronous lock-free optimization, which leads to the optimal incremental gradient complexity for finite-sum objectives.
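For context, the perturbed iterate framework of Mania et al. (2017) models a lock-free asynchronous update roughly as below; the notation is a simplified sketch rather than the exact scheme analyzed in this paper.

```latex
% Perturbed iterate view of asynchronous SGD (simplified sketch):
% the virtual iterate x_t is updated with a stochastic gradient evaluated at an
% inconsistent read \hat{x}_t of the shared parameters.
\[
x_{t+1} \;=\; x_t \;-\; \eta\,\nabla f_{i_t}(\hat{x}_t),
\qquad
\hat{x}_t \;=\; x_t \;+\; \text{(perturbation caused by delayed or partial writes)} .
\]
```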
no code implementations • 29 Sep 2021 • Ruize Gao, Jiongxiao Wang, Kaiwen Zhou, Feng Liu, Binghui Xie, Gang Niu, Bo Han, James Cheng
AutoAttack (AA) has been the most reliable method for evaluating adversarial robustness when considerable computational resources are available.
no code implementations • 30 Jun 2021 • Ruize Gao, Feng Liu, Kaiwen Zhou, Gang Niu, Bo Han, James Cheng
However, when tested on attacks different from the attack simulated during training, robustness may drop significantly (e.g., even below that of no reweighting).
no code implementations • NeurIPS 2021 • Kaiwen Zhou, Lai Tian, Anthony Man-Cho So, James Cheng
In convex optimization, the problem of finding near-stationary points has not been adequately studied yet, unlike other optimality measures such as the function value.
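For reference, "near-stationarity" is usually measured by the gradient norm; a standard way to contrast the two optimality measures (common notation, not necessarily the paper's exact definitions) is:

```latex
% Two optimality measures for a differentiable convex f with minimizer x^*:
\[
\text{function value:}\quad f(x) - f(x^*) \le \epsilon,
\qquad\qquad
\text{near-stationarity:}\quad \|\nabla f(x)\| \le \epsilon .
\]
```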
1 code implementation • NeurIPS 2020 • Kaiwen Zhou, Anthony Man-Cho So, James Cheng
Specifically, instead of tackling the original objective directly, we construct a shifted objective function that has the same minimizer as the original objective and encodes both the smoothness and strong convexity of the original objective in an interpolation condition.
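As a rough illustration of the idea, not the paper's exact construction: for an $L$-smooth and $\mu$-strongly convex $f$ with minimizer $x^*$, subtracting the strong-convexity quadratic centered at $x^*$ yields a shifted function with the same minimizer.

```latex
% Illustrative shift (assumptions: f is L-smooth and \mu-strongly convex, x^* its minimizer):
\[
h(x) \;=\; f(x) \;-\; \frac{\mu}{2}\,\|x - x^*\|^2,
\qquad
\nabla h(x^*) \;=\; \nabla f(x^*) \;=\; 0,
\]
% so h is convex, (L - \mu)-smooth, and keeps the same minimizer x^*.
```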
2 code implementations • 31 Jan 2020 • Xinyan Dai, Xiao Yan, Kaiwen Zhou, Yuxuan Wang, Han Yang, James Cheng
Edit-distance-based string similarity search has many applications such as spell correction, data de-duplication, and sequence alignment.
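For reference, the underlying primitive is the edit (Levenshtein) distance; the following is a minimal dynamic-programming sketch in Python, not the search algorithm proposed in the paper.

```python
def edit_distance(s: str, t: str) -> int:
    """Levenshtein distance between s and t via dynamic programming."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))          # prev[j] = distance between s[:i-1] and t[:j]
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n]

# Example: strings within a small edit distance are candidate matches in spell correction.
assert edit_distance("kitten", "sitting") == 3
```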
1 code implementation • 12 Nov 2019 • Xinyan Dai, Xiao Yan, Kaiwen Zhou, Han Yang, Kelvin K. W. Ng, James Cheng, Yu Fan
In particular, at the high compression ratio end, HSQ provides a low per-iteration communication cost of $O(\log d)$, which is favorable for federated learning.
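As a hedged illustration of why codebook-style quantization keeps communication logarithmic, the sketch below transmits only a codeword index plus one scalar per gradient; it is a generic vector-quantization toy, not the actual HSQ scheme, and the codebook size and dimensions are assumptions.

```python
import numpy as np

def quantize_to_codebook(grad, codebook):
    """Generic codebook quantization sketch (not the HSQ algorithm itself):
    send only the index of the best-aligned codeword plus the gradient norm,
    i.e. about log2(len(codebook)) bits plus one scalar per iteration."""
    norm = np.linalg.norm(grad)
    directions = codebook / np.linalg.norm(codebook, axis=1, keepdims=True)
    idx = int(np.argmax(directions @ (grad / norm)))   # best-aligned codeword
    return idx, norm

def dequantize(idx, norm, codebook):
    direction = codebook[idx] / np.linalg.norm(codebook[idx])
    return norm * direction

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, K = 64, 128                                     # assumed sizes, illustrative only
    codebook = rng.standard_normal((K, d))
    g = rng.standard_normal(d)
    idx, norm = quantize_to_codebook(g, codebook)
    g_hat = dequantize(idx, norm, codebook)
    print(idx, np.linalg.norm(g - g_hat))
```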
no code implementations • 25 Sep 2019 • Kaiwen Zhou, Yanghua Jin, Qinghua Ding, James Cheng
Stochastic Gradient Descent (SGD) with Nesterov's momentum is a widely used optimizer in deep learning, which is observed to have excellent generalization performance.
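For reference, one common formulation of SGD with Nesterov's momentum (the "look-ahead" form; $\mu$ is the momentum coefficient and $\eta$ the step size, both illustrative):

```latex
% SGD with Nesterov's momentum (one common formulation; \xi_t is the sampled mini-batch):
\[
v_{t+1} \;=\; \mu\, v_t \;-\; \eta\, \nabla f\bigl(x_t + \mu\, v_t;\ \xi_t\bigr),
\qquad
x_{t+1} \;=\; x_t + v_{t+1} .
\]
```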
1 code implementation • 22 Oct 2018 • Xiao Yan, Xinyan Dai, Jie Liu, Kaiwen Zhou, James Cheng
Recently, locality-sensitive hashing (LSH) was shown to be effective for maximum inner product search (MIPS), and several algorithms, including $L_2$-ALSH, Sign-ALSH, and Simple-LSH, have been proposed.
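As a brief, hedged reminder of how one such reduction works, Simple-LSH rescales the database so every item has norm at most 1 and then augments items and queries as below (a sketch of the standard construction, not this paper's contribution):

```latex
% Simple-LSH style reduction of MIPS to angular (cosine) similarity search (sketch),
% assuming \|x\| \le 1 after rescaling and \|q\| = 1 after normalization:
\[
P(x) = \bigl[\,x;\ \sqrt{1 - \|x\|^2}\,\bigr],
\qquad
Q(q) = \bigl[\,q;\ 0\,\bigr],
\qquad
\langle P(x), Q(q)\rangle = \langle x, q\rangle ,
\]
% and since \|P(x)\| = 1, maximizing the inner product becomes maximizing cosine similarity.
```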
no code implementations • 7 Oct 2018 • Fanhua Shang, Licheng Jiao, Kaiwen Zhou, James Cheng, Yan Ren, Yufei Jin
This paper proposes an accelerated proximal stochastic variance reduced gradient (ASVRG) method, in which we design a simple and effective momentum acceleration trick.
no code implementations • ICML 2018 • Kaiwen Zhou, Fanhua Shang, James Cheng
Recent years have witnessed exciting progress in the study of stochastic variance reduced gradient methods (e.g., SVRG, SAGA), their accelerated variants (e.g., Katyusha), and their extensions in many different settings (e.g., online, sparse, asynchronous, distributed).
no code implementations • 28 Jun 2018 • Kaiwen Zhou
Variance reduction is a simple and effective technique that accelerates convex (or non-convex) stochastic optimization.
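The core estimator behind such methods, written in common SVRG-style notation (a general sketch, not specific to this work):

```latex
% Variance-reduced gradient estimator at x, using a snapshot point \tilde{x}:
\[
g \;=\; \nabla f_i(x) \;-\; \nabla f_i(\tilde{x}) \;+\; \nabla f(\tilde{x}),
\qquad
\mathbb{E}_i[g] \;=\; \nabla f(x),
\]
% unbiased, with variance that shrinks as x and \tilde{x} approach the minimizer.
```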
no code implementations • 26 Feb 2018 • Fanhua Shang, Yuanyuan Liu, Kaiwen Zhou, James Cheng, Kelvin K. W. Ng, Yuichi Yoshida
To ensure sufficient decrease in stochastic optimization, we design a new sufficient decrease criterion, which yields sufficient-decrease versions of stochastic variance reduction algorithms, such as SVRG-SD and SAGA-SD, as a byproduct.
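For orientation only, the classical deterministic notion of sufficient decrease is the Armijo condition below; the stochastic criterion designed in the paper differs, so this is just a reference point.

```latex
% Armijo sufficient decrease condition for a step x_{k+1} = x_k + \alpha_k d_k,
% with descent direction d_k and a constant c \in (0, 1):
\[
f(x_{k+1}) \;\le\; f(x_k) \;+\; c\,\alpha_k\, \nabla f(x_k)^{\top} d_k .
\]
```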
1 code implementation • 26 Feb 2018 • Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, DaCheng Tao, Licheng Jiao
In this paper, we propose a simple variant of the original SVRG, called variance reduced stochastic gradient descent (VR-SGD).