Search Results for author: Kaiwen Zhou

Found 16 papers, 5 papers with code

An Adaptive Incremental Gradient Method With Support for Non-Euclidean Norms

no code implementations 28 Apr 2022 Binghui Xie, Chenhan Jin, Kaiwen Zhou, James Cheng, Wei Meng

Stochastic variance reduced methods have shown strong performance in solving finite-sum problems.
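
As a point of reference for this entry, here is a minimal sketch of plain SVRG on a least-squares finite sum. It is not the adaptive, non-Euclidean method proposed in the paper; the data, step size, and epoch count below are illustrative choices.

```python
import numpy as np

def svrg(A, b, lr=0.01, epochs=50, seed=0):
    """Minimal SVRG for f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        # Snapshot point and its full gradient, computed once per epoch.
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n
        for _ in range(n):
            i = rng.integers(n)
            g_i = (A[i] @ x - b[i]) * A[i]            # gradient of f_i at x
            g_i_snap = (A[i] @ x_snap - b[i]) * A[i]  # gradient of f_i at the snapshot
            # Variance-reduced stochastic gradient step.
            x -= lr * (g_i - g_i_snap + full_grad)
    return x

# Toy usage: recover a random linear model from noiseless observations.
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 10))
x_true = rng.normal(size=10)
print(np.linalg.norm(svrg(A, A @ x_true) - x_true))  # should be close to zero
```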

FedVLN: Privacy-preserving Federated Vision-and-Language Navigation

no code implementations 28 Mar 2022 Kaiwen Zhou, Xin Eric Wang

Data privacy is a central problem for embodied agents that can perceive the environment, communicate with humans, and act in the real world.

Vision and Language Navigation

Accelerating Perturbed Stochastic Iterates in Asynchronous Lock-Free Optimization

no code implementations 30 Sep 2021 Kaiwen Zhou, Anthony Man-Cho So, James Cheng

We show that stochastic acceleration can be achieved under the perturbed iterate framework (Mania et al., 2017) in asynchronous lock-free optimization, which leads to the optimal incremental gradient complexity for finite-sum objectives.

Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack

no code implementations 29 Sep 2021 Ruize Gao, Jiongxiao Wang, Kaiwen Zhou, Feng Liu, Binghui Xie, Gang Niu, Bo Han, James Cheng

AutoAttack (AA) has been the most reliable method for evaluating adversarial robustness when considerable computational resources are available.

Adversarial Robustness

Local Reweighting for Adversarial Training

no code implementations 30 Jun 2021 Ruize Gao, Feng Liu, Kaiwen Zhou, Gang Niu, Bo Han, James Cheng

However, when tested on attacks different from the one simulated during training, robustness may drop significantly (e.g., to worse than with no reweighting at all).

Practical Schemes for Finding Near-Stationary Points of Convex Finite-Sums

no code implementations NeurIPS 2021 Kaiwen Zhou, Lai Tian, Anthony Man-Cho So, James Cheng

In convex optimization, the problem of finding near-stationary points has not been adequately studied yet, unlike other optimality measures such as the function value.

Boosting First-Order Methods by Shifting Objective: New Schemes with Faster Worst-Case Rates

1 code implementation NeurIPS 2020 Kaiwen Zhou, Anthony Man-Cho So, James Cheng

Specifically, instead of tackling the original objective directly, we construct a shifted objective function that has the same minimizer as the original objective and encodes both the smoothness and strong convexity of the original objective in an interpolation condition.
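
As background on how a single inequality can encode both constants: for an $L$-smooth, $\mu$-strongly convex function $f$ (with $0 \le \mu < L$), the standard interpolation condition states that for all $x, y$,

$$ f(x) \;\ge\; f(y) + \langle \nabla f(y),\, x - y \rangle + \frac{\mu}{2}\|x - y\|^2 + \frac{1}{2(L - \mu)}\big\|\nabla f(x) - \nabla f(y) - \mu (x - y)\big\|^2, $$

which is exactly the smooth convex inequality applied to the function $f - \frac{\mu}{2}\|\cdot\|^2$ (a function is $L$-smooth and $\mu$-strongly convex iff this shifted function is convex and $(L-\mu)$-smooth). The paper's specific shifted objective and the resulting schemes are not reproduced here.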

Convolutional Embedding for Edit Distance

2 code implementations 31 Jan 2020 Xinyan Dai, Xiao Yan, Kaiwen Zhou, Yuxuan Wang, Han Yang, James Cheng

Edit-distance-based string similarity search has many applications such as spell correction, data de-duplication, and sequence alignment.
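
For background on the task itself (the paper's CNN embedding model is not reproduced here), a minimal sketch of the classic dynamic-programming edit distance that such embeddings are trained to approximate:

```python
def edit_distance(s, t):
    """Levenshtein distance between strings s and t via dynamic programming."""
    m, n = len(s), len(t)
    # dp[j] holds the distance between s[:i] and t[:j] for the current row i.
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev_diag, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            prev_diag, dp[j] = dp[j], min(dp[j] + 1,         # deletion
                                          dp[j - 1] + 1,     # insertion
                                          prev_diag + cost)  # substitution
    return dp[n]

print(edit_distance("kitten", "sitting"))  # 3
```

The quadratic cost of this recurrence on long strings is part of what makes learned embeddings attractive for large-scale similarity search.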

Hyper-Sphere Quantization: Communication-Efficient SGD for Federated Learning

1 code implementation 12 Nov 2019 Xinyan Dai, Xiao Yan, Kaiwen Zhou, Han Yang, Kelvin K. W. Ng, James Cheng, Yu Fan

In particular, at the high compression ratio end, HSQ provides a low per-iteration communication cost of $O(\log d)$, which is favorable for federated learning.

Federated Learning, Quantization
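
As a toy illustration of codebook-based gradient quantization in general, not the HSQ scheme itself (the codebook design and bit budget here are arbitrary, and unlike a practical scheme this version is biased), the sketch below encodes a gradient as one scalar norm plus a single codeword index, so the payload grows with the log of the codebook size rather than with $d$:

```python
import numpy as np

def make_codebook(d, K, seed=0):
    """A shared random codebook of K unit vectors in R^d (an arbitrary toy choice)."""
    rng = np.random.default_rng(seed)
    C = rng.normal(size=(K, d))
    return C / np.linalg.norm(C, axis=1, keepdims=True)

def quantize(g, codebook):
    """Encode g as its norm plus the index of the best-aligned codeword."""
    idx = int(np.argmax(codebook @ g))
    return float(np.linalg.norm(g)), idx   # payload: one float + ~log2(K) bits

def dequantize(norm, idx, codebook):
    return norm * codebook[idx]

d, K = 1024, 4096                          # index costs log2(4096) = 12 bits
codebook = make_codebook(d, K)
g = np.random.default_rng(1).normal(size=d)
norm, idx = quantize(g, codebook)
g_hat = dequantize(norm, idx, codebook)
cos = g @ g_hat / (np.linalg.norm(g) * np.linalg.norm(g_hat))
print(idx, round(cos, 3))                  # crude direction estimate from 12 bits
```

Practical quantizers use structured codebooks and unbiased estimators; the sketch only shows why the per-vector payload can be logarithmic rather than linear in the dimension.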

Amortized Nesterov's Momentum: Robust and Lightweight Momentum for Deep Learning

no code implementations 25 Sep 2019 Kaiwen Zhou, Yanghua Jin, Qinghua Ding, James Cheng

Stochastic Gradient Descent (SGD) with Nesterov's momentum is a widely used optimizer in deep learning, which is observed to have excellent generalization performance.
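
For context, a minimal sketch of SGD with Nesterov's momentum in the reformulation most deep-learning frameworks use (the amortized variant proposed in the paper is not shown; the learning rate and momentum values are illustrative):

```python
import numpy as np

def sgd_nesterov_step(params, grads, velocity, lr=0.01, momentum=0.9):
    """One SGD step with Nesterov momentum, updating params in place."""
    for p, g, v in zip(params, grads, velocity):
        v *= momentum
        v += g                         # v <- momentum * v + g
        p -= lr * (g + momentum * v)   # Nesterov "lookahead" correction

# Toy usage: minimize f(x) = 0.5 * ||x||^2, whose gradient at x is x itself.
x = [np.array([5.0, -3.0])]
v = [np.zeros(2)]
for _ in range(200):
    grads = [x[0].copy()]
    sgd_nesterov_step(x, grads, v)
print(x[0])                            # close to the minimizer at the origin
```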

Norm-Range Partition: A Universal Catalyst for LSH based Maximum Inner Product Search (MIPS)

1 code implementation 22 Oct 2018 Xiao Yan, Xinyan Dai, Jie Liu, Kaiwen Zhou, James Cheng

Recently, locality sensitive hashing (LSH) was shown to be effective for MIPS and several algorithms including $L_2$-ALSH, Sign-ALSH and Simple-LSH have been proposed.
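
As background, a minimal sketch of the Simple-LSH style asymmetric transform that reduces MIPS to angular (cosine) similarity search; the norm-range partitioning proposed in the paper is not reproduced here.

```python
import numpy as np

def transform_data(X):
    """Simple-LSH style data transform: scale to max norm <= 1, then append
    sqrt(1 - ||x||^2) so every transformed point has unit norm."""
    X = X / np.linalg.norm(X, axis=1).max()
    extra = np.sqrt(np.clip(1.0 - (X ** 2).sum(axis=1), 0.0, None))
    return np.hstack([X, extra[:, None]])

def transform_query(q):
    """Query transform: normalize and append a zero coordinate."""
    q = q / np.linalg.norm(q)
    return np.append(q, 0.0)

# After the transform, maximum inner product over X agrees with maximum
# cosine similarity over the transformed (unit-norm) points.
rng = np.random.default_rng(0)
X, q = rng.normal(size=(1000, 16)), rng.normal(size=16)
mips_answer = int(np.argmax(X @ q))
angular_answer = int(np.argmax(transform_data(X) @ transform_query(q)))
print(mips_answer == angular_answer)  # True
```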

ASVRG: Accelerated Proximal SVRG

no code implementations 7 Oct 2018 Fanhua Shang, Licheng Jiao, Kaiwen Zhou, James Cheng, Yan Ren, Yufei Jin

This paper proposes an accelerated proximal stochastic variance reduced gradient (ASVRG) method, in which we design a simple and effective momentum acceleration trick.

A Simple Stochastic Variance Reduced Algorithm with Fast Convergence Rates

no code implementations ICML 2018 Kaiwen Zhou, Fanhua Shang, James Cheng

Recent years have witnessed exciting progress in the study of stochastic variance reduced gradient methods (e.g., SVRG, SAGA), their accelerated variants (e.g., Katyusha), and their extensions to many different settings (e.g., online, sparse, asynchronous, distributed).

Direct Acceleration of SAGA using Sampled Negative Momentum

no code implementations 28 Jun 2018 Kaiwen Zhou

Variance reduction is a simple and effective technique that accelerates convex (or non-convex) stochastic optimization.

Stochastic Optimization
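
For context on the base algorithm, a minimal sketch of plain SAGA on a least-squares finite sum (the sampled negative momentum acceleration from the paper is not included; the data and step size are illustrative):

```python
import numpy as np

def saga(A, b, lr=0.01, iters=20000, seed=0):
    """Plain SAGA for f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    # Table of the most recent per-sample gradients and their running average.
    grad_table = (A @ x - b)[:, None] * A          # gradient of each f_i at x = 0
    grad_avg = grad_table.mean(axis=0)
    for _ in range(iters):
        i = rng.integers(n)
        g_new = (A[i] @ x - b[i]) * A[i]
        # Variance-reduced update: new gradient minus stored one plus the average.
        x -= lr * (g_new - grad_table[i] + grad_avg)
        # Keep the table and its running average in sync.
        grad_avg += (g_new - grad_table[i]) / n
        grad_table[i] = g_new
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 10))
x_true = rng.normal(size=10)
print(np.linalg.norm(saga(A, A @ x_true) - x_true))  # should be close to zero
```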

Guaranteed Sufficient Decrease for Stochastic Variance Reduced Gradient Optimization

no code implementations 26 Feb 2018 Fanhua Shang, Yuanyuan Liu, Kaiwen Zhou, James Cheng, Kelvin K. W. Ng, Yuichi Yoshida

To guarantee sufficient decrease in stochastic optimization, we design a new sufficient decrease criterion, which yields sufficient-decrease versions of stochastic variance reduction algorithms, such as SVRG-SD and SAGA-SD, as a byproduct.

Stochastic Optimization

VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning

1 code implementation 26 Feb 2018 Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, DaCheng Tao, Licheng Jiao

In this paper, we propose a simple variant of the original SVRG, called variance reduced stochastic gradient descent (VR-SGD).
