no code implementations • 22 Apr 2023 • Yongqiang Chen, Wei Huang, Kaiwen Zhou, Yatao Bian, Bo Han, James Cheng
A common explanation for the failure of out-of-distribution (OOD) generalization is that the model trained with empirical risk minimization (ERM) learns spurious features instead of the desired invariant features.
no code implementations • 30 Jan 2023 • Kaiwen Zhou, Kaizhi Zheng, Connor Pryor, Yilin Shen, Hongxia Jin, Lise Getoor, Xin Eric Wang
Such object navigation tasks usually require large-scale training in visual environments with labeled objects, and the resulting models generalize poorly to novel objects in unknown environments.
no code implementations • 27 Nov 2022 • Yunchao Zhang, Zonglin Di, Kaiwen Zhou, Cihang Xie, Xin Eric Wang
However, since the local data is inaccessible to the server under federated learning, attackers may easily poison the training data of the local client to implant a backdoor in the agent without being noticed.
no code implementations • 28 Aug 2022 • Kaizhi Zheng, Kaiwen Zhou, Jing Gu, Yue Fan, Jialu Wang, Zonglin Di, Xuehai He, Xin Eric Wang
Building a conversational embodied agent to execute real-life tasks has been a long-standing yet quite challenging research goal, as it requires effective human-agent communication, multi-modal understanding, long-range sequential decision making, etc.
no code implementations • 27 Jun 2022 • Chenhan Jin, Kaiwen Zhou, Bo Han, Ming-Chang Yang, James Cheng
In this paper, we resolve this issue and derive the first high-probability bounds for the private stochastic method with clipping.
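As a rough illustration of the clipping mechanism being analyzed (a generic sketch with an assumed threshold, noise scale, and toy loss, not the paper's algorithm or analysis):

```python
import numpy as np

def clipped_sgd_step(w, grad, lr=0.1, clip=1.0, noise_std=0.0, rng=None):
    """One SGD step with gradient clipping and optional Gaussian noise.

    `clip` bounds the gradient norm; `noise_std` adds noise as in
    private (DP-style) training. Both values here are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad = grad * (clip / norm)          # rescale so the norm is at most clip
    if noise_std > 0:
        grad = grad + rng.normal(0.0, noise_std, size=grad.shape)
    return w - lr * grad

# toy usage: minimize 0.5 * ||w||^2 with noisy stochastic gradients
rng = np.random.default_rng(1)
w = np.ones(5)
for _ in range(100):
    g = w + rng.normal(0.0, 0.5, size=w.shape)   # stochastic gradient of the toy loss
    w = clipped_sgd_step(w, g, lr=0.05, clip=1.0, noise_std=0.1, rng=rng)
```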
1 code implementation • 15 Jun 2022 • Ruize Gao, Jiongxiao Wang, Kaiwen Zhou, Feng Liu, Binghui Xie, Gang Niu, Bo Han, James Cheng
AutoAttack (AA) has been the most reliable method for evaluating adversarial robustness when considerable computational resources are available.
2 code implementations • 15 Jun 2022 • Yongqiang Chen, Kaiwen Zhou, Yatao Bian, Binghui Xie, Bingzhe Wu, Yonggang Zhang, Kaili Ma, Han Yang, Peilin Zhao, Bo Han, James Cheng
Recently, there has been a growing surge of interest in enabling machine learning systems to generalize well to Out-of-Distribution (OOD) data.
no code implementations • 28 Apr 2022 • Binghui Xie, Chenhan Jin, Kaiwen Zhou, James Cheng, Wei Meng
Stochastic variance reduced methods have shown strong performance in solving finite-sum problems.
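For reference, these methods target the standard finite-sum objective
$$\min_{x \in \mathbb{R}^d} \; F(x) \;=\; \frac{1}{n} \sum_{i=1}^{n} f_i(x),$$
where each $f_i$ is the loss on the $i$-th training example (the standard formulation, not a detail specific to this paper).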
1 code implementation • 28 Mar 2022 • Kaiwen Zhou, Xin Eric Wang
Data privacy is a central problem for embodied agents that can perceive the environment, communicate with humans, and act in the real world.
no code implementations • 30 Sep 2021 • Kaiwen Zhou, Anthony Man-Cho So, James Cheng
We show that stochastic acceleration can be achieved under the perturbed iterate framework (Mania et al., 2017) in asynchronous lock-free optimization, which leads to the optimal incremental gradient complexity for finite-sum objectives.
no code implementations • 30 Jun 2021 • Ruize Gao, Feng Liu, Kaiwen Zhou, Gang Niu, Bo Han, James Cheng
However, when tested on attacks different from the given attack simulated in training, the robustness may drop significantly (e.g., even worse than no reweighting).
no code implementations • NeurIPS 2021 • Kaiwen Zhou, Lai Tian, Anthony Man-Cho So, James Cheng
In convex optimization, the problem of finding near-stationary points has not yet been adequately studied, unlike other optimality measures such as the function value.
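Concretely, for a convex $f$ with minimizer $x^\star$, the two measures ask for
$$\|\nabla f(\hat{x})\| \le \varepsilon \qquad \text{versus} \qquad f(\hat{x}) - f(x^\star) \le \varepsilon$$
(standard definitions, stated here for context rather than quoted from the paper).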
1 code implementation • NeurIPS 2020 • Kaiwen Zhou, Anthony Man-Cho So, James Cheng
Specifically, instead of tackling the original objective directly, we construct a shifted objective function that has the same minimizer as the original objective and encodes both the smoothness and strong convexity of the original objective in an interpolation condition.
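For background (stated generally, not as the paper's exact construction), an $L$-smooth and $\mu$-strongly convex $f$ satisfies, for all $x, y$,
$$f(x) - f(y) - \langle \nabla f(y), x - y \rangle \;\ge\; \frac{1}{2L}\,\|\nabla f(x) - \nabla f(y)\|^2
\qquad \text{and} \qquad
f(x) - f(y) - \langle \nabla f(y), x - y \rangle \;\ge\; \frac{\mu}{2}\,\|x - y\|^2.$$
An interpolation condition is a single inequality characterizing such functions; the shifted objective described above is designed so that one condition of this kind captures both properties at once.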
2 code implementations • 31 Jan 2020 • Xinyan Dai, Xiao Yan, Kaiwen Zhou, Yuxuan Wang, Han Yang, James Cheng
Edit-distance-based string similarity search has many applications such as spell correction, data de-duplication, and sequence alignment.
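For context, the edit (Levenshtein) distance underlying such search is the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another; a minimal dynamic-programming implementation (illustrative only, not the paper's indexing method) is:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between strings a and b via dynamic programming."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))          # dp[j] = distance between a[:i] and b[:j] for the current row i
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i       # prev holds the old diagonal value dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                        # delete a[i-1]
                dp[j - 1] + 1,                    # insert b[j-1]
                prev + (a[i - 1] != b[j - 1]),    # substitute (or match for free)
            )
            prev = cur
    return dp[n]

assert edit_distance("kitten", "sitting") == 3
```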
1 code implementation • 12 Nov 2019 • Xinyan Dai, Xiao Yan, Kaiwen Zhou, Han Yang, Kelvin K. W. Ng, James Cheng, Yu Fan
In particular, at the high compression ratio end, HSQ provides a low per-iteration communication cost of $O(\log d)$, which is favorable for federated learning.
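As a rough illustration of why norm/direction quantization can be communication-cheap (a generic sketch with a random codebook, not the actual HSQ codebook or encoding), sending only a codeword index from a size-$K$ codebook plus one scalar costs about $\log_2 K$ bits per vector:

```python
import numpy as np

def quantize(grad, codebook):
    """Quantize a gradient to (norm, index of the nearest unit-norm codeword).

    Only the index (about log2(K) bits) and one scalar need to be sent.
    """
    norm = np.linalg.norm(grad)
    direction = grad / (norm + 1e-12)
    idx = int(np.argmax(codebook @ direction))   # nearest codeword by inner product
    return norm, idx

def dequantize(norm, idx, codebook):
    return norm * codebook[idx]

# toy usage with a random unit-norm codebook (illustrative assumption)
rng = np.random.default_rng(0)
d, K = 16, 256
codebook = rng.normal(size=(K, d))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)
g = rng.normal(size=d)
g_hat = dequantize(*quantize(g, codebook), codebook)
```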
no code implementations • 25 Sep 2019 • Kaiwen Zhou, Yanghua Jin, Qinghua Ding, James Cheng
Stochastic Gradient Descent (SGD) with Nesterov's momentum is a widely used optimizer in deep learning, which is observed to have excellent generalization performance.
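For reference, the Nesterov-momentum update evaluates the gradient at a look-ahead point; a minimal sketch on a toy quadratic (the common deep-learning-style formulation, not this paper's specific analysis) is:

```python
import numpy as np

def nesterov_sgd(grad_fn, x0, lr=0.1, momentum=0.9, steps=100):
    """SGD with Nesterov momentum: gradient is taken at x + momentum * v."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x + momentum * v)   # look-ahead gradient
        v = momentum * v - lr * g
        x = x + v
    return x

# toy usage: minimize f(x) = 0.5 * ||x||^2, whose gradient is x
x_min = nesterov_sgd(lambda x: x, x0=np.ones(3))
```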
1 code implementation • 22 Oct 2018 • Xiao Yan, Xinyan Dai, Jie Liu, Kaiwen Zhou, James Cheng
Recently, locality-sensitive hashing (LSH) was shown to be effective for maximum inner product search (MIPS), and several algorithms, including $L_2$-ALSH, Sign-ALSH, and Simple-LSH, have been proposed.
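As a brief illustration of the transform idea behind Simple-LSH (a sketch of the published description, not code from this paper): database vectors are scaled to have norm at most 1 and padded with one extra coordinate so that inner products are preserved, after which standard sign random projections apply.

```python
import numpy as np

def simple_lsh_transform(X, q):
    """Simple-LSH style transform for MIPS.

    Database vectors x (with ||x|| <= 1 after scaling) map to [x, sqrt(1 - ||x||^2)];
    the unit-norm query q maps to [q, 0], so transformed inner products equal x . q.
    """
    X = X / np.max(np.linalg.norm(X, axis=1))                      # ensure ||x|| <= 1
    pad = np.sqrt(np.maximum(0.0, 1.0 - np.sum(X ** 2, axis=1)))
    P = np.hstack([X, pad[:, None]])
    Q = np.append(q / np.linalg.norm(q), 0.0)
    return P, Q

def sign_hash(vectors, n_bits=16, seed=0):
    """Sign random projection: one hash bit per random hyperplane."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(np.shape(vectors)[-1], n_bits))
    return (vectors @ planes >= 0).astype(np.uint8)
```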
no code implementations • 7 Oct 2018 • Fanhua Shang, Licheng Jiao, Kaiwen Zhou, James Cheng, Yan Ren, Yufei Jin
This paper proposes an accelerated proximal stochastic variance reduced gradient (ASVRG) method, in which we design a simple and effective momentum acceleration trick.
no code implementations • ICML 2018 • Kaiwen Zhou, Fanhua Shang, James Cheng
Recent years have witnessed exciting progress in the study of stochastic variance reduced gradient methods (e.g., SVRG, SAGA), their accelerated variants (e.g., Katyusha), and their extensions in many different settings (e.g., online, sparse, asynchronous, distributed).
no code implementations • 28 Jun 2018 • Kaiwen Zhou
Variance reduction is a simple and effective technique that accelerates convex (or non-convex) stochastic optimization.
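To make the idea concrete, a minimal SVRG-style variance-reduced step (a generic textbook sketch, not the specific algorithm studied here) replaces the plain stochastic gradient with a control-variate-corrected one built from a periodically refreshed full gradient:

```python
import numpy as np

def svrg(grad_i, x0, n, lr=0.1, epochs=10, inner_steps=None, seed=0):
    """Minimal SVRG sketch.

    grad_i(x, i) returns the gradient of the i-th component function;
    the full gradient at a snapshot serves as a control variate that
    reduces the variance of each stochastic update.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    inner_steps = inner_steps or n
    for _ in range(epochs):
        snapshot = x.copy()
        full_grad = np.mean([grad_i(snapshot, i) for i in range(n)], axis=0)
        for _ in range(inner_steps):
            i = rng.integers(n)
            v = grad_i(x, i) - grad_i(snapshot, i) + full_grad   # variance-reduced gradient
            x = x - lr * v
    return x
```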
1 code implementation • 26 Feb 2018 • Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, DaCheng Tao, Licheng Jiao
In this paper, we propose a simple variant of the original SVRG, called variance reduced stochastic gradient descent (VR-SGD).
no code implementations • 26 Feb 2018 • Fanhua Shang, Yuanyuan Liu, Kaiwen Zhou, James Cheng, Kelvin K. W. Ng, Yuichi Yoshida
To ensure sufficient decrease in stochastic optimization, we design a new sufficient decrease criterion, which yields sufficient-decrease versions of stochastic variance reduction algorithms such as SVRG-SD and SAGA-SD as a byproduct.