Search Results for author: Zhishuai Guo

Found 9 papers, 3 papers with code

Accelerating Deep Learning with Millions of Classes

no code implementations ECCV 2020 Zhuoning Yuan, Zhishuai Guo, Xiaotian Yu, Xiaoyu Wang, Tianbao Yang

In our experiments, we demonstrate that the proposed framework is able to train deep learning models with millions of classes and achieves more than a 10× speedup compared to existing approaches.

General Classification, Representation Learning

A Novel Convergence Analysis for Algorithms of the Adam Family

no code implementations 7 Dec 2021 Zhishuai Guo, Yi Xu, Wotao Yin, Rong Jin, Tianbao Yang

Although rigorous convergence analyses exist for Adam, they impose specific requirements on the update of the adaptive step size that are not generic enough to cover many other variants of Adam.

Bilevel Optimization
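To make the "adaptive step size" concrete, here is a minimal sketch of a generic Adam-style update on a toy quadratic objective; the objective, hyperparameters, and variable names are hypothetical illustrations, not taken from the paper. The second-moment estimate `v` is the quantity whose update rule varies across the Adam family.

```python
import numpy as np

def adam_step(x, m, v, grad, eta=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One generic Adam-family step on parameters x (toy sketch)."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate; variants differ here
    x = x - eta * m / (np.sqrt(v) + eps)     # adaptive step size: eta / sqrt(v)
    return x, m, v

# Toy objective f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x = np.array([1.0, -2.0])
m = np.zeros_like(x)
v = np.zeros_like(x)
for _ in range(500):
    x, m, v = adam_step(x, m, v, grad=x)
print(np.linalg.norm(x))  # should be close to the minimizer at 0
```

Variants in the family (e.g., AMSGrad-style rules) change how `v` is updated while keeping the `eta / sqrt(v)` step structure.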

Randomized Stochastic Variance-Reduced Methods for Multi-Task Stochastic Bilevel Optimization

no code implementations 5 May 2021 Zhishuai Guo, Quanqi Hu, Lijun Zhang, Tianbao Yang

Although numerous studies have proposed stochastic algorithms for solving these problems, they are limited in two respects: (i) their sample complexities are high and do not match the state-of-the-art results for non-convex stochastic optimization; (ii) their algorithms are tailored to problems with only one lower-level problem.

Bilevel Optimization, Stochastic Optimization

On Stochastic Moving-Average Estimators for Non-Convex Optimization

no code implementations 30 Apr 2021 Zhishuai Guo, Yi Xu, Wotao Yin, Rong Jin, Tianbao Yang

In this paper, we consider the widely used but not fully understood stochastic estimator based on moving average (SEMA), which only requires a general unbiased stochastic oracle.

Bilevel Optimization
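A moving-average gradient estimator of this flavor can be sketched in a few lines. The toy objective, noise model, and step sizes below are hypothetical choices for illustration; the only requirement matching the abstract is that the oracle returns an unbiased stochastic gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x):
    """Unbiased stochastic oracle for f(x) = 0.5 * x^2: E[g] = x."""
    return x + rng.normal(scale=1.0)

x, z = 5.0, 0.0        # z holds the moving-average gradient estimate
gamma, eta = 0.1, 0.05  # averaging and step-size parameters (toy values)
for _ in range(2000):
    g = noisy_grad(x)
    z = (1 - gamma) * z + gamma * g  # SEMA-style exponential moving average
    x = x - eta * z                  # descend along the averaged estimate
print(abs(x))  # near the minimizer at 0 despite the noisy oracle
```

The moving average trades a small bias for reduced variance, which is what makes such estimators attractive for noisy non-convex problems.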

Federated Deep AUC Maximization for Heterogeneous Data with a Constant Communication Complexity

1 code implementation 9 Feb 2021 Zhuoning Yuan, Zhishuai Guo, Yi Xu, Yiming Ying, Tianbao Yang

Deep AUC (area under the ROC curve) Maximization (DAM) has attracted much attention recently due to its great potential for imbalanced data classification.

Federated Learning
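The quantity DAM maximizes is the AUC: the probability that a randomly drawn positive example is scored above a randomly drawn negative one. A minimal pairwise computation makes this concrete; the toy scores and labels are hypothetical.

```python
import numpy as np

def pairwise_auc(scores, labels):
    """AUC as the fraction of positive/negative pairs ranked correctly
    (ties count as 0.5)."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]  # all positive-vs-negative score gaps
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

scores = np.array([0.9, 0.8, 0.3, 0.6, 0.2])
labels = np.array([1, 0, 1, 0, 1])
auc = pairwise_auc(scores, labels)
print(auc)  # 2 of 6 pairs ranked correctly -> 0.333...
```

Because AUC is a pairwise statistic rather than a per-example average, it suits imbalanced data but is harder to optimize directly, which motivates the min-max reformulations used in DAM.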

An Online Method for A Class of Distributionally Robust Optimization with Non-Convex Objectives

1 code implementation NeurIPS 2021 Qi Qi, Zhishuai Guo, Yi Xu, Rong Jin, Tianbao Yang

In this paper, we propose a practical online method for solving a class of distributionally robust optimization (DRO) with non-convex objectives, which has important applications in machine learning for improving the robustness of neural networks.

Fast Objective & Duality Gap Convergence for Nonconvex-Strongly-Concave Min-Max Problems

no code implementations 12 Jun 2020 Zhishuai Guo, Yan Yan, Zhuoning Yuan, Tianbao Yang

Compared with existing studies, (i) our analysis is based on a novel Lyapunov function consisting of the primal objective gap and the duality gap of a regularized function, and (ii) the results are more comprehensive with improved rates that have better dependence on the condition number under different assumptions.

Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks

1 code implementation ICML 2020 Zhishuai Guo, Mingrui Liu, Zhuoning Yuan, Li Shen, Wei Liu, Tianbao Yang

In this paper, we study distributed algorithms for large-scale AUC maximization with a deep neural network as a predictive model.

Distributed Optimization

Revisiting SGD with Increasingly Weighted Averaging: Optimization and Generalization Perspectives

no code implementations 9 Mar 2020 Zhishuai Guo, Yan Yan, Tianbao Yang

It remains unclear how these averaging schemes affect the convergence of both optimization error and generalization error (two equally important components of testing error) for non-strongly convex objectives, including non-convex problems.
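One common increasingly weighted scheme gives iterate x_t a weight proportional to t. A toy SGD run with a running weighted average sketches the idea; the objective, noise, and step-size schedule are hypothetical choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

x, avg, wsum = 5.0, 0.0, 0.0  # last iterate, weighted average, weight total
for t in range(1, 5001):
    g = x + rng.normal(scale=1.0)  # unbiased stochastic gradient of 0.5 * x^2
    x -= 0.5 / t * g               # SGD step with decaying step size
    w = t                          # weight grows linearly in t
    wsum += w
    avg += (w / wsum) * (x - avg)  # incremental weighted average of iterates
print(abs(avg), abs(x))
```

Increasing weights discount the early, far-from-optimal iterates, which is why such averaging can converge faster than the uniform average while still smoothing out gradient noise.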
