Search Results for author: Hongchang Gao

Found 18 papers, 1 paper with code

Deep Relational Factorization Machines

no code implementations · 25 Sep 2019 · Hongchang Gao, Gang Wu, Ryan Rossi, Viswanathan Swaminathan, Heng Huang

Factorization Machines (FMs) are an important supervised learning approach due to their unique ability to capture feature interactions when dealing with high-dimensional sparse data.
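
For intuition, the feature-interaction ability of a standard second-order FM (the relational extension proposed in this paper is not shown here) comes from scoring an input with a linear term plus factorized pairwise interactions. A minimal NumPy sketch, with all names hypothetical:

    import numpy as np

    def fm_predict(x, w0, w, V):
        """Second-order factorization machine score.

        x  : (d,)   feature vector (typically sparse)
        w0 : scalar bias
        w  : (d,)   linear weights
        V  : (d, k) factor matrix; the interaction weight of features
                    i and j is the inner product <V[i], V[j]>
        """
        linear = w0 + w @ x
        # O(d*k) identity: sum_{i<j} <V[i],V[j]> x_i x_j
        #   = 0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ]
        s = V.T @ x                    # (k,)
        s_sq = (V ** 2).T @ (x ** 2)   # (k,)
        interactions = 0.5 * np.sum(s ** 2 - s_sq)
        return linear + interactions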

Adaptive Serverless Learning

no code implementations · 24 Aug 2020 · Hongchang Gao, Heng Huang

To the best of our knowledge, this is the first adaptive decentralized training approach.

Periodic Stochastic Gradient Descent with Momentum for Decentralized Training

no code implementations · 24 Aug 2020 · Hongchang Gao, Heng Huang

The condition for achieving the linear speedup is also provided for this variant.
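
As a hedged sketch of the general idea (local momentum SGD with models and momentum buffers averaged every few steps), the worker-side loop might look as follows. For brevity the sketch averages across all workers; a decentralized variant would instead mix only with neighbors through a doubly stochastic matrix. All names and defaults here are illustrative assumptions, not the paper's algorithm:

    import numpy as np

    def periodic_momentum_sgd(params, grad_fn, lr=0.01, beta=0.9, period=10, steps=100):
        """params : list of per-worker parameter vectors (np.ndarray)
        grad_fn(w, i) : returns a stochastic gradient for worker i at w."""
        momenta = [np.zeros_like(w) for w in params]
        for t in range(steps):
            for i, w in enumerate(params):
                momenta[i] = beta * momenta[i] + grad_fn(w, i)   # momentum buffer
                params[i] = w - lr * momenta[i]                  # local SGD step
            if (t + 1) % period == 0:                            # periodic averaging
                avg_w = np.mean(params, axis=0)
                avg_m = np.mean(momenta, axis=0)
                params = [avg_w.copy() for _ in params]
                momenta = [avg_m.copy() for _ in momenta]
        return params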

Delay-Tolerant Local SGD for Efficient Distributed Training

no code implementations · 1 Jan 2021 · An Xu, Xiao Yan, Hongchang Gao, Heng Huang

The heavy communication for model synchronization is a major bottleneck for scaling up the distributed deep neural network training to many workers.

Federated Learning

Fast Training Method for Stochastic Compositional Optimization Problems

no code implementations · NeurIPS 2021 · Hongchang Gao, Heng Huang

The stochastic compositional optimization problem covers a wide range of machine learning models, such as sparse additive models and model-agnostic meta-learning.

Additive models · Meta-Learning
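
For context, the two-level stochastic compositional problem referred to here is usually written as follows (standard notation, not quoted from the paper); the difficulty is that the inner expectation sits inside the outer function, so an unbiased stochastic gradient of F is not directly available:

    \min_{x} F(x) = f\bigl(g(x)\bigr),
    \qquad f(y) = \mathbb{E}_{\xi}\left[ f(y;\xi) \right],
    \qquad g(x) = \mathbb{E}_{\zeta}\left[ g(x;\zeta) \right].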

Accelerated Gradient-Free Method for Heavily Constrained Nonconvex Optimization

no code implementations · 29 Sep 2021 · Wanli Shi, Hongchang Gao, Bin Gu

In this paper, to solve nonconvex problems with a large number of white/black-box constraints, we propose a doubly stochastic zeroth-order gradient method (DSZOG).
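
The "zeroth-order" part means gradients are estimated from function values only. A minimal two-point estimator sketch (illustrative only; the doubly stochastic sampling over constraints and data that defines DSZOG is not shown, and the function name is hypothetical):

    import numpy as np

    def zo_gradient(f, x, mu=1e-4):
        """Two-point zeroth-order gradient estimate of f at x, using a single
        random Gaussian direction; averaging several directions reduces variance."""
        u = np.random.randn(*x.shape)
        return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u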

On the Convergence of Momentum-Based Algorithms for Federated Bilevel Optimization Problems

no code implementations · 28 Apr 2022 · Hongchang Gao

In this paper, we study the federated bilevel optimization problem, which has widespread applications in machine learning.

Bilevel Optimization
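
For reference, a generic bilevel problem has the nested form below (standard notation, not specific to this paper); in the federated setting the upper- and lower-level objectives are typically averages of clients' local functions:

    \min_{x} \; \Phi(x) = f\bigl(x, y^{*}(x)\bigr)
    \quad \text{s.t.} \quad
    y^{*}(x) = \arg\min_{y} \; g(x, y).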

On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network

no code implementations · 30 Jun 2022 · Hongchang Gao, Bin Gu, My T. Thai

Bilevel optimization has been applied to a wide variety of machine learning models, and numerous stochastic bilevel optimization algorithms have been developed in recent years.

BIG-bench Machine Learning · Bilevel Optimization +1

Decentralized Stochastic Gradient Descent Ascent for Finite-Sum Minimax Problems

no code implementations · 6 Dec 2022 · Hongchang Gao

Minimax optimization problems have attracted significant attention in recent years due to their widespread application in numerous machine learning models.

Stochastic Optimization
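
The basic stochastic gradient descent ascent step for min_x max_y f(x, y) takes a descent step in x and an ascent step in y. A minimal single-node sketch (the decentralized, finite-sum version studied in the paper would additionally mix iterates with network neighbors; names and step sizes are hypothetical):

    def sgda_step(x, y, grad_x, grad_y, lr_x=0.01, lr_y=0.01):
        """One simultaneous stochastic gradient descent-ascent update;
        grad_x, grad_y are stochastic gradients of f at (x, y)."""
        x_new = x - lr_x * grad_x   # descent on the minimization variable
        y_new = y + lr_y * grad_y   # ascent on the maximization variable
        return x_new, y_new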

When Decentralized Optimization Meets Federated Learning

no code implementations · 5 Jun 2023 · Hongchang Gao, My T. Thai, Jie Wu

Federated learning is a new learning paradigm for extracting knowledge from distributed data.

Federated Learning

Stochastic Multi-Level Compositional Optimization Algorithms over Networks with Level-Independent Convergence Rate

no code implementations · 6 Jun 2023 · Hongchang Gao

To the best of our knowledge, this is the first work that achieves the level-independent convergence rate under the decentralized setting.

Meta-Learning
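
Here "multi-level" refers to a nested composition of K stochastic functions (standard notation, not quoted from the paper); a level-independent convergence rate is one whose order does not deteriorate as the number of levels K grows:

    \min_{x}\; F(x) = f_{K} \circ f_{K-1} \circ \cdots \circ f_{1}(x),
    \qquad f_{k}(\cdot) = \mathbb{E}_{\xi_{k}}\left[ f_{k}(\cdot\,;\xi_{k}) \right],
    \quad k = 1, \dots, K.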

Achieving Linear Speedup in Decentralized Stochastic Compositional Minimax Optimization

no code implementations · 25 Jul 2023 · Hongchang Gao

In particular, our study shows that the standard gossip communication strategy cannot achieve linear speedup for decentralized compositional minimax problems due to the large consensus error about the inner-level function.

imbalanced classification
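
The standard gossip strategy mentioned above averages each node's local variables with its neighbors through a doubly stochastic mixing matrix; a minimal sketch of one gossip (consensus) round, with hypothetical names:

    import numpy as np

    def gossip_round(X, W):
        """X : (n, d) matrix whose i-th row is node i's local variable.
        W : (n, n) doubly stochastic mixing matrix, W[i, j] > 0 only if
        nodes i and j are neighbors; returns the mixed variables W @ X."""
        return W @ X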

On the Communication Complexity of Decentralized Bilevel Optimization

no code implementations · 19 Nov 2023 · Yihan Zhang, My T. Thai, Jie Wu, Hongchang Gao

To the best of our knowledge, this is the first stochastic algorithm achieving these theoretical results under the heterogeneous setting.

Bilevel Optimization

Can Stochastic Zeroth-Order Frank-Wolfe Method Converge Faster for Non-Convex Problems?

no code implementations · ICML 2020 · Hongchang Gao, Heng Huang

To address the lack of gradient information in many applications, we propose two new stochastic zeroth-order Frank-Wolfe algorithms and theoretically prove that they achieve a faster convergence rate than existing methods for non-convex problems.
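
For intuition, a Frank-Wolfe (conditional-gradient) step replaces projection with a linear minimization over the constraint set, and the zeroth-order variant estimates the gradient from function values only. A minimal sketch of one iteration over an l1 ball (illustrative assumptions throughout; this is not the paper's exact pair of algorithms):

    import numpy as np

    def zo_frank_wolfe_step(f, x, radius=1.0, mu=1e-4, gamma=0.1):
        """One zeroth-order Frank-Wolfe step over the l1 ball of the given radius.
        x : (d,) current iterate. The gradient is estimated from two function
        evaluations along a random direction, a linear minimization oracle over
        the l1 ball picks a signed vertex, and the iterate moves toward it."""
        u = np.random.randn(*x.shape)
        g = (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u   # ZO gradient estimate
        i = np.argmax(np.abs(g))        # LMO over the l1 ball: all mass goes on
        s = np.zeros_like(x)            # the coordinate with the largest |g_i|,
        s[i] = -radius * np.sign(g[i])  # with the opposite sign
        return x + gamma * (s - x)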
