Search Results for author: Honglin Yuan

Found 9 papers, 6 papers with code

On Principled Local Optimization Methods for Federated Learning

no code implementations • 24 Jan 2024 • Honglin Yuan

Local optimization methods such as Federated Averaging (FedAvg) are the most prominent approaches for FL applications; a minimal sketch of the FedAvg pattern follows this entry.

Federated Learning
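FedAvg alternates a few local gradient steps on each client with periodic server-side averaging. Below is a minimal NumPy sketch of this pattern; the client data, step size, and round counts are illustrative assumptions, not values from the paper.

```python
import numpy as np

def fedavg(clients, w0, local_steps=10, rounds=50, lr=0.01):
    """Minimal FedAvg / Local SGD sketch on least-squares clients.

    Each client holds (A, b) and locally minimizes ||A w - b||^2 / 2
    with full-batch gradient steps; minibatching would make this Local SGD.
    """
    w = w0.copy()
    for _ in range(rounds):
        local_models = []
        for A, b in clients:
            w_local = w.copy()
            for _ in range(local_steps):
                grad = A.T @ (A @ w_local - b)  # gradient of the local quadratic
                w_local -= lr * grad
            local_models.append(w_local)
        w = np.mean(local_models, axis=0)  # server averages the local iterates
    return w

# Tiny synthetic run with two clients
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(2)]
w = fedavg(clients, w0=np.zeros(5))
```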

Sharp Bounds for Federated Averaging (Local SGD) and Continuous Perspective

1 code implementation • 5 Nov 2021 • Margalit Glasgow, Honglin Yuan, Tengyu Ma

In this work, we first resolve this question by providing a lower bound for FedAvg that matches the existing upper bound, showing that the existing FedAvg upper-bound analysis cannot be improved.

Federated Learning

Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales

no code implementations • 4 Nov 2021 • Jonathan Kelner, Annie Marsden, Vatsal Sharan, Aaron Sidford, Gregory Valiant, Honglin Yuan

We consider the problem of minimizing a function $f : \mathbb{R}^d \rightarrow \mathbb{R}$ that is implicitly decomposable as the sum of $m$ unknown, non-interacting smooth, strongly convex functions, and we provide a method that solves this problem with a number of gradient evaluations that scales (up to logarithmic factors) as the product of the square roots of the components' condition numbers.
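In symbols (notation assumed here for illustration, not quoted from the paper): writing $f(x) = \sum_{i=1}^{m} f_i(x)$ with each $f_i$ being $\mu_i$-strongly convex and $L_i$-smooth, and $\kappa_i = L_i/\mu_i$ its condition number, the claimed gradient-evaluation complexity is $\widetilde{O}\big(\prod_{i=1}^{m} \sqrt{\kappa_i}\big)$.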

What Do We Mean by Generalization in Federated Learning?

1 code implementation • ICLR 2022 • Honglin Yuan, Warren Morningstar, Lin Ning, Karan Singhal

Thus, generalization studies in federated learning should distinguish performance gaps due to unseen client data (the out-of-sample gap) from performance gaps due to unseen client distributions (the participation gap); one way to formalize these gaps is sketched after this entry.

Federated Learning
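One hedged way to formalize the two gaps (symbols assumed for illustration; the paper's exact definitions may differ): let $R_{\mathrm{train}}$ be the average risk on participating clients' training data, $R_{\mathrm{val}}$ the risk on held-out data from those same clients, and $R_{\mathrm{new}}$ the risk on clients never seen during training. Then the out-of-sample gap is $R_{\mathrm{val}} - R_{\mathrm{train}}$, and the participation gap is $R_{\mathrm{new}} - R_{\mathrm{val}}$.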

Federated Composite Optimization

1 code implementation • 17 Nov 2020 • Honglin Yuan, Manzil Zaheer, Sashank Reddi

We first show that straightforward extensions of primal algorithms such as FedAvg are not well-suited for FCO since they suffer from the "curse of primal averaging," resulting in poor convergence (a toy illustration follows this entry).

Federated Learning
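A toy illustration of why primal averaging can hurt composite objectives such as $\ell_1$-regularized ones (hypothetical numbers, not the paper's experiment): local proximal steps can leave each client with a sparse model, yet averaging sparse models with different supports produces a dense one.

```python
import numpy as np

# Hypothetical clients whose local l1-regularized solves landed on
# sparse models with disjoint supports.
w_client_a = np.array([0.9, 0.0, 0.0, 1.1, 0.0])
w_client_b = np.array([0.0, 0.8, 0.0, 0.0, 1.2])

w_avg = (w_client_a + w_client_b) / 2  # primal averaging, as in FedAvg

print(np.count_nonzero(w_client_a))  # 2 nonzeros
print(np.count_nonzero(w_client_b))  # 2 nonzeros
print(np.count_nonzero(w_avg))       # 4 nonzeros: sparsity is lost
```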

SHREC 2020 track: 6D Object Pose Estimation

no code implementations • 19 Oct 2020 • Honglin Yuan, Remco C. Veltkamp, Georgios Albanis, Nikolaos Zioulis, Dimitrios Zarpalas, Petros Daras

From captured color and depth images, we use this simulator to generate a 3D dataset containing 400 photo-realistic synthesized color-and-depth image pairs with various view angles for training, plus another 100 captured and synthetic images for testing.

6D Pose Estimation, 6D Pose Estimation using RGB (+3 more)

Federated Accelerated Stochastic Gradient Descent

1 code implementation • NeurIPS 2020 • Honglin Yuan, Tengyu Ma

We propose Federated Accelerated Stochastic Gradient Descent (FedAc), a principled acceleration of Federated Averaging (FedAvg, also known as Local SGD) for distributed optimization; a generic sketch of the accelerated-local-SGD pattern follows this entry.

Distributed Optimization
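FedAc's exact update couples several iterate sequences; the sketch below shows only the generic accelerated-local-SGD pattern it builds on (local Nesterov-style steps, with the server averaging both the iterate and the momentum state) under assumed quadratic client objectives. It is not the paper's precise update rule.

```python
import numpy as np

def accelerated_local_sgd(clients, w0, local_steps=10, rounds=50,
                          lr=0.01, momentum=0.9):
    """Generic accelerated-local-SGD pattern (illustrative; not exact FedAc).

    Clients run Nesterov-style steps on local quadratics ||A w - b||^2 / 2;
    the server averages both the iterates and the momentum buffers.
    """
    w, v = w0.copy(), np.zeros_like(w0)
    for _ in range(rounds):
        ws, vs = [], []
        for A, b in clients:
            w_loc, v_loc = w.copy(), v.copy()
            for _ in range(local_steps):
                lookahead = w_loc + momentum * v_loc  # Nesterov lookahead point
                grad = A.T @ (A @ lookahead - b)
                v_loc = momentum * v_loc - lr * grad
                w_loc = w_loc + v_loc
            ws.append(w_loc)
            vs.append(v_loc)
        w, v = np.mean(ws, axis=0), np.mean(vs, axis=0)  # average both states
    return w
```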
