Search Results for author: Yuyang Deng

Found 10 papers, 3 papers with code

On the Generalization Ability of Unsupervised Pretraining

no code implementations · 11 Mar 2024 · Yuyang Deng, Junyuan Hong, Jiayu Zhou, Mehrdad Mahdavi

Recent advances in unsupervised learning have shown that unsupervised pre-training, followed by fine-tuning, can improve model generalization.

Binary Classification · Unsupervised Pre-training

Collaborative Learning with Different Labeling Functions

no code implementations · 16 Feb 2024 · Yuyang Deng, Mingda Qiao

We study a variant of Collaborative PAC Learning, in which we aim to learn an accurate classifier for each of the $n$ data distributions, while minimizing the number of samples drawn from them in total.

Computational Efficiency · PAC learning

Early ChatGPT User Portrait through the Lens of Data

no code implementations · 10 Dec 2023 · Yuyang Deng, Ni Zhao, Xin Huang

Since its launch, ChatGPT has achieved remarkable success as a versatile conversational AI platform, drawing millions of users worldwide and garnering widespread recognition across academic, industrial, and general communities.

Understanding Deep Gradient Leakage via Inversion Influence Functions

1 code implementation · NeurIPS 2023 · Haobo Zhang, Junyuan Hong, Yuyang Deng, Mehrdad Mahdavi, Jiayu Zhou

Deep Gradient Leakage (DGL) is a highly effective attack that recovers private training images from gradient vectors.
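To illustrate why gradient vectors can leak training inputs at all, consider the simplest case: for a linear layer with squared loss on a single example, the weight gradient is the outer product of the residual and the input, so every row of the gradient is a scalar multiple of the private input. The sketch below (illustrative dimensions, not the paper's method) recovers the input's direction from one gradient row.

```python
import numpy as np

# Minimal sketch of gradient leakage for a single linear layer with
# squared loss: grad_W of 0.5*||Wx - y||^2 equals outer(Wx - y, x),
# so any nonzero row of the gradient is proportional to the input x.
# Dimensions and values here are illustrative, not from the paper.

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))
x = rng.normal(size=5)             # private training input
y = rng.normal(size=3)
residual = W @ x - y
grad_W = np.outer(residual, x)     # the gradient an attacker observes

row = grad_W[0]                    # each row is residual[i] * x
x_hat = row / np.linalg.norm(row)  # recovered input direction (up to sign)
cos = abs(x_hat @ (x / np.linalg.norm(x)))
print(cos)                         # cosine similarity to the true input
```

Deep networks are far from this linear case, which is why DGL needs iterative optimization rather than a closed form, but the rank-one structure above is the basic leakage mechanism.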

On the Hardness of Robustness Transfer: A Perspective from Rademacher Complexity over Symmetric Difference Hypothesis Space

no code implementations · 23 Feb 2023 · Yuyang Deng, Nidham Gazagnadou, Junyuan Hong, Mehrdad Mahdavi, Lingjuan Lyu

Recent studies demonstrated that the adversarially robust learning under $\ell_\infty$ attack is harder to generalize to different domains than standard domain adaptation.

Binary Classification · Domain Generalization · +1

Tight Analysis of Extra-gradient and Optimistic Gradient Methods For Nonconvex Minimax Problems

no code implementations · 17 Oct 2022 · Pouria Mahdavinia, Yuyang Deng, Haochuan Li, Mehrdad Mahdavi

Despite the established convergence theory of Optimistic Gradient Descent Ascent (OGDA) and Extragradient (EG) methods for the convex-concave minimax problems, little is known about the theoretical guarantees of these methods in nonconvex settings.
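The Extragradient update itself is standard and easy to state: take a "look-ahead" half step, then update from the original point using the gradients evaluated at the half point. A minimal sketch on the bilinear toy problem f(x, y) = x·y, where plain gradient descent-ascent cycles but EG converges to the saddle point (0, 0); the step size and iteration count are illustrative choices, not from the paper:

```python
# Minimal sketch of the Extragradient (EG) method for the minimax
# problem min_x max_y f(x, y) = x * y. For this f, grad_x f = y and
# grad_y f = x. Step size eta is an illustrative choice.

def eg_step(x, y, eta=0.1):
    # Extrapolation ("look-ahead") half step
    x_half = x - eta * y
    y_half = y + eta * x
    # Update from the original point using half-point gradients
    return x - eta * y_half, y + eta * x_half

x, y = 1.0, 1.0
for _ in range(2000):
    x, y = eg_step(x, y)
print(x, y)  # spirals in toward the saddle point (0, 0)
```

Running the same loop with a plain simultaneous gradient descent-ascent step instead (x − η·y, y + η·x) makes the iterates spiral outward on this problem, which is the classical motivation for the extrapolation step.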

Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time

no code implementations · 22 Jul 2021 · Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi

This work is the first to show the convergence of Local SGD on non-smooth functions, and sheds light on the optimization theory of federated training of deep neural networks.

Distributed Optimization

Distributionally Robust Federated Averaging

1 code implementation · NeurIPS 2020 · Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi

To compensate for this, we propose a Distributionally Robust Federated Averaging (DRFA) algorithm that employs a novel snapshotting scheme to approximate the accumulation of historical gradients of the mixing parameter.

Federated Learning

Local Stochastic Gradient Descent Ascent: Convergence Analysis and Communication Efficiency

no code implementations · 25 Feb 2021 · Yuyang Deng, Mehrdad Mahdavi

Local SGD is a promising approach to overcome the communication overhead in distributed learning by reducing the synchronization frequency among worker nodes.
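The communication-saving mechanism described here is simple to simulate: workers take several local gradient steps between synchronizations, then average their iterates. A minimal sketch on per-worker quadratics f_k(w) = ½(w − c_k)², whose global optimum is the mean of the c_k; the names K, H, and eta are illustrative, not from the paper:

```python
import numpy as np

# Minimal sketch of Local SGD: K workers each run H local gradient
# steps on their own objective, then synchronize by averaging. Here
# worker k minimizes f_k(w) = 0.5 * (w - c_k)^2, so the consensus
# iterate should approach mean(c_k). All constants are illustrative.

rng = np.random.default_rng(0)
K = 4
c = rng.normal(size=K)            # per-worker optima c_k
w = np.zeros(K)                   # one model copy per worker
eta, H, rounds = 0.1, 5, 100      # step size, local steps, comm. rounds

for _ in range(rounds):
    for _ in range(H):            # local steps, no communication
        w = w - eta * (w - c)     # gradient of each worker's quadratic
    w[:] = w.mean()               # synchronize: average the iterates

print(w)                          # all entries near c.mean()
```

Communication happens once per H gradient steps instead of every step, which is the efficiency gain the abstract refers to; the analysis question is how large H can be before convergence degrades.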

Adaptive Personalized Federated Learning

9 code implementations · 30 Mar 2020 · Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi

Investigation of the degree of personalization in federated learning algorithms has shown that maximizing only the performance of the global model confines the capacity of the local models to personalize.

Bilevel Optimization · Personalized Federated Learning
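The personalization idea can be sketched at inference time: each client serves a convex combination of its local model and the shared global model, governed by a per-client mixing weight. The linear models and alpha values below are illustrative placeholders, not the paper's trained parameters.

```python
import numpy as np

# Minimal sketch of model mixing for personalized federated learning:
# client i predicts with alpha_i * w_local + (1 - alpha_i) * w_global.
# alpha = 1 recovers the purely local model, alpha = 0 the global one.
# All parameter values here are illustrative placeholders.

def personalized_predict(x, w_local, w_global, alpha):
    w_mix = alpha * w_local + (1 - alpha) * w_global
    return x @ w_mix

x = np.array([1.0, 2.0])
w_local = np.array([0.5, 0.5])
w_global = np.array([1.0, -1.0])
for alpha in (0.0, 0.5, 1.0):
    print(alpha, personalized_predict(x, w_local, w_global, alpha))
```

The interesting part of the problem is choosing alpha per client (adaptively, during training) rather than fixing it by hand as this sketch does.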
