Search Results for author: Zebang Shen

Found 22 papers, 4 papers with code

Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning

1 code implementation • 11 Sep 2023 • Zebang Shen, Jiayuan Ye, Anmin Kang, Hamed Hassani, Reza Shokri

Repeated parameter sharing in federated learning causes significant information leakage about private data, thus defeating its main purpose: data privacy.

Federated Learning • Image Classification +1

Accelerated Doubly Stochastic Gradient Algorithm for Large-scale Empirical Risk Minimization

no code implementations • 23 Apr 2023 • Zebang Shen, Hui Qian, Tongzhou Mu, Chao Zhang

Nowadays, algorithms with fast convergence, small memory footprints, and low per-iteration complexity are particularly favorable for artificial intelligence applications.

Straggler-Resilient Personalized Federated Learning

1 code implementation • 5 Jun 2022 • Isidoros Tziotis, Zebang Shen, Ramtin Pedarsani, Hamed Hassani, Aryan Mokhtari

Federated Learning is an emerging learning paradigm that allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.

Learning Theory • Personalized Federated Learning +1

Self-Consistency of the Fokker-Planck Equation

1 code implementation • 2 Jun 2022 • Zebang Shen, Zhenfu Wang, Satyen Kale, Alejandro Ribeiro, Amin Karbasi, Hamed Hassani

In this paper, we exploit this concept to design a potential function of the hypothesis velocity fields, and prove that, if such a function diminishes to zero during the training procedure, the trajectory of the densities generated by the hypothesis velocity fields converges to the solution of the FPE in the Wasserstein-2 sense.
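
As background (a sketch of the standard identity behind the snippet, not the paper's exact construction): the Fokker-Planck equation (FPE) for the Langevin diffusion can be read as a continuity equation driven by a particular velocity field, and a hypothesis velocity field is self-consistent when the density it transports reproduces that field.

```latex
% Standard identity underlying the self-consistency notion above, assuming the
% Langevin diffusion dX_t = -\nabla f(X_t)\,dt + \sqrt{2}\,dW_t; the paper's
% potential function, not reproduced here, quantifies its violation.
\begin{align*}
\partial_t p_t
  = \nabla\cdot\bigl(p_t\,\nabla f\bigr) + \Delta p_t
  = -\nabla\cdot\bigl(p_t\,v_t^{*}\bigr),
\qquad
v_t^{*}(x) = -\nabla f(x) - \nabla\log p_t(x).
\end{align*}
```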

An Agnostic Approach to Federated Learning with Class Imbalance

no code implementations • ICLR 2022 • Zebang Shen, Juan Cervino, Hamed Hassani, Alejandro Ribeiro

Federated Learning (FL) has emerged as the tool of choice for training deep models over heterogeneous and decentralized datasets.

Federated Learning

CDMA: A Practical Cross-Device Federated Learning Algorithm for General Minimax Problems

1 code implementation • 29 May 2021 • Jiahao Xie, Chao Zhang, Zebang Shen, Weijie Liu, Hui Qian

We establish theoretical guarantees of CDMA under different choices of hyperparameters and conduct experiments on AUC maximization, robust adversarial network training, and GAN training tasks.

Federated Learning • Generative Adversarial Network
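
For orientation only, the sketch below is a plain single-machine gradient descent-ascent loop for a generic minimax problem min_x max_y f(x, y); it is not the CDMA algorithm, and the toy objective, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def gradient_descent_ascent(grad_x, grad_y, x, y, lr=0.05, steps=1000):
    """Plain simultaneous gradient descent-ascent for min_x max_y f(x, y).

    grad_x, grad_y: callables returning the partial gradients of f at (x, y).
    This generic baseline is NOT the federated CDMA algorithm above.
    """
    for _ in range(steps):
        gx, gy = grad_x(x, y), grad_y(x, y)
        x = x - lr * gx   # descend in the minimization variable
        y = y + lr * gy   # ascend in the maximization variable
    return x, y

# Toy saddle problem f(x, y) = 0.5*||x||^2 + x@y - 0.5*||y||^2, saddle point at (0, 0).
if __name__ == "__main__":
    x, y = gradient_descent_ascent(lambda x, y: x + y, lambda x, y: x - y,
                                   np.ones(3), np.ones(3))
    print(x, y)  # both iterates should be close to zero
```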

Federated Functional Gradient Boosting

no code implementations • 11 Mar 2021 • Zebang Shen, Hamed Hassani, Satyen Kale, Amin Karbasi

First, in the semi-heterogeneous setting, when the marginal distributions of the feature vectors on client machines are identical, we develop the federated functional gradient boosting (FFGB) method that provably converges to the global minimum.

Federated Learning
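
As a loose single-machine illustration of the functional-gradient-boosting idea (not the federated FFGB method itself), the sketch below repeatedly fits a weak regressor to the negative functional gradient of the squared loss; the tree learner, toy data, and step size are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def functional_gradient_boosting(X, y, n_rounds=50, lr=0.1, max_depth=2):
    """Generic functional gradient boosting for squared loss (single machine).

    Each round fits a weak learner to the negative functional gradient, which
    for 0.5*(y - F)^2 is simply the residual y - F, and adds it to the ensemble.
    """
    F = np.zeros_like(y, dtype=float)     # current ensemble prediction
    learners = []
    for _ in range(n_rounds):
        residual = y - F                  # negative functional gradient
        h = DecisionTreeRegressor(max_depth=max_depth).fit(X, residual)
        F += lr * h.predict(X)
        learners.append(h)
    return learners, F

# Toy usage on a 1-D regression problem.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
    learners, F = functional_gradient_boosting(X, y)
    print("train MSE:", float(np.mean((F - y) ** 2)))
```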

Partial Gromov-Wasserstein Learning for Partial Graph Matching

no code implementations • 2 Dec 2020 • Weijie Liu, Chao Zhang, Jiahao Xie, Zebang Shen, Hui Qian, Nenggan Zheng

Graph matching finds the correspondence of nodes across two graphs and is a basic task in graph-based machine learning.

Graph Matching

Sinkhorn Natural Gradient for Generative Models

no code implementations • NeurIPS 2020 • Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani

In this regard, we propose a novel Sinkhorn Natural Gradient (SiNG) algorithm which acts as a steepest descent method on the probability space endowed with the Sinkhorn divergence.
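
For reference, the Sinkhorn divergence is built from entropy-regularized optimal transport; below is a minimal sketch of the classic Sinkhorn fixed-point iteration and one common debiased form of the divergence, not the SiNG algorithm itself (cost matrices, regularization strength, and iteration count are illustrative assumptions).

```python
import numpy as np

def sinkhorn_cost(a, b, C, eps=0.1, n_iters=500):
    """Entropy-regularized OT cost between histograms a and b with cost matrix C.

    Classic Sinkhorn fixed-point iteration on the scaling vectors u, v;
    returns the transport cost <P, C> of the resulting plan.
    """
    K = np.exp(-C / eps)                      # Gibbs kernel
    u = np.ones_like(a, dtype=float)
    v = np.ones_like(b, dtype=float)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = np.diag(u) @ K @ np.diag(v)           # transport plan
    return float(np.sum(P * C))

def sinkhorn_divergence(a, b, C_ab, C_aa, C_bb, eps=0.1):
    """One common debiased form: OT(a,b) - 0.5*OT(a,a) - 0.5*OT(b,b)."""
    return (sinkhorn_cost(a, b, C_ab, eps)
            - 0.5 * sinkhorn_cost(a, a, C_aa, eps)
            - 0.5 * sinkhorn_cost(b, b, C_bb, eps))

# Toy usage: two histograms on a 1-D grid with squared-distance cost.
if __name__ == "__main__":
    pts = np.linspace(0.0, 1.0, 5)
    C = (pts[:, None] - pts[None, :]) ** 2
    a = np.array([0.5, 0.2, 0.1, 0.1, 0.1])
    b = np.array([0.1, 0.1, 0.1, 0.2, 0.5])
    print(sinkhorn_divergence(a, b, C, C, C))
```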

Sinkhorn Barycenter via Functional Gradient Descent

no code implementations • NeurIPS 2020 • Zebang Shen, Zhenfu Wang, Alejandro Ribeiro, Hamed Hassani

In this paper, we consider the problem of computing the barycenter of a set of probability distributions under the Sinkhorn divergence.

Safe Learning under Uncertain Objectives and Constraints

no code implementations • 23 Jun 2020 • Mohammad Fereydounian, Zebang Shen, Aryan Mokhtari, Amin Karbasi, Hamed Hassani

More precisely, by assuming that Reliable-FW has access to a (stochastic) gradient oracle of the objective function and a noisy feasibility oracle of the safety polytope, it finds an $\epsilon$-approximate first-order stationary point with the optimal ${\mathcal{O}}({1}/{\epsilon^2})$ gradient oracle complexity (resp.
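
For orientation, a bare-bones stochastic Frank-Wolfe loop over a single constraint set is sketched below; the gradient estimator, linear minimization oracle, and step schedule are illustrative assumptions, and the noisy-feasibility-oracle (safety) machinery of Reliable-FW is not reproduced.

```python
import numpy as np

def stochastic_frank_wolfe(stoch_grad, lmo, x0, n_iters=200):
    """Generic stochastic Frank-Wolfe (conditional gradient) iteration.

    stoch_grad(x): unbiased stochastic gradient of the objective at x.
    lmo(g):        linear minimization oracle, argmin over the feasible set C of <g, s>.
    The safety aspects of Reliable-FW (noisy feasibility oracle) are not modeled here.
    """
    x = np.array(x0, dtype=float)
    for t in range(1, n_iters + 1):
        g = stoch_grad(x)
        s = lmo(g)                        # best feasible direction under the linear model
        gamma = 2.0 / (t + 2.0)           # standard diminishing step size
        x = (1 - gamma) * x + gamma * s   # convex combination stays inside C
    return x

# Toy usage: minimize E||x - z||^2 over the probability simplex, z noisy around a target.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = np.array([0.7, 0.2, 0.1])
    stoch_grad = lambda x: 2 * (x - (target + 0.01 * rng.normal(size=3)))
    lmo = lambda g: np.eye(3)[np.argmin(g)]   # a vertex of the simplex
    print(stochastic_frank_wolfe(stoch_grad, lmo, x0=np.ones(3) / 3))
```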

Stochastic Continuous Greedy ++: When Upper and Lower Bounds Match

no code implementations • NeurIPS 2019 • Amin Karbasi, Hamed Hassani, Aryan Mokhtari, Zebang Shen

Concretely, for a monotone and continuous DR-submodular function, SCG++ achieves a tight $[(1-1/e)\mathrm{OPT}-\epsilon]$ solution while using $O(1/\epsilon^2)$ stochastic gradients and $O(1/\epsilon)$ calls to the linear optimization oracle.
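
For context, SCG++ builds on the (stochastic) continuous greedy template sketched below; the gradient estimator and constraint set here are illustrative assumptions, and the variance reduction that gives SCG++ its improved rates is not shown.

```python
import numpy as np

def continuous_greedy(stoch_grad, lmo, dim, T=100):
    """Basic stochastic continuous greedy for monotone DR-submodular maximization.

    Starting from 0, take T small steps toward the direction returned by the
    linear optimization oracle, so the final point is a convex combination of
    feasible directions. SCG++ swaps the plain gradient estimate for a
    variance-reduced one to cut the number of stochastic gradient calls.
    """
    x = np.zeros(dim)
    for _ in range(T):
        g = stoch_grad(x)      # (noisy) estimate of the gradient of F at x
        v = lmo(g)             # argmax over the feasible set C of <v, g>
        x = x + v / T          # move a 1/T fraction toward v
    return x
```

With exact gradients and a large horizon T, this is the classical continuous greedy scheme behind the (1 - 1/e) approximation guarantee for monotone DR-submodular maximization over a down-closed convex set.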

A Decentralized Proximal Point-type Method for Saddle Point Problems

no code implementations • 31 Oct 2019 • Weijie Liu, Aryan Mokhtari, Asuman Ozdaglar, Sarath Pattathil, Zebang Shen, Nenggan Zheng

In this paper, we focus on solving a class of constrained non-convex non-concave saddle point problems in a decentralized manner by a group of nodes in a network.


Aggregated Gradient Langevin Dynamics

no code implementations • 21 Oct 2019 • Chao Zhang, Jiahao Xie, Zebang Shen, Peilin Zhao, Tengfei Zhou, Hui Qian

In this paper, we explore a general Aggregated Gradient Langevin Dynamics framework (AGLD) for the Markov Chain Monte Carlo (MCMC) sampling.
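
For context, the plain stochastic gradient Langevin dynamics (SGLD) update that aggregated-gradient variants build on is sketched below, targeting p(theta) proportional to exp(-U(theta)); the step size, potential, and gradient estimator are illustrative assumptions rather than the AGLD estimator.

```python
import numpy as np

def sgld_sample(stoch_grad_U, theta0, step=1e-3, n_steps=10_000, rng=None):
    """Stochastic gradient Langevin dynamics targeting p(theta) ∝ exp(-U(theta)).

    Each step combines a noisy gradient step on U with injected Gaussian noise
    whose scale is tied to the step size. AGLD replaces the plain stochastic
    gradient with an aggregated (variance-reduced) estimate.
    """
    rng = rng or np.random.default_rng()
    theta = np.array(theta0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.normal(size=theta.shape)
        theta = theta - 0.5 * step * stoch_grad_U(theta) + np.sqrt(step) * noise
        samples.append(theta.copy())
    return np.array(samples)

# Toy usage: sample from a standard Gaussian, U(theta) = 0.5*||theta||^2.
if __name__ == "__main__":
    draws = sgld_sample(lambda th: th, theta0=np.zeros(2))
    print(draws[5000:].mean(axis=0), draws[5000:].std(axis=0))  # roughly 0 and 1
```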

Efficient Projection-Free Online Methods with Stochastic Recursive Gradient

no code implementations • 21 Oct 2019 • Jiahao Xie, Zebang Shen, Chao Zhang, Boyu Wang, Hui Qian

This paper focuses on projection-free methods for solving smooth Online Convex Optimization (OCO) problems.

One Sample Stochastic Frank-Wolfe

no code implementations • 10 Oct 2019 • Mingrui Zhang, Zebang Shen, Aryan Mokhtari, Hamed Hassani, Amin Karbasi

One of the appealing features of the projected gradient descent method is its simple mechanism combined with stable behavior under inexact, stochastic gradients, which has led to its widespread use in many machine learning applications.
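
To make the contrast concrete, here is a minimal projected stochastic gradient descent loop, i.e. the baseline the snippet refers to rather than the paper's one-sample Frank-Wolfe method; the projection operator and toy objective are illustrative assumptions.

```python
import numpy as np

def projected_sgd(stoch_grad, project, x0, lr=0.1, n_iters=500):
    """Projected stochastic gradient descent: take a gradient step, then project back onto C.

    The projection can be expensive for complex constraint sets, which is what
    projection-free (Frank-Wolfe-type) methods such as the one above avoid.
    """
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        x = project(x - lr * stoch_grad(x))
    return x

# Toy usage: minimize ||x - z||^2 over the unit Euclidean ball, with z outside the ball.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = np.array([2.0, 0.0])
    project = lambda x: x / max(1.0, np.linalg.norm(x))
    print(projected_sgd(lambda x: 2 * (x - z) + 0.01 * rng.normal(size=2),
                        project, x0=np.zeros(2)))   # close to [1, 0]
```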

A Stochastic Trust Region Method for Non-convex Minimization

no code implementations • ICLR 2020 • Zebang Shen, Pan Zhou, Cong Fang, Alejandro Ribeiro

We target the problem of finding a local minimum in non-convex finite-sum minimization.

Stochastic Conditional Gradient++

no code implementations • 19 Feb 2019 • Hamed Hassani, Amin Karbasi, Aryan Mokhtari, Zebang Shen

It is known that this rate is optimal in terms of stochastic gradient evaluations.

Stochastic Optimization

Accelerated Variance Reduced Block Coordinate Descent

no code implementations • 13 Nov 2016 • Zebang Shen, Hui Qian, Chao Zhang, Tengfei Zhou

Algorithms with fast convergence, a small number of data accesses, and low per-iteration complexity are particularly favorable in the big data era, due to the demand for highly accurate solutions to problems with a large number of samples in ultra-high dimensional spaces.

Riemannian Tensor Completion with Side Information

no code implementations • 12 Nov 2016 • Tengfei Zhou, Hui Qian, Zebang Shen, Congfu Xu

By restricting the iterates to a nonlinear manifold, recently proposed Riemannian optimization methods have proved both efficient and effective for low-rank tensor completion problems.

Riemannian optimization

Kinetic Energy Plus Penalty Functions for Sparse Estimation

no code implementations • 22 Jul 2013 • Zhihua Zhang, Shibo Zhao, Zebang Shen, Shuchang Zhou

In this paper we propose and study a family of sparsity-inducing penalty functions.
