Search Results for author: Sai Praneeth Karimireddy

Found 41 papers, 20 papers with code

A Differentially Private Kaplan-Meier Estimator for Privacy-Preserving Survival Analysis

no code implementations 6 Dec 2024 Narasimha Raghavan Veeraragavan, Sai Praneeth Karimireddy, Jan Franz Nygård

This paper presents a differentially private approach to Kaplan-Meier estimation that achieves accurate survival probability estimates while safeguarding individual privacy.
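
The paper's exact mechanism is not reproduced here; the sketch below is a minimal illustration of one way to privatize a Kaplan-Meier curve, assuming (purely for illustration) that the per-interval event counts have sensitivity 1 and that the at-risk counts may be released as-is. The function name `dp_kaplan_meier`, the time `grid`, and the noise calibration are hypothetical.

```python
# Illustrative sketch only, not the paper's estimator: perturb per-interval
# event counts with the Laplace mechanism and rebuild the survival curve.
import numpy as np

def dp_kaplan_meier(times, events, grid, epsilon, rng=None):
    """times: follow-up times; events: 1 = event observed, 0 = censored."""
    rng = np.random.default_rng(0) if rng is None else rng
    times, events = np.asarray(times, float), np.asarray(events, int)
    deaths = np.array([np.sum((times >= lo) & (times < hi) & (events == 1))
                       for lo, hi in zip(grid[:-1], grid[1:])], dtype=float)
    at_risk = np.array([np.sum(times >= lo) for lo in grid[:-1]], dtype=float)
    noisy = deaths + rng.laplace(scale=1.0 / epsilon, size=deaths.shape)
    noisy = np.clip(noisy, 0.0, at_risk)         # post-processing keeps counts valid
    hazard = noisy / np.maximum(at_risk, 1.0)
    return np.cumprod(1.0 - hazard)              # noisy survival curve on the grid
```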

Defection-Free Collaboration between Competitors in a Learning System

no code implementations 22 Jun 2024 Mariel Werner, Sai Praneeth Karimireddy, Michael I. Jordan

We first examine a fully collaborative scheme in which both firms share their models with each other and show that this leads to a market collapse with the revenues of both firms going to zero.

Collaborative Heterogeneous Causal Inference Beyond Meta-analysis

no code implementations 24 Apr 2024 Tianyu Guo, Sai Praneeth Karimireddy, Michael I. Jordan

Instead of adjusting the distribution shift separately, we use weighted propensity score models to collaboratively adjust for the distribution shift.
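
As background for propensity-score weighting (the paper's collaborative, multi-site weighting scheme is more involved than this), a minimal single-site inverse-propensity-weighted ATE estimate might look as follows; `ipw_ate` is a hypothetical helper.

```python
# Generic single-site IPW sketch for context; the paper's collaborative
# adjustment across heterogeneous sites is not shown here.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(X, treatment, outcome):
    t, y = np.asarray(treatment), np.asarray(outcome)
    e = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
    e = np.clip(e, 1e-3, 1 - 1e-3)                     # avoid extreme weights
    return np.mean(t * y / e - (1 - t) * y / (1 - e))  # Horvitz-Thompson ATE
```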

Causal Inference · Density Estimation · +1

Scaff-PD: Communication Efficient Fair and Robust Federated Learning

no code implementations 25 Jul 2023 Yaodong Yu, Sai Praneeth Karimireddy, Yi Ma, Michael I. Jordan

We present Scaff-PD, a fast and communication-efficient algorithm for distributionally robust federated learning.

Fairness · Federated Learning

Provably Personalized and Robust Federated Learning

1 code implementation 14 Jun 2023 Mariel Werner, Lie He, Michael Jordan, Martin Jaggi, Sai Praneeth Karimireddy

Identifying clients with similar objectives and learning a model-per-cluster is an intuitive and interpretable approach to personalization in federated learning.

Clustering · Personalized Federated Learning · +1

Evaluating and Incentivizing Diverse Data Contributions in Collaborative Learning

no code implementations 8 Jun 2023 Baihe Huang, Sai Praneeth Karimireddy, Michael I. Jordan

This creates a tension between the principal (the FL platform designer), who cares about global performance, and the agents (the data collectors), who care about local performance.

Diversity · Federated Learning

Federated Conformal Predictors for Distributed Uncertainty Quantification

1 code implementation 27 May 2023 Charles Lu, Yaodong Yu, Sai Praneeth Karimireddy, Michael I. Jordan, Ramesh Raskar

Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning since it can be easily applied as a post-processing step to already trained models.
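
For context, the standard centralized split-conformal recipe that this post-processing view refers to looks roughly as follows; the paper's federated procedure additionally handles calibration data spread across clients. Names and the absolute-residual score below are illustrative.

```python
# Standard split conformal prediction as post-processing on a fitted model;
# the federated variant in the paper distributes the calibration step.
import numpy as np

def conformal_interval(predict, X_cal, y_cal, X_test, alpha=0.1):
    """Wrap a fitted regressor `predict` with (1 - alpha) prediction intervals."""
    scores = np.abs(y_cal - predict(X_cal))                   # nonconformity scores
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")           # calibrated quantile
    preds = predict(X_test)
    return preds - q, preds + q                               # lower, upper bounds
```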

Conformal Prediction · Federated Learning · +1

Online Learning in a Creator Economy

no code implementations 19 May 2023 Banghua Zhu, Sai Praneeth Karimireddy, Jiantao Jiao, Michael I. Jordan

In this paper, we initiate the study of online learning in the creator economy by modeling it as a three-party game between users, the platform, and content creators, where the platform interacts with content creators through contracts under a principal-agent model to encourage better content.

Recommendation Systems

FedEBA+: Towards Fair and Effective Federated Learning via Entropy-Based Model

no code implementations 29 Jan 2023 Lin Wang, Zhichao Wang, Sai Praneeth Karimireddy, Xiaoying Tang

Ensuring fairness is a crucial aspect of Federated Learning (FL), which enables the model to perform consistently across all clients.

Fairness · Federated Learning

TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels

1 code implementation 13 Jul 2022 Yaodong Yu, Alexander Wei, Sai Praneeth Karimireddy, Yi Ma, Michael I. Jordan

Leveraging this observation, we propose a Train-Convexify-Train (TCT) procedure to sidestep this issue: first, learn features using off-the-shelf methods (e.g., FedAvg); then, optimize a convexified problem obtained from the network's empirical neural tangent kernel approximation.
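
A simplified stand-in for the two-stage idea (illustration only, with hypothetical names): stage 1 is assumed to have produced a frozen feature map `phi` via FedAvg, and stage 2 fits a convex linear head by federated gradient descent. The paper instead convexifies by linearizing the full network with its empirical neural tangent kernel.

```python
# Stage 2 sketch under a frozen feature map phi; the squared loss keeps the
# problem convex, so federated gradient steps do not suffer from client drift.
import numpy as np

def fit_convex_head(phi, clients, dim_out, rounds=100, lr=0.1):
    """clients: list of (X_k, Y_k) arrays; phi: frozen feature extractor."""
    feats = [(phi(X), Y) for X, Y in clients]
    W = np.zeros((feats[0][0].shape[1], dim_out))
    for _ in range(rounds):
        grads = [F.T @ (F @ W - Y) / len(F) for F, Y in feats]  # local gradients
        W -= lr * np.mean(grads, axis=0)                         # server averages
    return W
```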

Federated Learning

Mechanisms that Incentivize Data Sharing in Federated Learning

no code implementations 10 Jul 2022 Sai Praneeth Karimireddy, Wenshuo Guo, Michael I. Jordan

Federated learning is typically considered a beneficial technology that allows multiple agents to collaborate with each other, improve the accuracy of their models, and solve problems that are otherwise too data-intensive or expensive to be solved individually.

Federated Learning

Optimization with Access to Auxiliary Information

1 code implementation 1 Jun 2022 El Mahdi Chayti, Sai Praneeth Karimireddy

We investigate the fundamental optimization question of minimizing a target function $f$, whose gradients are expensive to compute or have limited availability, given access to some auxiliary side function $h$ whose gradients are cheap or more available.
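
One natural way to exploit such a surrogate, not necessarily the estimator analyzed in the paper: take most steps with the cheap gradient of $h$ plus a bias correction that is refreshed with the expensive gradient of $f$ only occasionally. The names and the refresh schedule below are illustrative.

```python
# Cheap surrogate steps with an occasionally refreshed bias correction.
import numpy as np

def optimize_with_auxiliary(grad_f, grad_h, x0, lr=0.01, steps=1000, refresh=50):
    x = np.asarray(x0, dtype=float).copy()
    for t in range(steps):
        if t % refresh == 0:                    # expensive gradient, computed rarely
            correction = grad_f(x) - grad_h(x)
        x -= lr * (grad_h(x) + correction)      # cheap gradient + stale correction
    return x
```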

Federated Learning · Transfer Learning

Agree to Disagree: Diversity through Disagreement for Better Transferability

1 code implementation 9 Feb 2022 Matteo Pagliardini, Martin Jaggi, François Fleuret, Sai Praneeth Karimireddy

This behavior can hinder the transferability of trained models by (i) favoring the learning of simpler but spurious features -- present in the training data but absent from the test data -- and (ii) leveraging only a small subset of predictive features.

Diversity · Out of Distribution (OOD) Detection

Byzantine-Robust Decentralized Learning via ClippedGossip

1 code implementation 3 Feb 2022 Lie He, Sai Praneeth Karimireddy, Martin Jaggi

In this paper, we study the challenging task of Byzantine-robust decentralized training on arbitrary communication graphs.
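
Schematically, the clipped-gossip idea is that each node moves toward its neighbors but caps how far any single neighbor can pull it. The sketch below shows one such averaging round; gradient steps and the choice of the clipping radius `tau` are omitted, and the names are illustrative.

```python
# One gossip averaging round with clipped neighbor differences.
import numpy as np

def clip(v, tau):
    norm = np.linalg.norm(v)
    return v if norm <= tau else v * (tau / norm)

def clipped_gossip_step(x, weights, tau):
    """x: (n_nodes, dim) parameters; weights: (n_nodes, n_nodes) mixing matrix."""
    new_x = x.copy()
    for i in range(len(x)):
        for j in range(len(x)):
            if j != i and weights[i, j] > 0:
                new_x[i] += weights[i, j] * clip(x[j] - x[i], tau)
    return new_x
```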

Federated Learning

Breaking the centralized barrier for cross-device federated learning

no code implementations NeurIPS 2021 Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank Reddi, Sebastian U. Stich, Ananda Theertha Suresh

Federated learning (FL) is a challenging setting for optimization due to the heterogeneity of the data across different clients which gives rise to the client drift phenomenon.

Federated Learning

Linear Speedup in Personalized Collaborative Learning

1 code implementation 10 Nov 2021 El Mahdi Chayti, Sai Praneeth Karimireddy, Sebastian U. Stich, Nicolas Flammarion, Martin Jaggi

Collaborative training can improve the accuracy of a model for a user by trading off the model's bias (introduced by using data from other users who are potentially different) against its variance (due to the limited amount of data on any single user).

Federated Learning · Stochastic Optimization

Towards Model Agnostic Federated Learning Using Knowledge Distillation

no code implementations ICLR 2022 Andrei Afonin, Sai Praneeth Karimireddy

Is it possible to design a universal API for federated learning with which an ad-hoc group of data holders (agents) can collaborate and perform federated learning?

Federated Learning · Knowledge Distillation

RelaySum for Decentralized Deep Learning on Heterogeneous Data

1 code implementation NeurIPS 2021 Thijs Vogels, Lie He, Anastasia Koloskova, Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi

A key challenge, primarily in decentralized deep learning, remains the handling of differences between the workers' local data distributions.

Deep Learning

Quasi-Global Momentum: Accelerating Decentralized Deep Learning on Heterogeneous Data

1 code implementation 9 Feb 2021 Tao Lin, Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi

In this paper, we investigate and identify the limitations of several decentralized optimization algorithms under different degrees of data heterogeneity.

Deep Learning

Learning from History for Byzantine Robust Optimization

1 code implementation 18 Dec 2020 Sai Praneeth Karimireddy, Lie He, Martin Jaggi

Secondly, we prove that even if the aggregation rules may succeed in limiting the influence of the attackers in a single round, the attackers can couple their attacks across time, eventually leading to divergence.
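
A simplified sketch of the resulting recipe (worker momentum plus an iteratively re-centered clipping aggregator); the radii, iteration counts, and names are illustrative rather than the paper's exact settings.

```python
# Workers send momentum instead of raw gradients; the server aggregates by
# clipping deviations around its previous estimate.
import numpy as np

def centered_clip(msgs, v, tau, iters=3):
    for _ in range(iters):
        clipped = [d if np.linalg.norm(d) <= tau else d * tau / np.linalg.norm(d)
                   for d in (m - v for m in msgs)]
        v = v + np.mean(clipped, axis=0)
    return v

def robust_step(x, v, momenta, grads, beta=0.9, lr=0.1, tau=1.0):
    momenta = [(1 - beta) * g + beta * m for g, m in zip(grads, momenta)]
    v = centered_clip(momenta, v, tau)          # robust aggregate of momenta
    return x - lr * v, v, momenta
```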

Federated Learning · Stochastic Optimization

Practical Low-Rank Communication Compression in Decentralized Deep Learning

1 code implementation NeurIPS 2020 Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi

Lossy gradient compression has become a practical tool to overcome the communication bottleneck in centrally coordinated distributed training of machine learning models.

Deep Learning

Byzantine-Robust Learning on Heterogeneous Datasets via Resampling

no code implementations 28 Sep 2020 Lie He, Sai Praneeth Karimireddy, Martin Jaggi

In Byzantine-robust distributed optimization, a central server wants to train a machine learning model over data distributed across multiple workers.

Distributed Optimization

Mime: Mimicking Centralized Stochastic Algorithms in Federated Learning

1 code implementation 8 Aug 2020 Sai Praneeth Karimireddy, Martin Jaggi, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, Ananda Theertha Suresh

Federated learning (FL) is a challenging setting for optimization due to the heterogeneity of the data across different clients which gives rise to the client drift phenomenon.

Federated Learning

PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning

1 code implementation 4 Aug 2020 Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi

Lossy gradient compression has become a practical tool to overcome the communication bottleneck in centrally coordinated distributed training of machine learning models.

Deep Learning

Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing

1 code implementation ICLR 2022 Sai Praneeth Karimireddy, Lie He, Martin Jaggi

In Byzantine robust distributed or federated learning, a central server wants to train a machine learning model over data distributed across multiple workers.

Distributed Optimization · Federated Learning

Secure Byzantine-Robust Machine Learning

no code implementations 8 Jun 2020 Lie He, Sai Praneeth Karimireddy, Martin Jaggi

Increasingly, machine learning systems are being deployed to edge servers and devices (e.g., mobile phones) and trained in a collaborative manner.

BIG-bench Machine Learning

Why are Adaptive Methods Good for Attention Models?

no code implementations NeurIPS 2020 Jingzhao Zhang, Sai Praneeth Karimireddy, Andreas Veit, Seungyeon Kim, Sashank J. Reddi, Sanjiv Kumar, Suvrit Sra

While stochastic gradient descent (SGD) is still the de facto algorithm in deep learning, adaptive methods like Clipped SGD/Adam have been observed to outperform SGD across important tasks, such as attention models.
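
For reference, the clipped update being compared against plain SGD is just a globally rescaled gradient step; the threshold below is chosen purely for illustration.

```python
# Globally clipped SGD step.
import numpy as np

def clipped_sgd_step(x, grad, lr=0.1, clip_norm=1.0):
    g = np.asarray(grad, dtype=float)
    norm = np.linalg.norm(g)
    if norm > clip_norm:
        g = g * (clip_norm / norm)   # rescale heavy-tailed gradients
    return x - lr * g
```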

SCAFFOLD: Stochastic Controlled Averaging for Federated Learning

7 code implementations ICML 2020 Sai Praneeth Karimireddy, Satyen Kale, Mehryar Mohri, Sashank J. Reddi, Sebastian U. Stich, Ananda Theertha Suresh

We obtain tight convergence rates for FedAvg and prove that it suffers from 'client drift' when the data is heterogeneous (non-iid), resulting in unstable and slow convergence.
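
A simplified, full-participation sketch of the control-variate correction at the heart of SCAFFOLD; client sampling, the server step size, and other details from the paper are omitted, and the names are illustrative.

```python
# Each client corrects its local gradient with control variates c_i and c,
# which estimate the client's and the server's update directions.
import numpy as np

def scaffold_client(x, c, c_i, grad_i, local_steps=10, lr=0.1):
    y = x.copy()
    for _ in range(local_steps):
        y -= lr * (grad_i(y) - c_i + c)               # drift-corrected local step
    c_i_new = c_i - c + (x - y) / (local_steps * lr)  # refreshed control variate
    return y - x, c_i_new - c_i, c_i_new

def scaffold_round(x, c, c_list, grad_list, **kw):
    deltas = [scaffold_client(x, c, ci, gi, **kw) for ci, gi in zip(c_list, grad_list)]
    x = x + np.mean([d[0] for d in deltas], axis=0)   # average model updates
    c = c + np.mean([d[1] for d in deltas], axis=0)   # average control updates
    return x, c, [d[2] for d in deltas]
```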

Distributed Optimization · Federated Learning

Why ADAM Beats SGD for Attention Models

no code implementations 25 Sep 2019 Jingzhao Zhang, Sai Praneeth Karimireddy, Andreas Veit, Seungyeon Kim, Sashank J Reddi, Sanjiv Kumar, Suvrit Sra

While stochastic gradient descent (SGD) is still the de facto algorithm in deep learning, adaptive methods like Adam have been observed to outperform SGD across important tasks, such as attention models.

The Error-Feedback Framework: Better Rates for SGD with Delayed Gradients and Compressed Communication

no code implementations 11 Sep 2019 Sebastian U. Stich, Sai Praneeth Karimireddy

We analyze (stochastic) gradient descent (SGD) with delayed updates on smooth quasi-convex and non-convex functions and derive concise, non-asymptotic, convergence rates.
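
For context, the error-feedback mechanism the framework builds on keeps a local memory of whatever part of the update was lost to compression and adds it back at the next step; the same bookkeeping covers delayed updates. Top-k compression below is just one example, and the names are illustrative.

```python
# Error-feedback SGD step with a generic lossy compressor (top-k here).
import numpy as np

def top_k(v, k):
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd_step(x, grad, memory, lr=0.1, k=10):
    p = lr * grad + memory                 # add back previously dropped error
    update = top_k(p, k)                   # lossy compression of the update
    memory = p - update                    # remember what was not transmitted
    return x - update, memory
```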

Amplifying Rényi Differential Privacy via Shuffling

no code implementations 11 Jul 2019 Eloïse Berthier, Sai Praneeth Karimireddy

Differential privacy is a useful tool to build machine learning models which do not release too much information about the training data.

BIG-bench Machine Learning

PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization

1 code implementation NeurIPS 2019 Thijs Vogels, Sai Praneeth Karimireddy, Martin Jaggi

We study gradient compression methods to alleviate the communication bottleneck in data-parallel distributed optimization.
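
The core of PowerSGD's rank-r compressor is a single warm-started power-iteration step per layer: workers communicate two thin factors instead of the full gradient matrix. Error feedback, which the method also uses, is omitted in this sketch.

```python
# One power-iteration step yielding a rank-r approximation M ~ P @ Q.T;
# only the thin factors P and Q need to be all-reduced across workers.
import numpy as np

def powersgd_compress(M, Q):
    P = M @ Q                                # (n x r) left factor
    P, _ = np.linalg.qr(P)                   # orthonormalize columns
    return P, M.T @ P                        # new (m x r) right factor, warm-started

def powersgd_decompress(P, Q):
    return P @ Q.T                           # rank-r approximation of M
```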

Distributed Optimization

Accelerating Gradient Boosting Machine

1 code implementation 20 Mar 2019 Haihao Lu, Sai Praneeth Karimireddy, Natalia Ponomareva, Vahab Mirrokni

This is the first GBM-type algorithm with a theoretically justified accelerated convergence rate.

Efficient Greedy Coordinate Descent for Composite Problems

no code implementations 16 Oct 2018 Sai Praneeth Karimireddy, Anastasia Koloskova, Sebastian U. Stich, Martin Jaggi

For these problems, we provide the first linear rates of convergence independent of $n$ and show that our greedy update rule provides speedups similar to those obtained in the smooth case.

Global linear convergence of Newton's method without strong-convexity or Lipschitz gradients

no code implementations 1 Jun 2018 Sai Praneeth Karimireddy, Sebastian U. Stich, Martin Jaggi

We show that Newton's method converges globally at a linear rate for objective functions whose Hessians are stable.

regression

On Matching Pursuit and Coordinate Descent

no code implementations ICML 2018 Francesco Locatello, Anant Raj, Sai Praneeth Karimireddy, Gunnar Rätsch, Bernhard Schölkopf, Sebastian U. Stich, Martin Jaggi

Exploiting the connection between the two algorithms, we present a unified analysis of both, providing affine invariant sublinear $\mathcal{O}(1/t)$ rates on smooth objectives and linear convergence on strongly convex objectives.
