Search Results for author: Antonious M. Girgis

Found 9 papers, 1 paper with code

Learning from straggler clients in federated learning

no code implementations · 14 Mar 2024 · Andrew Hard, Antonious M. Girgis, Ehsan Amid, Sean Augenstein, Lara McConnaughey, Rajiv Mathews, Rohan Anil

How well do existing federated learning algorithms learn from client devices that return model updates with a significant time delay?

Federated Learning
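
To ground the question above, here is a minimal sketch of one common baseline for handling delayed client updates: down-weighting them by staleness at the server. This is an illustrative heuristic with assumed names and an assumed weighting function, not the algorithm studied in the paper.

```python
import numpy as np

def aggregate_with_staleness(global_model, client_updates, current_round, alpha=0.5):
    """Average client updates, down-weighting stale ones.

    client_updates: list of (delta, round_sent) pairs, where `delta` is a
    client's model update (np.ndarray) and `round_sent` is the round the
    client started training. Staleness = current_round - round_sent.
    `alpha` controls how fast stale updates are discounted (assumed form).
    """
    weights, deltas = [], []
    for delta, round_sent in client_updates:
        staleness = current_round - round_sent
        # Polynomial staleness discount: weight = (1 + staleness)^(-alpha).
        weights.append((1.0 + staleness) ** (-alpha))
        deltas.append(delta)
    weights = np.array(weights)
    weights /= weights.sum()
    # Apply the staleness-weighted average of the updates to the global model.
    update = sum(w * d for w, d in zip(weights, deltas))
    return global_model + update
```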

Multi-Message Shuffled Privacy in Federated Learning

no code implementations · 22 Feb 2023 · Antonious M. Girgis, Suhas Diggavi

This also resolves an open question on the optimal trade-off for private vector sum in the MMS model.

Distributed Optimization · Federated Learning · +1
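
In the multi-message shuffled (MMS) model referenced above, each client may send several messages through the shuffler rather than one. A standard illustration of why that helps (not the paper's protocol; all names and parameters here are illustrative) is additive secret sharing for private summation: each client splits its value into random shares that sum to it modulo a prime, and the shuffler mixes all shares so the server learns only the total.

```python
import random

P = 2**61 - 1  # a large prime modulus (assumed parameter)

def split_into_shares(x, k):
    """Split x (in [0, P)) into k messages that sum to x mod P."""
    shares = [random.randrange(P) for _ in range(k - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def mms_sum(client_values, k=3):
    """Each client sends k shares; the shuffler permutes all messages;
    the server adds everything mod P, recovering the exact sum."""
    messages = [s for x in client_values for s in split_into_shares(x, k)]
    random.shuffle(messages)   # the shuffler breaks message-sender links
    return sum(messages) % P   # server-side aggregation
```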

Differentially Private Stochastic Linear Bandits: (Almost) for Free

no code implementations · 7 Jul 2022 · Osama A. Hanna, Antonious M. Girgis, Christina Fragouli, Suhas Diggavi

In the shuffled model, we also achieve regret of $\tilde{O}(\sqrt{T}+\frac{1}{\epsilon})$ as in the central case, while the best previously known algorithm suffers a regret of $\tilde{O}(\frac{1}{\epsilon}T^{3/5})$.
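
To make the stated improvement concrete, the sketch below evaluates the two regret rates from the excerpt, $\sqrt{T}+\frac{1}{\epsilon}$ for the proposed algorithm versus $\frac{1}{\epsilon}T^{3/5}$ for the prior one. Constants and log factors hidden by the $\tilde{O}$ notation are ignored, so the numbers only illustrate the asymptotic gap.

```python
import math

def proposed_rate(T, eps):
    # ~O(sqrt(T) + 1/eps), ignoring hidden constants and log factors.
    return math.sqrt(T) + 1.0 / eps

def prior_rate(T, eps):
    # ~O((1/eps) * T^{3/5}), the previously best known rate.
    return (1.0 / eps) * T ** 0.6

for T in (10**4, 10**6, 10**8):
    eps = 1.0
    print(f"T={T:>9}: proposed ~{proposed_rate(T, eps):.0f}, "
          f"prior ~{prior_rate(T, eps):.0f}")
```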

A Generative Framework for Personalized Learning and Estimation: Theory, Algorithms, and Privacy

no code implementations · 5 Jul 2022 · Kaan Ozkara, Antonious M. Girgis, Deepesh Data, Suhas Diggavi

In this work, we begin with a generative framework that could unify several different algorithms as well as suggest new ones.

Federated Learning · Knowledge Distillation

Renyi Differential Privacy of the Subsampled Shuffle Model in Distributed Learning

no code implementations · NeurIPS 2021 · Antonious M. Girgis, Deepesh Data, Suhas Diggavi

We study privacy in a distributed learning framework, where clients collaboratively build a learning model iteratively through interactions with a server from whom we need privacy.

Federated Learning · Stochastic Optimization
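
The framework in the excerpt can be pictured as rounds of the following shape: the server subsamples a set of clients, each sampled client reports through a local randomizer, and a shuffler permutes the reports before they reach the server. The sketch below is a schematic of one such round under assumed names; a concrete local randomizer is sketched after the next entry, and the RDP amplification analysis from the paper is not reproduced here.

```python
import random

def subsampled_shuffle_round(clients, local_randomizer, sample_rate=0.1):
    """One schematic round of a subsampled-shuffle mechanism.

    clients: list of per-client inputs for this round.
    local_randomizer: any local privacy mechanism applied client-side.
    The server only ever sees the shuffled reports of a random subsample,
    which is what amplifies the local privacy guarantee."""
    sampled = [c for c in clients if random.random() < sample_rate]
    reports = [local_randomizer(c) for c in sampled]
    random.shuffle(reports)  # the shuffler hides who sent what
    return reports           # the server aggregates these to update the model
```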

On the Renyi Differential Privacy of the Shuffle Model

no code implementations · 11 May 2021 · Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Ananda Theertha Suresh, Peter Kairouz

The central question studied in this paper is the Renyi Differential Privacy (RDP) guarantee for general discrete local mechanisms in the shuffle privacy model.
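
Binary randomized response is the simplest discrete local mechanism of the kind the paper analyzes; plugged into a shuffler, it lets a server estimate a sum without linking any report to its sender. A minimal sketch follows, with assumed parameter names; the paper's RDP bounds are not reproduced.

```python
import math
import random

def randomized_response(bit, eps_local):
    """Binary randomized response: keep the true bit w.p. e^eps/(e^eps + 1)."""
    p_keep = math.exp(eps_local) / (math.exp(eps_local) + 1.0)
    return bit if random.random() < p_keep else 1 - bit

def shuffle_and_report(client_bits, eps_local):
    """Each client randomizes locally; the shuffler permutes the reports."""
    reports = [randomized_response(b, eps_local) for b in client_bits]
    random.shuffle(reports)  # the shuffler breaks the message-sender link
    return reports

def estimate_sum(reports, eps_local):
    """Server-side unbiased estimate of the sum of true bits.
    E[report] = (1 - p) + bit * (2p - 1), so invert that affine map."""
    p = math.exp(eps_local) / (math.exp(eps_local) + 1.0)
    n = len(reports)
    return (sum(reports) - n * (1 - p)) / (2 * p - 1)
```

A function like `randomized_response` here could serve as the `local_randomizer` in the round sketch after the previous entry.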

Shuffled Model of Federated Learning: Privacy, Communication and Accuracy Trade-offs

no code implementations · 17 Aug 2020 · Antonious M. Girgis, Deepesh Data, Suhas Diggavi, Peter Kairouz, Ananda Theertha Suresh

We consider a distributed empirical risk minimization (ERM) optimization problem with communication efficiency and privacy requirements, motivated by the federated learning (FL) framework.

Federated Learning
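
One way to picture the ERM setup in the excerpt is a round-based private SGD loop: clients clip and randomize their gradients locally, and the server averages what it receives. The sketch below is a generic such round with an assumed clip norm, noise mechanism, and learning rate, not the specific communication-efficient scheme analyzed in the paper.

```python
import numpy as np

def private_round(global_w, client_grads, clip=1.0, sigma=1.0, lr=0.1, rng=None):
    """One illustrative private round: clip each client gradient, add
    Gaussian noise locally, then average at the server. `clip`, `sigma`,
    and `lr` are assumed hyperparameters."""
    rng = rng or np.random.default_rng()
    noisy = []
    for g in client_grads:
        norm = np.linalg.norm(g)
        g = g * min(1.0, clip / max(norm, 1e-12))  # clip to bounded norm
        noisy.append(g + rng.normal(0.0, sigma * clip, size=g.shape))
    # Server step on the average of the noisy, clipped gradients.
    return global_w - lr * np.mean(noisy, axis=0)
```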

Successive Refinement of Privacy

no code implementations24 May 2020 Antonious M. Girgis, Deepesh Data, Kamalika Chaudhuri, Christina Fragouli, Suhas Diggavi

This work examines a novel question: how much randomness is needed to achieve local differential privacy (LDP)?
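
The question in the excerpt can be made concrete by counting coin flips: any local randomizer ultimately consumes random bits. The sketch below implements the biased coin inside binary randomized response from fair coin flips (via binary-expansion sampling, expected at most 2 flips per sample) and reports how many were used. This is illustrative only and does not reproduce the paper's bounds.

```python
import math
import random

def biased_bit(p):
    """Sample Bernoulli(p) from fair coin flips by refining the binary
    expansion of a uniform U in [0, 1) until U < p or U >= p is decided.
    Returns (bit, flips_used); expected flips is at most 2."""
    flips = 0
    lo, hi = 0.0, 1.0
    while True:
        flips += 1
        mid = (lo + hi) / 2
        if random.getrandbits(1):
            lo = mid  # U landed in the upper half
        else:
            hi = mid  # U landed in the lower half
        if hi <= p:
            return 1, flips
        if lo >= p:
            return 0, flips

def rr_report(bit, eps):
    """eps-LDP randomized response; also reports fair flips consumed."""
    p_keep = math.exp(eps) / (math.exp(eps) + 1.0)
    keep, flips = biased_bit(p_keep)
    return (bit if keep else 1 - bit), flips
```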
