Search Results for author: Mike Rabbat

Found 11 papers, 4 papers with code

DP-RDM: Adapting Diffusion Models to Private Domains Without Fine-Tuning

no code implementations • 21 Mar 2024 • Jonathan Lebensold, Maziar Sanjabi, Pietro Astolfi, Adriana Romero-Soriano, Kamalika Chaudhuri, Mike Rabbat, Chuan Guo

Text-to-image diffusion models have been shown to suffer from sample-level memorization, possibly reproducing near-perfect replicas of the images they are trained on, which may be undesirable.

Memorization • Retrieval

Privacy-Aware Compression for Federated Learning Through Numerical Mechanism Design

1 code implementation • 8 Nov 2022 • Chuan Guo, Kamalika Chaudhuri, Pierre Stock, Mike Rabbat

In private federated learning (FL), a server aggregates differentially private updates from a large number of clients in order to train a machine learning model.
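
The excerpt above describes the aggregation step that the paper's compression mechanisms plug into. As a minimal sketch of that generic step, assuming a DP-FedAvg-style aggregator (clip, sum, add Gaussian noise, average) rather than the paper's own mechanism:

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip each client update to clip_norm, sum, add Gaussian noise, average.

    Generic DP aggregation for illustration only; the paper's contribution is
    designing compression mechanisms for these private updates, not this step.
    """
    rng = rng or np.random.default_rng()
    clipped = [
        u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
        for u in client_updates
    ]
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the clipping norm yields (eps, delta)-DP
    # for a suitably chosen noise_multiplier.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)
```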

Federated Learning

Towards Fair Federated Recommendation Learning: Characterizing the Inter-Dependence of System and Data Heterogeneity

no code implementations • 30 May 2022 • Kiwan Maeng, Haiyu Lu, Luca Melis, John Nguyen, Mike Rabbat, Carole-Jean Wu

Federated learning (FL) is an effective mechanism for preserving data privacy in recommender systems, as it runs machine learning model training on-device.

Fairness • Federated Learning +2

Privacy-Aware Compression for Federated Data Analysis

1 code implementation • 15 Mar 2022 • Kamalika Chaudhuri, Chuan Guo, Mike Rabbat

Federated data analytics is a framework for distributed data analysis where a server compiles noisy responses from a group of distributed low-bandwidth user devices to estimate aggregate statistics.
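
As a toy sketch of the setting described above, assuming a naive baseline in which each device adds local noise and then quantizes its report to fit a low-bandwidth uplink (the paper designs the noise and quantization jointly rather than stacking them like this; all names are illustrative):

```python
import numpy as np

def client_report(value, epsilon, num_bits=2, lo=0.0, hi=1.0, rng=None):
    """Toy client-side step: add Laplace noise for local privacy, then
    quantize to num_bits so the report fits a low-bandwidth uplink.

    Naively composing noise and quantization like this is the baseline the
    paper improves on; it is not the paper's mechanism.
    """
    rng = rng or np.random.default_rng()
    noisy = value + rng.laplace(0.0, (hi - lo) / epsilon)
    levels = 2 ** num_bits - 1
    return int(np.clip(round((noisy - lo) / (hi - lo) * levels), 0, levels))

def server_mean(reports, num_bits=2, lo=0.0, hi=1.0):
    """Server-side step: decode the quantized reports and average them
    to estimate the aggregate statistic (here, the population mean)."""
    levels = 2 ** num_bits - 1
    return sum(lo + (hi - lo) * r / levels for r in reports) / len(reports)
```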

Federated Learning

Papaya: Practical, Private, and Scalable Federated Learning

no code implementations • 8 Nov 2021 • Dzmitry Huba, John Nguyen, Kshitiz Malik, Ruiyu Zhu, Mike Rabbat, Ashkan Yousefpour, Carole-Jean Wu, Hongyuan Zhan, Pavel Ustinov, Harish Srinivas, Kaikai Wang, Anthony Shoumikhin, Jesik Min, Mani Malek

Our work tackles the aforementioned issues, sketches some of the system design challenges and their solutions, and touches upon principles that emerged from building a production FL system for millions of clients.

Federated Learning

Trade-offs of Local SGD at Scale: An Empirical Study

no code implementations • 15 Oct 2021 • Jose Javier Gonzalez Ortiz, Jonathan Frankle, Mike Rabbat, Ari Morcos, Nicolas Ballas

As datasets and models become increasingly large, distributed training has become a necessary component to allow deep neural networks to train in reasonable amounts of time.
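
For context, here is a minimal sketch of the local SGD scheme whose trade-offs the paper studies empirically, assuming a hypothetical `worker.grad` interface for per-shard stochastic gradients:

```python
import numpy as np

def local_sgd(workers, init_params, rounds=100, local_steps=8, lr=0.1):
    """Generic local SGD: each worker takes `local_steps` SGD steps on its own
    data shard, then all workers synchronize by averaging parameters.

    `worker.grad(p)` is a hypothetical interface returning a stochastic
    gradient on that worker's shard; the paper studies how `local_steps`
    trades accuracy against communication cost at scale.
    """
    params = init_params.copy()
    for _ in range(rounds):
        local_params = []
        for worker in workers:
            p = params.copy()
            for _ in range(local_steps):
                p = p - lr * worker.grad(p)
            local_params.append(p)
        params = np.mean(local_params, axis=0)  # the only communication step
    return params
```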

Image Classification

Learning with Gradient Descent and Weakly Convex Losses

no code implementations • 13 Jan 2021 • Dominic Richards, Mike Rabbat

Out-of-sample guarantees are then achieved by decomposing the test error into generalisation, optimisation and approximation errors, each of which can be bounded and traded off with respect to algorithmic parameters, sample size and magnitude of this eigenvalue.
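
A hedged rendering of that decomposition in generic notation (the symbols are illustrative, not the paper's):

```latex
% Generic out-of-sample decomposition; notation is illustrative, not the paper's.
% R = population risk, \widehat{R} = empirical risk, f_t = gradient descent output,
% \widehat{f} = empirical risk minimiser, f^\star = best predictor in the class.
R(f_t) - R(f^\star)
  = \underbrace{R(f_t) - \widehat{R}(f_t)}_{\text{generalisation}}
  + \underbrace{\widehat{R}(f_t) - \widehat{R}(\widehat{f})}_{\text{optimisation}}
  + \underbrace{\widehat{R}(\widehat{f}) - R(f^\star)}_{\text{approximation}}
```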

CPR: Understanding and Improving Failure Tolerant Training for Deep Learning Recommendation with Partial Recovery

no code implementations • 5 Nov 2020 • Kiwan Maeng, Shivam Bharuka, Isabel Gao, Mark C. Jeffrey, Vikram Saraph, Bor-Yiing Su, Caroline Trippel, Jiyan Yang, Mike Rabbat, Brandon Lucia, Carole-Jean Wu

To the extent of our knowledge, this paper is the first to perform a data-driven, in-depth analysis of applying partial recovery to recommendation models, and it identifies a trade-off between accuracy and performance.
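
As a toy sketch of the partial-recovery idea, assuming sharded parameters (e.g., embedding tables) keyed by id; all names here are hypothetical:

```python
def partial_recover(live_shards, checkpoint_shards, failed_ids):
    """Toy partial-recovery step after a node failure: only the shards lost
    in the failure are restored from the (possibly stale) checkpoint, while
    surviving shards keep their up-to-date values. Restoring stale shards
    can cost accuracy, which is the trade-off the paper characterizes.
    """
    for shard_id in failed_ids:
        live_shards[shard_id] = checkpoint_shards[shard_id].copy()
    return live_shards
```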
