1 code implementation • 5 Jun 2024 • Charlie Hou, Akshat Shrivastava, Hongyuan Zhan, Rylan Conway, Trang Le, Adithya Sagar, Giulia Fanti, Daniel Lazar
Altogether, these results suggest that training on DP synthetic data can be a better option than training a model on-device on private distributed data.
no code implementations • 29 Feb 2024 • Shuqi Ke, Charlie Hou, Giulia Fanti, Sewoong Oh
We provide theoretical insights into the convergence of DP fine-tuning within an overparameterized neural network and establish a utility curve that determines the allocation of privacy budget between linear probing and full fine-tuning.
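As a rough illustration of the linear-probing-then-full-fine-tuning schedule that such a utility curve would govern, here is a minimal PyTorch sketch (not the paper's algorithm): the backbone is frozen for a linear-probing phase, then unfrozen for full fine-tuning, with a simplified batch-level clip-and-noise step standing in for true per-example DP-SGD. The helper `dp_sgd_phase`, the step counts, and the learning rates are illustrative assumptions; in practice the noise scale would be calibrated so each phase consumes its assigned share of the total (epsilon, delta) budget.

```python
from itertools import cycle

import torch
import torch.nn as nn

def dp_sgd_phase(params, compute_loss, loader, steps, lr, clip=1.0, sigma=1.0):
    # Simplified stand-in for DP-SGD: clip the *batch* gradient and add
    # Gaussian noise before each step. Real DP-SGD clips per-example
    # gradients; sigma would be calibrated to this phase's privacy budget.
    opt = torch.optim.SGD(params, lr=lr)
    data = cycle(loader)
    for _ in range(steps):
        x, y = next(data)
        opt.zero_grad()
        compute_loss(x, y).backward()
        norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params)).item()
        for p in params:
            p.grad.mul_(min(1.0, clip / (norm + 1e-12)))
            p.grad.add_(sigma * clip * torch.randn_like(p.grad))
        opt.step()

def linear_probe_then_finetune(backbone, head, loader, lp_steps=200, ft_steps=200):
    loss_fn = nn.CrossEntropyLoss()
    compute_loss = lambda x, y: loss_fn(head(backbone(x)), y)
    # Phase 1: linear probing -- freeze the backbone, train only the head.
    for p in backbone.parameters():
        p.requires_grad = False
    dp_sgd_phase(list(head.parameters()), compute_loss, loader, lp_steps, lr=1e-2)
    # Phase 2: full fine-tuning -- unfreeze and train all parameters.
    for p in backbone.parameters():
        p.requires_grad = True
    dp_sgd_phase(list(backbone.parameters()) + list(head.parameters()),
                 compute_loss, loader, ft_steps, lr=1e-3)
```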
no code implementations • 31 Jul 2023 • Charlie Hou, Kiran Koshy Thekumparampil, Michael Shavlovsky, Giulia Fanti, Yesh Dattatreya, Sujay Sanghavi
On tabular data, a significant body of literature has shown that current deep learning (DL) models perform at best similarly to Gradient Boosted Decision Trees (GBDTs), while significantly underperforming them on outlier data.
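As a toy illustration of that comparison (not the paper's benchmark: this uses synthetic data and scikit-learn defaults rather than the tuned models and real tabular datasets the literature studies), one can fit both model families on the same split:

```python
# Fit a GBDT and a small MLP on the same tabular task, compare held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbdt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

print("GBDT accuracy:", gbdt.score(X_te, y_te))
print("MLP  accuracy:", mlp.score(X_te, y_te))
```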
no code implementations • 17 Feb 2023 • Charlie Hou, Hongyuan Zhan, Akshat Shrivastava, Sid Wang, Aleksandr Livshits, Giulia Fanti, Daniel Lazar
To this end, we propose FreD (Federated Private Fréchet Distance) -- a privately computed distance between a prefinetuning dataset and federated datasets.
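For intuition, here is a minimal non-private sketch of the Fréchet distance between two embedding sets, each modeled as a Gaussian (the same quantity FID uses). FreD's actual contribution, computing this under differential privacy across federated clients, is omitted; the function below is an illustrative assumption, not the paper's implementation.

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    # Model each embedding set as a Gaussian and compute
    # ||mu_a - mu_b||^2 + Tr(S_a + S_b - 2 (S_a S_b)^{1/2}).
    mu_a, mu_b = emb_a.mean(axis=0), emb_b.mean(axis=0)
    cov_a = np.cov(emb_a, rowvar=False)
    cov_b = np.cov(emb_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):   # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```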
no code implementations • ICLR 2022 • Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh
We propose FedChain, an algorithmic framework that combines the strengths of local methods and global methods to achieve fast convergence in terms of the number of communication rounds R, while leveraging the similarity between clients.
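The local-then-global pattern that FedChain formalizes can be sketched on a toy problem. The numpy snippet below is illustrative only (not the paper's algorithm, phase lengths, or step sizes): each client holds a simple quadratic loss, a FedAvg-style local phase makes fast early progress, and a global gradient-descent phase finishes the job under heterogeneity.

```python
import numpy as np

rng = np.random.default_rng(0)
b = rng.normal(size=(10, 5))       # per-client optima: 10 clients, dimension 5
grad = lambda x, i: x - b[i]       # gradient of f_i(x) = 0.5 * ||x - b_i||^2

x = np.zeros(5)

# Phase 1: a local method (FedAvg-style) -- each client takes several
# local steps per round, then the server averages the results.
for _ in range(20):                # communication rounds
    updates = []
    for i in range(len(b)):
        xi = x.copy()
        for _ in range(5):         # local steps per round
            xi -= 0.1 * grad(xi, i)
        updates.append(xi)
    x = np.mean(updates, axis=0)

# Phase 2: a global method (gradient descent on the average loss) --
# more robust to client heterogeneity once close to the optimum.
for _ in range(50):
    x -= 0.1 * np.mean([grad(x, i) for i in range(len(b))], axis=0)

# The average loss is minimized at the mean of the client optima.
print("distance to optimum:", np.linalg.norm(x - b.mean(axis=0)))
```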
no code implementations • 12 Feb 2021 • Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh
Our goal is to design an algorithm that harnesses the similarity between clients while recovering Minibatch Mirror-prox performance under arbitrary heterogeneity (up to log factors).
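For readers unfamiliar with the baseline, the snippet below sketches a Euclidean mirror-prox (extragradient) update on a toy bilinear saddle point f(x, y) = x^T A y, the kind of min-max objective Minibatch Mirror-prox targets. Everything here is an illustrative assumption (problem, step size, iteration count), not the paper's method; the point is only the two-step extrapolate-then-update structure, under which this problem converges where plain gradient descent-ascent cycles.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
A /= np.linalg.norm(A, 2)          # normalize the spectral norm so eta = 0.1 is stable
x, y = np.ones(5), np.ones(5)
eta = 0.1

for _ in range(500):
    # Extrapolation: take a gradient step to a look-ahead point.
    xh = x - eta * (A @ y)         # grad_x of x^T A y is A y
    yh = y + eta * (A.T @ x)       # grad_y of x^T A y is A^T x
    # Update: step from (x, y) using gradients at the look-ahead point.
    x = x - eta * (A @ yh)
    y = y + eta * (A.T @ xh)

print(np.linalg.norm(x), np.linalg.norm(y))   # both shrink toward the saddle at 0
```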
1 code implementation • 4 Dec 2019 • Charlie Hou, Mingxun Zhou, Yan Ji, Phil Daian, Florian Tramer, Giulia Fanti, Ari Juels
Incentive mechanisms are central to the functionality of permissionless blockchains: they incentivize participants to run and secure the underlying consensus protocol.