Search Results for author: Charlie Hou

Found 7 papers, 2 papers with code

PrE-Text: Training Language Models on Private Federated Data in the Age of LLMs

1 code implementation • 5 Jun 2024 • Charlie Hou, Akshat Shrivastava, Hongyuan Zhan, Rylan Conway, Trang Le, Adithya Sagar, Giulia Fanti, Daniel Lazar

Altogether, these results suggest that training on DP synthetic data can be a better option than training a model on-device on private distributed data.

Language Modelling · Large Language Model

On the Convergence of Differentially-Private Fine-tuning: To Linearly Probe or to Fully Fine-tune?

no code implementations • 29 Feb 2024 • Shuqi Ke, Charlie Hou, Giulia Fanti, Sewoong Oh

We provide theoretical insights into the convergence of DP fine-tuning within an overparameterized neural network and establish a utility curve that determines the allocation of privacy budget between linear probing and full fine-tuning.
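The two-phase structure the paper analyzes (linear probing first, then full fine-tuning, with the privacy budget split between the phases) can be sketched on a toy model. This is a hypothetical illustration, not the paper's algorithm: the two-layer linear model, the DP-SGD-style clip-and-noise updates, and every name and hyperparameter (`dp_lp_then_ft`, `alpha`, `clip`, `sigma`) are assumptions for exposition only.

```python
import numpy as np

def dp_lp_then_ft(X, y, alpha=0.5, total_steps=200, clip=1.0, sigma=0.8,
                  lr=0.1, seed=0):
    """Toy two-phase DP training on a two-layer linear model (hypothetical).

    Phase 1 ("linear probing"): only the head w2 is updated.
    Phase 2 ("full fine-tuning"): both w1 and w2 are updated.
    alpha is the fraction of the step budget spent on linear probing; each
    step clips the per-example gradient and adds Gaussian noise (the usual
    DP-SGD recipe, with the privacy accounting omitted).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w1 = np.eye(d)       # "pretrained feature extractor" (identity here)
    w2 = np.zeros(d)     # linear head
    lp_steps = int(alpha * total_steps)
    for step in range(total_steps):
        i = rng.integers(n)
        h = w1 @ X[i]
        err = h @ w2 - y[i]
        # head gradient: clip to norm <= clip, then add Gaussian noise
        g2 = err * h
        g2 = g2 / max(1.0, np.linalg.norm(g2) / clip)
        g2 = g2 + sigma * clip * rng.normal(size=g2.shape)
        w2 -= lr * g2
        if step >= lp_steps:  # full fine-tuning phase: also update w1
            g1 = err * np.outer(w2, X[i])
            g1 = g1 / max(1.0, np.linalg.norm(g1) / clip)
            g1 = g1 + sigma * clip * rng.normal(size=g1.shape)
            w1 -= lr * g1
    return w1, w2
```

The paper's utility curve concerns how to pick the analogue of `alpha`; this sketch only shows the mechanics of the split.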

Pretrained deep models outperform GBDTs in Learning-To-Rank under label scarcity

no code implementations • 31 Jul 2023 • Charlie Hou, Kiran Koshy Thekumparampil, Michael Shavlovsky, Giulia Fanti, Yesh Dattatreya, Sujay Sanghavi

On tabular data, a significant body of literature has shown that current deep learning (DL) models perform at best similarly to Gradient Boosted Decision Trees (GBDTs), while significantly underperforming them on outlier data.

Learning-To-Rank

Privately Customizing Prefinetuning to Better Match User Data in Federated Learning

no code implementations • 17 Feb 2023 • Charlie Hou, Hongyuan Zhan, Akshat Shrivastava, Sid Wang, Aleksandr Livshits, Giulia Fanti, Daniel Lazar

To this end, we propose FreD (Federated Private Fréchet Distance): a privately computed distance between a prefinetuning dataset and federated datasets.

Federated Learning · Language Modelling · +2
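FreD builds on the Fréchet distance between Gaussian fits of two embedding sets. The following is a minimal non-private sketch of that underlying distance (the paper's private federated computation is not reproduced here), using the standard closed form for two Gaussians:

```python
import numpy as np

def _psd_sqrt(m):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def frechet_distance(x, y):
    """Squared Frechet distance between Gaussian fits of two sample sets.

    Fits N(mu, Sigma) to each (n, d) array of embeddings and evaluates
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^{1/2}).  The trace term is
    computed as Tr((S2^{1/2} S1 S2^{1/2})^{1/2}), which is equal and keeps
    every intermediate matrix symmetric PSD.
    """
    mu1, mu2 = x.mean(axis=0), y.mean(axis=0)
    s1 = np.cov(x, rowvar=False)
    s2 = np.cov(y, rowvar=False)
    s2h = _psd_sqrt(s2)
    tr_mean = np.trace(_psd_sqrt(s2h @ s1 @ s2h))
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(s1) + np.trace(s2) - 2.0 * tr_mean)
```

Two identical sample sets give distance zero; shifting one set by a constant vector adds the squared norm of the shift.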

FedChain: Chained Algorithms for Near-Optimal Communication Cost in Federated Learning

no code implementations • ICLR 2022 • Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh

We propose FedChain, an algorithmic framework that combines the strengths of local methods and global methods to achieve fast convergence in terms of the number of communication rounds R while leveraging the similarity between clients.

Federated Learning · Image Classification
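The chained structure (a local method such as FedAvg in early rounds, a global aggregated-gradient method afterwards) can be illustrated on toy client quadratics. This is a hypothetical sketch only: the function name, the switch point, and the step sizes are illustrative choices, not values from the paper.

```python
import numpy as np

def fedchain_quadratics(A_list, b_list, rounds=40, local_steps=10,
                        switch_round=5, lr=0.05):
    """Toy FedChain-style schedule on client objectives
    f_i(x) = 0.5 x^T A_i x - b_i^T x.

    Rounds before switch_round run a local method (FedAvg: several local
    gradient steps per client, then average the iterates); later rounds
    run a global method (one step on the exact averaged gradient).
    """
    d = len(b_list[0])
    x = np.zeros(d)
    for r in range(rounds):
        if r < switch_round:  # local phase (FedAvg)
            updates = []
            for A, b in zip(A_list, b_list):
                xi = x.copy()
                for _ in range(local_steps):
                    xi -= lr * (A @ xi - b)
                updates.append(xi)
            x = np.mean(updates, axis=0)
        else:                 # global phase (aggregated gradient step)
            g = np.mean([A @ x - b for A, b in zip(A_list, b_list)], axis=0)
            x -= lr * g
    return x
```

The local phase makes fast initial progress when clients are similar; the global phase avoids the fixed-point bias that local steps introduce under heterogeneity.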

Efficient Algorithms for Federated Saddle Point Optimization

no code implementations • 12 Feb 2021 • Charlie Hou, Kiran K. Thekumparampil, Giulia Fanti, Sewoong Oh

Our goal is to design an algorithm that can harness the benefit of similarity in the clients while recovering the Minibatch Mirror-prox performance under arbitrary heterogeneity (up to log factors).
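Mirror-prox, whose minibatch variant is the baseline named above, reduces in the Euclidean case to the extragradient method: extrapolate to a midpoint, then update using the midpoint's gradients. A minimal single-machine sketch on the bilinear saddle problem min_x max_y x^T A y (federation and minibatching omitted; plain simultaneous gradient descent-ascent fails to converge on this problem, while extragradient reaches the saddle point at the origin):

```python
import numpy as np

def extragradient_bilinear(A, steps=2000, lr=0.1):
    """Extragradient (Euclidean mirror-prox) on min_x max_y x^T A y.

    Each iteration takes a half-step to a midpoint (xm, ym) and then a
    full step from (x, y) using the gradients evaluated at the midpoint.
    """
    x = np.ones(A.shape[0])
    y = np.ones(A.shape[1])
    for _ in range(steps):
        # extrapolation half-step
        xm = x - lr * (A @ y)
        ym = y + lr * (A.T @ x)
        # full step using midpoint gradients
        x = x - lr * (A @ ym)
        y = y + lr * (A.T @ xm)
    return x, y
```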

SquirRL: Automating Attack Discovery on Blockchain Incentive Mechanisms with Deep Reinforcement Learning

1 code implementation • 4 Dec 2019 • Charlie Hou, Mingxun Zhou, Yan Ji, Phil Daian, Florian Tramer, Giulia Fanti, Ari Juels

Incentive mechanisms are central to the functionality of permissionless blockchains: they incentivize participants to run and secure the underlying consensus protocol.

Cryptography and Security
