Search Results for author: Lê-Nguyên Hoang

Found 4 papers, 2 papers with code

On the Impossible Safety of Large AI Models

no code implementations • 30 Sep 2022 • El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, Sébastien Rouault, John Stephan

Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase impressive performance.

Privacy Preserving

An Equivalence Between Data Poisoning and Byzantine Gradient Attacks

1 code implementation • 17 Feb 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Lê-Nguyên Hoang, Oscar Villemaud

More specifically, we prove that every gradient attack can be reduced to data poisoning, in any personalized federated learning system with PAC guarantees (which we show are both desirable and realistic).

Data Poisoning • Personalized Federated Learning
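
The flavor of this reduction can be illustrated concretely for a model trained with squared loss, where a single reported data point suffices to reproduce any target gradient. The sketch below is a minimal illustration under that squared-loss assumption, not the paper's general construction; the helper name `poison_for_gradient` is hypothetical.

```python
import numpy as np

def poison_for_gradient(theta, g):
    """Craft one data point (x, y) whose squared-loss gradient at
    model theta equals the target gradient g.

    For the loss 0.5 * (theta @ x - y)**2 the gradient is
    (theta @ x - y) * x, so choosing x = g / ||g|| and
    y = theta @ x - ||g|| yields exactly g.
    """
    norm = np.linalg.norm(g)
    if norm == 0.0:
        # A zero target gradient is matched by any point the model fits.
        return np.zeros_like(theta), 0.0
    x = g / norm
    y = theta @ x - norm
    return x, y

# Demo: the crafted point reproduces an arbitrary "Byzantine" gradient.
rng = np.random.default_rng(0)
theta = rng.normal(size=5)   # current model parameters
g = rng.normal(size=5)       # gradient the attacker wants the server to see
x, y = poison_for_gradient(theta, g)
assert np.allclose((theta @ x - y) * x, g)
```

Under this loss, an attacker restricted to reporting data loses nothing relative to one who may send arbitrary gradients, which is the direction of the equivalence the abstract highlights.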

Strategyproof Learning: Building Trustworthy User-Generated Datasets

1 code implementation • 4 Jun 2021 • Sadegh Farhadkhani, Rachid Guerraoui, Lê-Nguyên Hoang

We prove in this paper that, perhaps surprisingly, incentivizing data misreporting is not inevitable.

Fairness
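
Strategyproofness here means that no user can gain by misreporting their data. A classic aggregation rule with this property, shown below as a generic illustration rather than the paper's specific mechanism, is the coordinate-wise median: with single-peaked preferences, a unilateral misreport can only push the outcome away from the misreporter's preferred value.

```python
import numpy as np

def coordinate_wise_median(reports):
    """Aggregate user-reported parameter vectors by taking the median
    in each coordinate, a standard strategyproof rule under
    single-peaked preferences."""
    return np.median(np.asarray(reports), axis=0)

# Demo: exaggerating a report cannot pull the outcome toward the liar.
honest = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 0.0])]
print(coordinate_wise_median(honest))  # [3. 2.]

skewed = [np.array([100.0, 2.0])] + honest[1:]  # user 0 inflates coordinate 0
print(coordinate_wise_median(skewed))  # [5. 2.]
```

The inflated report moves the first coordinate from 3 to 5, i.e. away from the misreporter's true value of 1, so honest reporting is the better strategy in this example.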
