Search Results for author: Rafael Pinot

Found 21 papers, 3 papers with code

Randomization matters. How to defend against strong adversarial attacks

no code implementations • ICML 2020 • Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Yann Chevaleyre, Jamal Atif

We demonstrate the non-existence of a Nash equilibrium in our game when the classifier and the adversary are both deterministic, hence giving a negative answer to the above question in the deterministic regime.
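
The takeaway is that randomizing the defender's strategy can help where any deterministic classifier fails. As a hedged illustration (not the authors' exact construction), a mixed strategy over a finite pool of trained classifiers can be implemented by sampling which model answers each query; the base classifiers and mixing weights below are placeholders.

```python
import random

class MixedClassifier:
    """Mixed (randomized) strategy over a finite pool of classifiers.

    Illustrative sketch: at prediction time, one base classifier is drawn
    according to fixed mixing weights, so the adversary faces a random
    decision rule rather than a single deterministic boundary.
    """

    def __init__(self, classifiers, weights):
        assert len(classifiers) == len(weights)
        self.classifiers = classifiers
        self.weights = weights

    def predict(self, x):
        clf = random.choices(self.classifiers, weights=self.weights, k=1)[0]
        return clf(x)

# Hypothetical usage with two toy decision rules:
h1 = lambda x: int(x > 0.0)
h2 = lambda x: int(x > 0.5)
mixed = MixedClassifier([h1, h2], weights=[0.7, 0.3])
print(mixed.predict(0.3))  # h1(0.3)=1 with prob. 0.7, h2(0.3)=0 with prob. 0.3
```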

Tackling Byzantine Clients in Federated Learning

no code implementations • 20 Feb 2024 • Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Geovani Rizk, Sasha Voitovych

The natural approach to robustify FL against adversarial clients is to replace the simple averaging operation at the server in the standard $\mathsf{FedAvg}$ algorithm by a \emph{robust averaging rule}.

Federated Learning · Image Classification
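
For context, here is a minimal sketch of what a robust averaging rule looks like in place of FedAvg's plain mean; coordinate-wise median is one standard choice. The paper studies a family of such rules, so this snippet is illustrative rather than their specific algorithm.

```python
import numpy as np

def fedavg_mean(updates):
    """Standard FedAvg aggregation: plain coordinate-wise mean."""
    return np.mean(np.stack(updates), axis=0)

def coordinate_wise_median(updates):
    """A robust averaging rule: the median is taken independently in each
    coordinate, so a minority of Byzantine updates cannot drag the
    aggregate arbitrarily far."""
    return np.median(np.stack(updates), axis=0)

# Three honest updates near 1.0 and one adversarial outlier:
updates = [np.array([1.0, 1.1]), np.array([0.9, 1.0]),
           np.array([1.1, 0.9]), np.array([100.0, -100.0])]
print(fedavg_mean(updates))             # corrupted by the outlier
print(coordinate_wise_median(updates))  # stays close to the honest updates
```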

SABLE: Secure And Byzantine robust LEarning

no code implementations • 11 Sep 2023 • Antoine Choffrut, Rachid Guerraoui, Rafael Pinot, Renaud Sirdey, John Stephan, Martin Zuber

SABLE leverages HTS, a novel and efficient homomorphic operator implementing the prominent coordinate-wise trimmed mean robust aggregator.

Image Classification · Privacy Preserving
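
The plaintext version of the aggregator that HTS implements is the coordinate-wise trimmed mean; a minimal sketch follows. The paper's contribution is performing this under homomorphic encryption, which this snippet does not attempt.

```python
import numpy as np

def coordinate_wise_trimmed_mean(updates, trim):
    """Sort each coordinate across clients, drop the `trim` smallest and
    `trim` largest values, and average the remaining ones."""
    stacked = np.sort(np.stack(updates), axis=0)  # sort per coordinate
    kept = stacked[trim:len(updates) - trim]
    return kept.mean(axis=0)

updates = [np.array([1.0, 2.0]), np.array([1.2, 1.8]),
           np.array([0.8, 2.2]), np.array([50.0, -50.0])]
print(coordinate_wise_trimmed_mean(updates, trim=1))  # outlier is trimmed away
```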

On the Privacy-Robustness-Utility Trilemma in Distributed Learning

no code implementations • 9 Feb 2023 • Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

The latter amortizes the dependence on the dimension in the error (caused by adversarial workers and DP), while being agnostic to the statistical properties of the data.

Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity

no code implementations • 3 Feb 2023 • Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or Byzantine) machines.

On the Impossible Safety of Large AI Models

no code implementations • 30 Sep 2022 • El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, Sébastien Rouault, John Stephan

Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase impressive performance.

Privacy Preserving

Robust Collaborative Learning with Linear Gradient Overhead

1 code implementation • 22 Sep 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê Nguyên Hoang, Rafael Pinot, John Stephan

We present MoNNA, a new algorithm that (a) is provably robust under standard assumptions and (b) has a gradient computation overhead that is linear in the fraction of faulty machines, which is conjectured to be tight.

Image Classification

Towards Evading the Limits of Randomized Smoothing: A Theoretical Analysis

no code implementations • 3 Jun 2022 • Raphael Ettedgui, Alexandre Araujo, Rafael Pinot, Yann Chevaleyre, Jamal Atif

We first show that these certificates use too little information about the classifier, and are in particular blind to the local curvature of the decision boundary.
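
To see why such certificates are "blind" to local geometry, recall the standard smoothing certificate in the style of Cohen et al.: the certified ℓ2 radius depends only on the smoothed class probabilities, not on any property of the base classifier's decision boundary. A sketch of that computation, with placeholder probability estimates:

```python
from scipy.stats import norm

def certified_radius(p_a, p_b, sigma):
    """Standard randomized-smoothing certificate: the certified l2 radius
    depends only on the top-two smoothed class probabilities p_a >= p_b
    and the noise level sigma -- no information about the base
    classifier's decision boundary (e.g. its curvature) enters."""
    return 0.5 * sigma * (norm.ppf(p_a) - norm.ppf(p_b))

# Hypothetical estimates from Monte Carlo sampling of f(x + N(0, sigma^2 I)):
print(certified_radius(p_a=0.90, p_b=0.05, sigma=0.25))  # ~0.37
```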

Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums

no code implementations • 24 May 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

We present \emph{RESAM (RESilient Averaging of Momentums)}, a unified framework that makes it simple to establish optimal Byzantine resilience, relying only on standard machine learning assumptions.

BIG-bench Machine Learning · Distributed Optimization
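
The name describes the recipe: each worker reports a momentum of its gradients rather than the raw gradient, and the server applies a robust averaging rule to those momentums. A hedged sketch of one step; the momentum parameter and the choice of robust aggregator here are illustrative, not the paper's settings.

```python
import numpy as np

def worker_momentum(prev_momentum, gradient, beta=0.9):
    """Each honest worker sends an exponential moving average of its
    gradients; momentum reduces the variance the aggregator must absorb."""
    return beta * prev_momentum + (1.0 - beta) * gradient

def resilient_average(momentums):
    """Any robust averaging rule can be plugged in here; coordinate-wise
    median is used purely for illustration."""
    return np.median(np.stack(momentums), axis=0)

def server_step(params, momentums, lr=0.1):
    """Server update using the resilient average of worker momentums."""
    return params - lr * resilient_average(momentums)
```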

Towards Consistency in Adversarial Classification

no code implementations • 20 May 2022 • Laurent Meunier, Raphaël Ettedgui, Rafael Pinot, Yann Chevaleyre, Jamal Atif

In this paper, we expose some pathological behaviors specific to the adversarial problem, and show that no convex surrogate loss can be consistent or calibrated in this context.

Classification
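
For reference, the consistency question is posed with respect to the adversarial 0-1 risk versus a surrogate risk. In the usual notation with perturbation budget ε (hedging on the paper's exact setup):

```latex
% Adversarial 0-1 risk of a classifier f:
R^{\mathrm{adv}}_{0\text{-}1}(f)
  = \mathbb{E}_{(x,y)} \Big[ \sup_{\|\delta\| \le \varepsilon}
      \mathbf{1}\{ f(x+\delta) \neq y \} \Big]
% Corresponding surrogate risk for a margin loss \phi:
R^{\mathrm{adv}}_{\phi}(f)
  = \mathbb{E}_{(x,y)} \Big[ \sup_{\|\delta\| \le \varepsilon}
      \phi\big( y \, f(x+\delta) \big) \Big]
% Consistency would require that minimizing R^adv_phi also minimizes
% R^adv_{0-1}; the paper shows no convex phi achieves this.
```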

Advocating for Multiple Defense Strategies against Adversarial Examples

no code implementations • 4 Dec 2020 • Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne

It has been empirically observed that defense mechanisms designed to protect neural networks against $\ell_\infty$ adversarial examples offer poor performance against $\ell_2$ adversarial examples and vice versa.
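
One geometric reason the two threat models conflict: in dimension d, the ℓ∞ ball of radius ε contains points whose ℓ2 norm is ε√d, so a defense calibrated for one ball can be far outside its comfort zone on the other. A quick numerical check (the dimension and budget below are arbitrary, chosen to match a common image setting):

```python
import numpy as np

d, eps = 3072, 8 / 255          # e.g. CIFAR-10 input dimension, common l_inf budget
corner = np.full(d, eps)        # a corner of the l_inf ball of radius eps
print(np.abs(corner).max())     # l_inf norm: ~0.031, inside the l_inf ball
print(np.linalg.norm(corner))   # l_2 norm: ~1.74 = eps * sqrt(d), far outside a small l_2 ball
```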

SPEED: Secure, PrivatE, and Efficient Deep learning

no code implementations • 16 Jun 2020 • Arnaud Grivet Sébert, Rafael Pinot, Martin Zuber, Cédric Gouy-Pailler, Renaud Sirdey

Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art of private deep learning against a wider range of threats, in particular under the honest-but-curious server assumption.
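
As a hedged illustration of just the differential-privacy ingredient (not SPEED's full protocol, which additionally uses homomorphic encryption so the server never sees plaintext values), noisy aggregation via the Laplace mechanism looks like this; the sensitivity and ε values are placeholders.

```python
import numpy as np

def laplace_sum(values, sensitivity, epsilon, rng=np.random.default_rng()):
    """Laplace mechanism on a sum query: with each participant's value
    bounded so that changing one participant shifts the sum by at most
    `sensitivity`, adding Laplace(sensitivity/epsilon) noise makes the
    released sum epsilon-differentially private."""
    return float(np.sum(values)) + rng.laplace(0.0, sensitivity / epsilon)

# Hypothetical per-participant scores bounded in [0, 1] (sensitivity 1):
print(laplace_sum([0.2, 0.7, 0.5], sensitivity=1.0, epsilon=1.0))
```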

Randomization matters. How to defend against strong adversarial attacks

1 code implementation • 26 Feb 2020 • Rafael Pinot, Raphael Ettedgui, Geovani Rizk, Yann Chevaleyre, Jamal Atif

We demonstrate the non-existence of a Nash equilibrium in our game when the classifier and the adversary are both deterministic, hence giving a negative answer to the above question in the deterministic regime.

A unified view on differential privacy and robustness to adversarial examples

no code implementations • 19 Jun 2019 • Rafael Pinot, Florian Yger, Cédric Gouy-Pailler, Jamal Atif

This short note highlights some links between two lines of research within the emerging topic of trustworthy machine learning: differential privacy and robustness to adversarial examples.
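
The basic link being highlighted: both properties bound how much a randomized mapping's output distribution can change under a small change to its input. In hedged notation (this is the generic form of the connection, not necessarily the note's exact statement):

```latex
% epsilon-differential privacy of a randomized mechanism M on
% neighboring datasets D, D':
\Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S]
% Analogous robustness condition for a randomized classifier M on
% close inputs x, x' with d(x, x') <= alpha:
\Pr[M(x) = k] \le e^{\varepsilon} \, \Pr[M(x') = k]
  \quad \text{for every class } k
```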

Robust Neural Networks using Randomized Adversarial Training

no code implementations • 25 Mar 2019 • Alexandre Araujo, Laurent Meunier, Rafael Pinot, Benjamin Negrevergne

This paper tackles the problem of defending a neural network against adversarial attacks crafted with different norms (in particular $\ell_\infty$ and $\ell_2$ bounded adversarial examples).
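
A hedged sketch of the general idea of randomized adversarial training over several norms, with placeholder attack functions and a hypothetical model/optimizer API; the paper's exact procedure and mixing distribution may differ.

```python
import random

def adversarial_training_step(model, batch, attacks, weights, optimizer):
    """One training step: sample which norm-bounded attack crafts the
    adversarial examples, so the model sees a mixture of threat models
    instead of specializing to a single norm."""
    attack = random.choices(attacks, weights=weights, k=1)[0]
    x, y = batch
    x_adv = attack(model, x, y)   # e.g. an l_inf- or l_2-bounded PGD attack
    loss = model.loss(x_adv, y)   # hypothetical model API
    optimizer.step(loss)          # hypothetical optimizer API
    return loss

# Hypothetical usage: mix an l_inf attack and an l_2 attack equally.
# attacks = [pgd_linf, pgd_l2]; weights = [0.5, 0.5]
```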

Graph-based Clustering under Differential Privacy

no code implementations • 10 Mar 2018 • Rafael Pinot, Anne Morvan, Florian Yger, Cédric Gouy-Pailler, Jamal Atif

In this paper, we present the first differentially private clustering method for arbitrary-shaped node clusters in a graph.

Clustering

Minimum spanning tree release under differential privacy constraints

no code implementations • 19 Jan 2018 • Rafael Pinot

It provides a simple way of producing the topology of a private almost-minimum spanning tree, which outperforms, in most cases, the state-of-the-art "Laplace mechanism" in terms of weight-approximation error.

Clustering
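
For reference, the "Laplace mechanism" baseline the abstract compares against can be sketched as input perturbation: add Laplace noise to every edge weight, then run an ordinary MST algorithm on the noisy graph. A minimal version is below; the paper's own mechanism is different and, per the abstract, usually more accurate. Note the topology is treated as public here and the per-edge budget composes across edges.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def laplace_mst(adjacency, epsilon, sensitivity=1.0, rng=np.random.default_rng()):
    """Input-perturbation baseline: add Laplace(sensitivity/epsilon) noise
    to each existing edge weight, then compute an ordinary MST on the
    noisy graph (upper-triangular adjacency, weights assumed positive)."""
    noisy = adjacency.copy()
    edges = noisy.nonzero()  # perturb existing edges only
    noisy[edges] += rng.laplace(0.0, sensitivity / epsilon, size=len(edges[0]))
    return minimum_spanning_tree(csr_matrix(noisy))

# A toy 3-node weighted graph (upper-triangular adjacency):
w = np.array([[0.0, 1.0, 4.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])
print(laplace_mst(w, epsilon=2.0).toarray())
```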
