Search Results for author: Sébastien Rouault

Found 10 papers, 4 papers with code

On the Impossible Safety of Large AI Models

no code implementations30 Sep 2022 El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, Sébastien Rouault, John Stephan

Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase impressive performance.

Privacy Preserving

Differential Privacy and Byzantine Resilience in SGD: Do They Add Up?

1 code implementation16 Feb 2021 Rachid Guerraoui, Nirupam Gupta, Rafaël Pinot, Sébastien Rouault, John Stephan

This paper addresses the problem of combining Byzantine resilience with privacy in machine learning (ML).
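As a rough illustration of what combining the two mechanisms can look like (a generic sketch only, not the paper's algorithm; the clipping threshold, noise level, and median rule below are placeholder choices):

import numpy as np

def privatize(grad, clip_norm=1.0, noise_std=0.5, rng=np.random.default_rng(0)):
    # DP-SGD-style step: clip each worker's gradient, then add Gaussian noise.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=grad.shape)

def robust_aggregate(grads):
    # Byzantine-resilient aggregation via coordinate-wise median (one possible rule).
    return np.median(np.stack(grads), axis=0)

# Honest workers send privatized gradients; a Byzantine worker sends garbage.
honest = [privatize(np.array([1.0, -2.0, 0.5])) for _ in range(4)]
byzantine = [np.array([1e6, 1e6, 1e6])]
print(robust_aggregate(honest + byzantine))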

Distributed Momentum for Byzantine-resilient Stochastic Gradient Descent

no code implementations ICLR 2021 El Mahdi El Mhamdi, Rachid Guerraoui, Sébastien Rouault

We propose a practical method which, despite increasing the variance, reduces the variance-norm ratio, mitigating the identified weakness.
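A minimal sketch of the worker-side momentum idea, assuming each worker keeps an exponential moving average of its own stochastic gradients and sends that vector, rather than the raw gradient, to a robust aggregator (hyper-parameters and the median rule are illustrative, not the paper's configuration):

import numpy as np

class Worker:
    def __init__(self, beta=0.9):
        self.beta = beta
        self.momentum = None

    def local_step(self, stochastic_grad):
        # Exponential moving average of this worker's own stochastic gradients;
        # the momentum vector, not the raw gradient, is what gets sent upstream.
        if self.momentum is None:
            self.momentum = stochastic_grad
        else:
            self.momentum = self.beta * self.momentum + (1 - self.beta) * stochastic_grad
        return self.momentum

def aggregate(vectors):
    # Any robust rule could sit here; coordinate-wise median keeps the sketch short.
    return np.median(np.stack(vectors), axis=0)

rng = np.random.default_rng(1)
workers = [Worker() for _ in range(5)]
true_grad = np.array([1.0, -1.0])
sent = [w.local_step(true_grad + rng.normal(0.0, 2.0, size=2)) for w in workers]
print(aggregate(sent))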

Garfield: System Support for Byzantine Machine Learning

1 code implementation12 Oct 2020 Rachid Guerraoui, Arsany Guirguis, Jérémy Max Plassmann, Anton Alexandre Ragot, Sébastien Rouault

We present Garfield, a library that transparently makes machine learning (ML) applications, initially built with popular (but fragile) frameworks, e.g., TensorFlow and PyTorch, Byzantine-resilient.

BIG-bench Machine Learning

Distributed Momentum for Byzantine-resilient Learning

1 code implementation28 Feb 2020 El-Mahdi El-Mhamdi, Rachid Guerraoui, Sébastien Rouault

Momentum is a variant of gradient descent that has been proposed for its convergence benefits.
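For reference, the classical momentum recursion can be written as follows (generic symbols, not the paper's exact notation; $g_t$ is a stochastic gradient, $\mu$ the momentum coefficient, $\gamma$ the learning rate):

m_t = \mu \, m_{t-1} + g_t, \qquad \theta_{t+1} = \theta_t - \gamma \, m_t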

Genuinely Distributed Byzantine Machine Learning

no code implementations5 May 2019 El-Mahdi El-Mhamdi, Rachid Guerraoui, Arsany Guirguis, Lê Nguyên Hoang, Sébastien Rouault

The third, Minimum-Diameter Averaging (MDA), is a statistically robust gradient aggregation rule whose goal is to tolerate Byzantine workers; a brief sketch of such a rule follows below.

BIG-bench Machine Learning
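A hedged, brute-force sketch of what a minimum-diameter rule of this kind can look like: among all subsets of n - f received gradients, keep the one whose largest pairwise distance (diameter) is smallest and return its average. The function name and parameters are illustrative, not the paper's implementation.

from itertools import combinations
import numpy as np

def mda(gradients, f):
    # Pick the (n - f)-subset with the smallest diameter, then average it.
    n = len(gradients)
    best_subset, best_diameter = None, float("inf")
    for subset in combinations(range(n), n - f):
        diameter = max(
            np.linalg.norm(gradients[i] - gradients[j])
            for i, j in combinations(subset, 2)
        )
        if diameter < best_diameter:
            best_subset, best_diameter = subset, diameter
    return np.mean([gradients[i] for i in best_subset], axis=0)

grads = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.2]),
         np.array([100.0, -100.0])]  # the last vector plays a Byzantine worker
print(mda(grads, f=1))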

Fast and Robust Distributed Learning in High Dimension

no code implementations5 May 2019 El-Mahdi El-Mhamdi, Rachid Guerraoui, Sébastien Rouault

Given $n$ workers, $f$ of which are arbitrarily malicious (Byzantine) and $m = n - f$ are not, we prove that multi-Bulyan can ensure a strong form of Byzantine resilience, as well as an $\frac{m}{n}$ slowdown compared to averaging, the fastest (but non-Byzantine-resilient) rule for distributed machine learning; a worked numeric example follows below.

BIG-bench Machine Learning
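As a worked example with hypothetical numbers: for n = 10 workers of which f = 2 are Byzantine, m = n - f = 8, so the bound above promises throughput no worse than m/n = 8/10 = 0.8 of plain averaging.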

The Hidden Vulnerability of Distributed Learning in Byzantium

1 code implementation ICML 2018 El Mahdi El Mhamdi, Rachid Guerraoui, Sébastien Rouault

Based on this leeway, we build a simple attack and experimentally show its strong, and in some cases total, effectiveness on CIFAR-10 and MNIST.
