Search Results for author: Rachid Guerraoui

Found 46 papers, 17 papers with code

Tackling Byzantine Clients in Federated Learning

no code implementations20 Feb 2024 Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Geovani Rizk, Sasha Voitovych

The natural approach to robustify FL against adversarial clients is to replace the simple averaging operation at the server in the standard $\mathsf{FedAvg}$ algorithm by a \emph{robust averaging rule}.
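
For intuition only (the paper studies robust averaging rules in general, not this exact implementation), a classical robust rule such as the coordinate-wise median can stand in for the mean in $\mathsf{FedAvg}$; a minimal plain-Python sketch:

```python
def average(updates):
    """Plain FedAvg aggregation: coordinate-wise mean of client updates."""
    n = len(updates)
    return [sum(u[i] for u in updates) / n for i in range(len(updates[0]))]

def coordinate_median(updates):
    """One robust averaging rule: coordinate-wise median."""
    d = len(updates[0])
    agg = []
    for i in range(d):
        vals = sorted(u[i] for u in updates)
        m = len(vals)
        agg.append(vals[m // 2] if m % 2 else (vals[m // 2 - 1] + vals[m // 2]) / 2)
    return agg

# Three honest clients near (1, 1) and one Byzantine client sending a huge update.
updates = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1e6, -1e6]]
```

With one Byzantine client out of four, the plain mean is dragged arbitrarily far off, while the coordinate-wise median stays near the honest updates.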

Federated Learning Image Classification

Robustness, Efficiency, or Privacy: Pick Two in Machine Learning

no code implementations22 Dec 2023 Youssef Allouah, Rachid Guerraoui, John Stephan

The success of machine learning (ML) applications relies on vast datasets and distributed architectures which, as they grow, present major challenges.

Computational Efficiency Data Poisoning

Epidemic Learning: Boosting Decentralized Learning with Randomized Communication

1 code implementation NeurIPS 2023 Martijn de Vos, Sadegh Farhadkhani, Rachid Guerraoui, Anne-Marie Kermarrec, Rafael Pires, Rishi Sharma

We present Epidemic Learning (EL), a simple yet powerful decentralized learning (DL) algorithm that leverages changing communication topologies to achieve faster model convergence compared to conventional DL approaches.

SABLE: Secure And Byzantine robust LEarning

no code implementations11 Sep 2023 Antoine Choffrut, Rachid Guerraoui, Rafael Pinot, Renaud Sirdey, John Stephan, Martin Zuber

SABLE leverages HTS, a novel and efficient homomorphic operator implementing the prominent coordinate-wise trimmed mean robust aggregator.
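
The coordinate-wise trimmed mean itself is simple in plaintext; a sketch for intuition (SABLE's contribution, HTS, is evaluating such a rule homomorphically, which this sketch does not attempt):

```python
def trimmed_mean(updates, f):
    """Coordinate-wise trimmed mean: per coordinate, drop the f smallest
    and f largest values, then average the remaining ones."""
    d = len(updates[0])
    agg = []
    for i in range(d):
        vals = sorted(u[i] for u in updates)
        kept = vals[f:len(vals) - f]
        agg.append(sum(kept) / len(kept))
    return agg

# One outlier out of four is discarded in each coordinate.
result = trimmed_mean([[1.0], [2.0], [3.0], [100.0]], f=1)
```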

Image Classification Privacy Preserving

Evolutionary Algorithms in the Light of SGD: Limit Equivalence, Minima Flatness, and Transfer Learning

no code implementations20 May 2023 Andrei Kucharavy, Rachid Guerraoui, Ljiljana Dolamic

In this paper, we show that a class of evolutionary algorithms (EAs) inspired by the Gillespie-Orr Mutational Landscapes model for natural evolution is formally equivalent to SGD in certain settings and, in practice, is well adapted to large ANNs.

Evolutionary Algorithms Transfer Learning

Byzantine-Resilient Learning Beyond Gradients: Distributing Evolutionary Search

no code implementations20 Apr 2023 Andrei Kucharavy, Matteo Monti, Rachid Guerraoui, Ljiljana Dolamic

We then leverage this definition to show that a general class of gradient-free ML algorithms - ($1,\lambda$)-Evolutionary Search - can be combined with classical distributed consensus algorithms to generate gradient-free byzantine-resilient distributed learning algorithms.
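
A minimal (1,$\lambda$)-Evolutionary Search loop, purely as an illustration of the gradient-free primitive the paper builds on (the distributed, Byzantine-resilient machinery is not sketched here; all names and parameters are illustrative):

```python
import random

def one_comma_lambda_es(loss, x0, lam=8, sigma=0.1, steps=200, seed=0):
    """Minimal (1, lambda)-ES: each generation, sample lam Gaussian
    mutations of the parent and keep the best child. Comma selection
    means the parent itself is always discarded."""
    rng = random.Random(seed)
    parent = list(x0)
    for _ in range(steps):
        children = [[p + rng.gauss(0, sigma) for p in parent] for _ in range(lam)]
        parent = min(children, key=loss)
    return parent

# Toy objective: minimize the squared distance to (1, -2).
loss = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
best = one_comma_lambda_es(loss, [0.0, 0.0])
```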

Stochastic Parrots Looking for Stochastic Parrots: LLMs are Easy to Fine-Tune and Hard to Detect with other LLMs

1 code implementation18 Apr 2023 Da Silva Gameiro Henrique, Andrei Kucharavy, Rachid Guerraoui

This prominence amplified prior concerns regarding the misuse of LLMs and led to the emergence of numerous tools to detect LLMs in the wild.

On the Privacy-Robustness-Utility Trilemma in Distributed Learning

no code implementations9 Feb 2023 Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

The latter amortizes the dependence on the dimension in the error (caused by adversarial workers and DP), while being agnostic to the statistical properties of the data.

Fixing by Mixing: A Recipe for Optimal Byzantine ML under Heterogeneity

no code implementations3 Feb 2023 Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or Byzantine) machines.

Accelerating Transfer Learning with Near-Data Computation on Cloud Object Stores

no code implementations16 Oct 2022 Arsany Guirguis, Diana Petrescu, Florin Dinu, Do Le Quoc, Javier Picorel, Rachid Guerraoui

This facilitates our second technique, storage-side batch adaptation, which enables increased concurrency in the storage tier while avoiding out-of-memory errors.

Transfer Learning

On the Impossible Safety of Large AI Models

no code implementations30 Sep 2022 El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, Sébastien Rouault, John Stephan

Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase some impressive performance.

Privacy Preserving

Robust Collaborative Learning with Linear Gradient Overhead

1 code implementation22 Sep 2022 Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê Nguyên Hoang, Rafael Pinot, John Stephan

We present MoNNA, a new algorithm that (a) is provably robust under standard assumptions and (b) has a gradient computation overhead that is linear in the fraction of faulty machines, which is conjectured to be tight.

Image Classification

Byzantine Machine Learning Made Easy by Resilient Averaging of Momentums

no code implementations24 May 2022 Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

We present \emph{RESAM (RESilient Averaging of Momentums)}, a unified framework that makes it simple to establish optimal Byzantine resilience, relying only on standard machine learning assumptions.
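
The general shape of the framework can be sketched as worker-side momentum followed by a pluggable resilient aggregation rule; here a scalar toy with a trimmed mean standing in for the rule (the names and the rule choice are illustrative, not RESAM's specification):

```python
def worker_momentum(grads, beta=0.9):
    """Worker-side momentum: an exponential average of the worker's own
    gradients, which shrinks the variance the aggregator must absorb."""
    m = 0.0
    for g in grads:
        m = beta * m + (1 - beta) * g
    return m

def resilient_average(momentums, f):
    """Stand-in resilient rule: drop the f smallest and f largest
    momentums, then average the rest."""
    vals = sorted(momentums)
    kept = vals[f:len(vals) - f]
    return sum(kept) / len(kept)

# Three honest workers and one Byzantine momentum.
agg = resilient_average([1.0, 1.1, 0.9, 1e9], f=1)
```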

BIG-bench Machine Learning Distributed Optimization

An Equivalence Between Data Poisoning and Byzantine Gradient Attacks

1 code implementation17 Feb 2022 Sadegh Farhadkhani, Rachid Guerraoui, Lê-Nguyên Hoang, Oscar Villemaud

More specifically, we prove that every gradient attack can be reduced to data poisoning, in any personalized federated learning system with PAC guarantees (which we show are both desirable and realistic).

Data Poisoning Personalized Federated Learning

Evolutionary perspective on model fine-tuning

no code implementations29 Sep 2021 Andrei Kucharavy, Ljiljana Dolamic, Rachid Guerraoui

Whether in natural language generation or image generation, massive performance gains have been achieved in recent years.

BIG-bench Machine Learning Image Generation +1

Strategyproof Learning: Building Trustworthy User-Generated Datasets

1 code implementation4 Jun 2021 Sadegh Farhadkhani, Rachid Guerraoui, Lê-Nguyên Hoang

We prove in this paper that, perhaps surprisingly, incentivizing data misreporting is not inevitable.

Fairness

Differential Privacy and Byzantine Resilience in SGD: Do They Add Up?

1 code implementation16 Feb 2021 Rachid Guerraoui, Nirupam Gupta, Rafaël Pinot, Sébastien Rouault, John Stephan

This paper addresses the problem of combining Byzantine resilience with privacy in machine learning (ML).

Distributed Momentum for Byzantine-resilient Stochastic Gradient Descent

no code implementations ICLR 2021 El Mahdi El Mhamdi, Rachid Guerraoui, Sébastien Rouault

We propose a practical method which, despite increasing the variance, reduces the variance-norm ratio, mitigating the identified weakness.

Microsecond Consensus for Microsecond Applications

1 code implementation13 Oct 2020 Marcos K. Aguilera, Naama Ben-David, Rachid Guerraoui, Virendra J. Marathe, Athanasios Xygkis, Igor Zablotchi

We propose Mu, a system that takes less than 1.3 microseconds to replicate a (small) request in memory, and less than a millisecond to fail over the system - this cuts the replication and fail-over latencies of prior systems by at least 61% and 90%, respectively.

Distributed, Parallel, and Cluster Computing

Garfield: System Support for Byzantine Machine Learning

1 code implementation12 Oct 2020 Rachid Guerraoui, Arsany Guirguis, Jérémy Max Plassmann, Anton Alexandre Ragot, Sébastien Rouault

We present Garfield, a library to transparently make machine learning (ML) applications, initially built with popular (but fragile) frameworks, e.g., TensorFlow and PyTorch, Byzantine-resilient.

BIG-bench Machine Learning

Efficient Multi-word Compare and Swap

1 code implementation6 Aug 2020 Rachid Guerraoui, Alex Kogan, Virendra J. Marathe, Igor Zablotchi

Then we present the first algorithm that requires k+1 CASes per call to k-CAS in the common uncontended case.

Distributed, Parallel, and Cluster Computing

FLeet: Online Federated Learning via Staleness Awareness and Performance Prediction

no code implementations12 Jun 2020 Georgios Damaskinos, Rachid Guerraoui, Anne-Marie Kermarrec, Vlad Nitu, Rhicheek Patra, Francois Taiani

Federated Learning (FL) is very appealing for its privacy benefits: essentially, a global model is trained with updates computed on mobile devices while keeping the data of users local.

Federated Learning

Differentially Private Stochastic Coordinate Descent

no code implementations12 Jun 2020 Georgios Damaskinos, Celestine Mendler-Dünner, Rachid Guerraoui, Nikolaos Papandreou, Thomas Parnell

In this paper we tackle the challenge of making the stochastic coordinate descent algorithm differentially private.

Host-Pathogen Co-evolution Inspired Algorithm Enables Robust GAN Training

1 code implementation22 May 2020 Andrei Kucharavy, El Mahdi El Mhamdi, Rachid Guerraoui

Generative adversarial networks (GANs) are pairs of artificial neural networks that are trained one against each other.

Distributed Momentum for Byzantine-resilient Learning

1 code implementation28 Feb 2020 El-Mahdi El-Mhamdi, Rachid Guerraoui, Sébastien Rouault

Momentum is a variant of gradient descent that has been proposed for its benefits on convergence.
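
As background, the momentum (heavy-ball) update keeps an exponentially decaying average of past gradients; a toy sketch on a quadratic, not the distributed Byzantine-resilient variant the paper studies:

```python
def sgd_momentum(grad, x0, lr=0.1, beta=0.9, steps=100):
    """Heavy-ball momentum: the update direction is an exponentially
    decaying average of past gradients."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + grad(x)
        x = x - lr * v
    return x

# Toy quadratic: f(x) = x^2, gradient 2x; minimum at 0.
x_star = sgd_momentum(lambda x: 2 * x, x0=5.0)
```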

Fast Machine Learning with Byzantine Workers and Servers

no code implementations18 Nov 2019 El Mahdi El Mhamdi, Rachid Guerraoui, Arsany Guirguis

We moreover show that the throughput gain of LiuBei compared to another state-of-the-art Byzantine-resilient ML algorithm (that assumes network asynchrony) is 70%.

BIG-bench Machine Learning

The Consensus Number of a Cryptocurrency (Extended Version)

1 code implementation13 Jun 2019 Rachid Guerraoui, Petr Kuznetsov, Matteo Monti, Matej Pavlovic, Dragos-Adrian Seredinschi

As stated in the original paper by Nakamoto, at the heart of these systems lies the problem of preventing double-spending; this is usually solved by achieving consensus on the order of transfers among the participants.

Distributed, Parallel, and Cluster Computing

The Impact of RDMA on Agreement

no code implementations29 May 2019 Marcos K. Aguilera, Naama Ben-David, Rachid Guerraoui, Virendra Marathe, Igor Zablotchi

This technology allows a process to directly read and write the memory of a remote host, with a mechanism to control access permissions.

Distributed, Parallel, and Cluster Computing Data Structures and Algorithms

Genuinely Distributed Byzantine Machine Learning

no code implementations5 May 2019 El-Mahdi El-Mhamdi, Rachid Guerraoui, Arsany Guirguis, Lê Nguyên Hoang, Sébastien Rouault

The third, Minimum-Diameter Averaging (MDA), is a statistically-robust gradient aggregation rule whose goal is to tolerate Byzantine workers.

BIG-bench Machine Learning

Fast and Robust Distributed Learning in High Dimension

no code implementations5 May 2019 El-Mahdi El-Mhamdi, Rachid Guerraoui, Sébastien Rouault

Given $n$ workers, $f$ of which are arbitrarily malicious (Byzantine) and $m=n-f$ are not, we prove that multi-Bulyan can ensure a strong form of Byzantine resilience, as well as an ${\frac{m}{n}}$ slowdown, compared to averaging, the fastest (but non-Byzantine-resilient) rule for distributed machine learning.

BIG-bench Machine Learning

The Probabilistic Fault Tolerance of Neural Networks in the Continuous Limit

3 code implementations ICLR 2020 El-Mahdi El-Mhamdi, Rachid Guerraoui, Andrei Kucharavy, Sergei Volodin

We study fault tolerance of neural networks subject to small random neuron/weight crash failures in a probabilistic setting.
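
Such crash failures can be simulated by independently zeroing activations; a hypothetical finite sketch (the paper's analysis is in the continuous limit, not this simulation):

```python
import random

def crash_neurons(activations, p, seed=0):
    """Simulate independent neuron crashes: each activation is zeroed
    (the neuron outputs nothing) with probability p."""
    rng = random.Random(seed)
    return [0.0 if rng.random() < p else a for a in activations]

survivors = crash_neurons([1.0] * 1000, p=0.1)
```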

Removing Algorithmic Discrimination (With Minimal Individual Error)

no code implementations7 Jun 2018 El Mahdi El Mhamdi, Rachid Guerraoui, Lê Nguyên Hoang, Alexandre Maurer

We first solve the problem analytically in the case of two populations, with a uniform bonus-malus on the zones where each population is a majority.

Virtuously Safe Reinforcement Learning

no code implementations29 May 2018 Henrik Aslund, El Mahdi El Mhamdi, Rachid Guerraoui, Alexandre Maurer

We show that when a third party, the adversary, steps into the two-party setting (agent and operator) of safely interruptible reinforcement learning, a trade-off has to be made between the probability of following the optimal policy in the limit, and the probability of escaping a dangerous situation created by the adversary.

Reinforcement Learning (RL) +2

Asynchronous Byzantine Machine Learning (the case of SGD)

1 code implementation ICML 2018 Georgios Damaskinos, El Mahdi El Mhamdi, Rachid Guerraoui, Rhicheek Patra, Mahsa Taziki

The dampening component bounds the convergence rate by adjusting to stale information through a generic gradient weighting scheme.
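
A generic gradient-weighting scheme of this kind can be sketched as exponential dampening in the staleness (the exact weighting used by the paper may differ; `lam` is an illustrative knob):

```python
import math

def dampened_update(grad, staleness, lam=0.5):
    """Scale a worker's gradient down exponentially in how stale it is:
    fresh updates pass through, old ones contribute less."""
    return [math.exp(-lam * staleness) * g for g in grad]
```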

BIG-bench Machine Learning

The Hidden Vulnerability of Distributed Learning in Byzantium

1 code implementation ICML 2018 El Mahdi El Mhamdi, Rachid Guerraoui, Sébastien Rouault

Based on this leeway, we build a simple attack, and experimentally show its strong effectiveness on CIFAR-10 and MNIST.

Learning to Gather without Communication

1 code implementation21 Feb 2018 El Mahdi El Mhamdi, Rachid Guerraoui, Alexandre Maurer, Vladislav Tempez

A standard belief on emerging collective behavior is that it emerges from simple individual rules.

Multi-agent Reinforcement Learning Position

Deep Learning Works in Practice. But Does it Work in Theory?

no code implementations31 Jan 2018 Lê Nguyên Hoang, Rachid Guerraoui

Deep learning relies on a very specific kind of neural network: one that stacks several neural layers.

Speech Recognition

Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent

1 code implementation NeurIPS 2017 Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, Julien Stainer

We propose \emph{Krum}, an aggregation rule that satisfies our resilience property, which we argue is the first provably Byzantine-resilient algorithm for distributed SGD.
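
A plain-Python sketch of the Krum rule: score each vector by the sum of squared Euclidean distances to its $n-f-2$ nearest other vectors, and return the vector with the lowest score (hyperparameters and the toy data below are illustrative):

```python
def krum(grads, f):
    """Krum aggregation: pick the gradient closest (in aggregate) to its
    n - f - 2 nearest neighbours, which sidelines outliers."""
    n = len(grads)
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    scores = []
    for i, g in enumerate(grads):
        dists = sorted(sqdist(g, h) for j, h in enumerate(grads) if j != i)
        scores.append(sum(dists[: n - f - 2]))
    return grads[min(range(n), key=scores.__getitem__)]

# Four honest gradients near (1, 1) and one Byzantine outlier.
grads = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [1.0, 1.1], [50.0, -50.0]]
selected = krum(grads, f=1)
```

Unlike averaging, Krum outputs one of the submitted vectors, so a single outlier cannot shift the result.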

BIG-bench Machine Learning

Sequences, Items And Latent Links: Recommendation With Consumed Item Packs

no code implementations16 Nov 2017 Rachid Guerraoui, Erwan Le Merrer, Rhicheek Patra, Jean-Ronan Vigouroux

In this paper, we introduce the notion of consumed item pack (CIP) which enables to link users (or items) based on their implicit analogous consumption behavior.

Collaborative Filtering

On The Robustness of a Neural Network

no code implementations25 Jul 2017 El Mahdi El Mhamdi, Rachid Guerraoui, Sebastien Rouault

This bound involves dependencies on the network parameters that can be seen as being too pessimistic in the average case.

When Neurons Fail

no code implementations27 Jun 2017 El Mahdi El Mhamdi, Rachid Guerraoui

We view a neural network as a distributed system of which neurons can fail independently, and we evaluate its robustness in the absence of any (recovery) learning phase.

Personalized and Private Peer-to-Peer Machine Learning

no code implementations23 May 2017 Aurélien Bellet, Rachid Guerraoui, Mahsa Taziki, Marc Tommasi

The rise of connected personal devices together with privacy concerns call for machine learning algorithms capable of leveraging the data of a large number of agents to learn personalized models under strong privacy requirements.

BIG-bench Machine Learning

Dynamic Safe Interruptibility for Decentralized Multi-Agent Reinforcement Learning

no code implementations NeurIPS 2017 El Mahdi El Mhamdi, Rachid Guerraoui, Hadrien Hendrikx, Alexandre Maurer

We give realistic sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in the case of joint action learners, yet show that these conditions are not sufficient for independent learners.

Multi-agent Reinforcement Learning reinforcement-learning +1

Byzantine-Tolerant Machine Learning

no code implementations8 Mar 2017 Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, Julien Stainer

The growth of data, the need for scalability, and the complexity of models used in modern machine learning call for distributed implementations.

BIG-bench Machine Learning
