no code implementations • 30 Sep 2024 • Youssef Allouah, Abdellah El Mrini, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot

Personalization addresses this issue by enabling each client to have a different model tailored to their own data while simultaneously benefiting from the other clients' data.

no code implementations • 7 Aug 2024 • Beatriz Borges, Negar Foroutan, Deniz Bayazit, Anna Sotnikova, Syrielle Montariol, Tanya Nazaretzky, Mohammadreza Banaei, Alireza Sakhaeirad, Philippe Servant, Seyed Parsa Neshaei, Jibril Frej, Angelika Romanou, Gail Weiss, Sepideh Mamooler, Zeming Chen, Simin Fan, Silin Gao, Mete Ismayilzada, Debjit Paul, Alexandre Schöpfer, Andrej Janchevski, Anja Tiede, Clarence Linden, Emanuele Troiani, Francesco Salvi, Freya Behrens, Giacomo Orsi, Giovanni Piccioli, Hadrien Sevel, Louis Coulon, Manuela Pineros-Rodriguez, Marin Bonnassies, Pierre Hellich, Puck van Gerwen, Sankalp Gambhir, Solal Pirelli, Thomas Blanchard, Timothée Callens, Toni Abi Aoun, Yannick Calvino Alonso, Yuri Cho, Alberto Chiappa, Antonio Sclocchi, Étienne Bruno, Florian Hofhammer, Gabriel Pescia, Geovani Rizk, Leello Dadi, Lucas Stoffl, Manoel Horta Ribeiro, Matthieu Bovel, Yueyang Pan, Aleksandra Radenovic, Alexandre Alahi, Alexander Mathis, Anne-Florence Bitbol, Boi Faltings, Cécile Hébert, Devis Tuia, François Maréchal, George Candea, Giuseppe Carleo, Jean-Cédric Chappelier, Nicolas Flammarion, Jean-Marie Fürbringer, Jean-Philippe Pellet, Karl Aberer, Lenka Zdeborová, Marcel Salathé, Martin Jaggi, Martin Rajman, Mathias Payer, Matthieu Wyart, Michael Gastpar, Michele Ceriotti, Ola Svensson, Olivier Lévêque, Paolo Ienne, Rachid Guerraoui, Robert West, Sanidhya Kashyap, Valerio Piazza, Viesturs Simanis, Viktor Kuncak, Volkan Cevher, Philippe Schwaller, Sacha Friedli, Patrick Jermann, Tanja Kaser, Antoine Bosselut

We investigate the potential scale of this vulnerability by measuring the degree to which AI assistants can complete assessment questions in standard university-level STEM courses.

no code implementations • 23 May 2024 • Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Ahmed Jellouli, Geovani Rizk, John Stephan

Byzantine-resilient distributed machine learning seeks to achieve robust learning performance in the presence of misbehaving or adversarial workers.

no code implementations • 23 May 2024 • Rachid Guerraoui, Rafael Pinot, Geovani Rizk, John Stephan, François Taiani

Batch normalization has proven to be a very beneficial mechanism to accelerate the training and improve the accuracy of deep neural networks in centralized environments.
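As background for the distributed setting the paper studies, a minimal sketch of the standard (centralized) batch-normalization forward pass — the names and shapes here are illustrative assumptions, not the paper's code:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize each feature over the batch dimension, then scale and shift."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
batch = rng.normal(loc=5.0, scale=3.0, size=(64, 8))
out = batch_norm(batch)
# After normalization, each feature has (approximately) zero mean and unit variance.
```

The statistics are computed over the whole mini-batch, which is exactly what becomes nontrivial once the batch is split across machines.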

1 code implementation • 2 May 2024 • Youssef Allouah, Anastasia Koloskova, Aymane El Firdoussi, Martin Jaggi, Rachid Guerraoui

Decentralized learning is appealing as it enables the scalable usage of large amounts of distributed data and resources (without resorting to any central entity), while promoting privacy since every user minimizes the direct exposure of their data.

no code implementations • 1 May 2024 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot

It has been argued that the seemingly weaker threat model where only workers' local datasets get poisoned is more reasonable.

no code implementations • 20 Feb 2024 • Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Geovani Rizk, Sasha Voitovych

The natural approach to robustify FL against adversarial clients is to replace the simple averaging operation at the server in the standard $\mathsf{FedAvg}$ algorithm by a \emph{robust averaging rule}.
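One simple instance of such a robust averaging rule is the coordinate-wise median, sketched below in place of the plain mean of $\mathsf{FedAvg}$ (the choice of median is an illustrative assumption; the paper considers a family of such rules):

```python
import numpy as np

def robust_average(updates):
    """Coordinate-wise median: a simple robust averaging rule that can
    replace the server-side mean of FedAvg."""
    return np.median(np.stack(updates), axis=0)

honest = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.9, 2.1])]
byzantine = [np.array([100.0, -100.0])]       # one adversarial client
agg = robust_average(honest + byzantine)
# agg stays close to the honest updates despite the outlier.
```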

no code implementations • 22 Dec 2023 • Youssef Allouah, Rachid Guerraoui, John Stephan

The success of machine learning (ML) applications relies on vast datasets and distributed architectures which, as they grow, present major challenges.

1 code implementation • NeurIPS 2023 • Martijn de Vos, Sadegh Farhadkhani, Rachid Guerraoui, Anne-Marie Kermarrec, Rafael Pires, Rishi Sharma

We present Epidemic Learning (EL), a simple yet powerful decentralized learning (DL) algorithm that leverages changing communication topologies to achieve faster model convergence compared to conventional DL approaches.
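A plausible round structure for such a changing-topology scheme might look as follows — each node pushes its model to a few peers sampled fresh every round, then averages what it received. This is a sketch under assumed semantics, not the paper's exact protocol:

```python
import numpy as np

def el_round(models, s, rng):
    """One round (sketch): every node sends its model to s peers sampled
    uniformly at random, then averages the received models with its own.
    The communication topology is redrawn every round."""
    n = len(models)
    inbox = [[m] for m in models]            # each node keeps its own model
    for i in range(n):
        peers = rng.choice([j for j in range(n) if j != i], size=s, replace=False)
        for j in peers:
            inbox[j].append(models[i])
    return [np.mean(box, axis=0) for box in inbox]

rng = np.random.default_rng(42)
models = [np.array([float(i)]) for i in range(8)]   # initial disagreement
for _ in range(20):
    models = el_round(models, s=3, rng=rng)
spread = max(m[0] for m in models) - min(m[0] for m in models)
# The spread across nodes shrinks round after round.
```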

no code implementations • 11 Sep 2023 • Antoine Choffrut, Rachid Guerraoui, Rafael Pinot, Renaud Sirdey, John Stephan, Martin Zuber

SABLE leverages HTS, a novel and efficient homomorphic operator implementing the prominent coordinate-wise trimmed mean robust aggregator.
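In the clear (i.e., before any homomorphic encryption), the coordinate-wise trimmed mean that HTS implements can be sketched as follows — a plaintext illustration, not the homomorphic operator itself:

```python
import numpy as np

def trimmed_mean(grads, f):
    """Coordinate-wise trimmed mean: per coordinate, drop the f largest and
    f smallest values across workers, then average the remainder."""
    g = np.sort(np.stack(grads), axis=0)     # sort each coordinate over workers
    return g[f:len(grads) - f].mean(axis=0)

grads = [np.array([1.0]), np.array([1.1]), np.array([0.9]), np.array([1e6])]
agg = trimmed_mean(grads, f=1)               # the 1e6 outlier is discarded
```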

no code implementations • 20 May 2023 • Andrei Kucharavy, Rachid Guerraoui, Ljiljana Dolamic

In this paper, we show that a class of evolutionary algorithms (EAs) inspired by the Gillespie-Orr Mutational Landscapes model for natural evolution is formally equivalent to SGD in certain settings and, in practice, is well adapted to large ANNs.

no code implementations • 20 Apr 2023 • Andrei Kucharavy, Matteo Monti, Rachid Guerraoui, Ljiljana Dolamic

We then leverage this definition to show that a general class of gradient-free ML algorithms - ($1,\lambda$)-Evolutionary Search - can be combined with classical distributed consensus algorithms to generate gradient-free byzantine-resilient distributed learning algorithms.
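The ($1,\lambda$)-Evolutionary Search the snippet names has a compact definition: one parent, $\lambda$ mutated offspring per generation, and comma selection (the parent is always discarded). A minimal gradient-free sketch, with illustrative hyperparameters:

```python
import numpy as np

def one_lambda_es(loss, x0, lam=10, sigma=0.1, steps=100, seed=0):
    """(1,lambda)-ES sketch: each generation, the single parent produces
    lam Gaussian-mutated offspring and the best offspring becomes the
    next parent (comma selection: the parent itself is discarded)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        offspring = x + sigma * rng.standard_normal((lam, x.size))
        x = offspring[np.argmin([loss(o) for o in offspring])]
    return x

# Minimize a simple quadratic without ever computing a gradient.
loss = lambda v: float(np.sum(v ** 2))
x_final = one_lambda_es(loss, x0=[3.0, -2.0])
```

Because only loss evaluations are exchanged, such a search composes naturally with classical consensus protocols.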

1 code implementation • 18 Apr 2023 • Da Silva Gameiro Henrique, Andrei Kucharavy, Rachid Guerraoui

This prominence amplified prior concerns regarding the misuse of LLMs and led to the emergence of numerous tools to detect LLMs in the wild.

no code implementations • 9 Feb 2023 • Youssef Allouah, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

The latter amortizes the dependence on the dimension in the error (caused by adversarial workers and DP), while being agnostic to the statistical properties of the data.

no code implementations • 3 Feb 2023 • Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

Byzantine machine learning (ML) aims to ensure the resilience of distributed learning algorithms to misbehaving (or Byzantine) machines.

1 code implementation • 16 Oct 2022 • Diana Petrescu, Arsany Guirguis, Do Le Quoc, Javier Picorel, Rachid Guerraoui, Florin Dinu

Storage disaggregation underlies today's cloud and is naturally complemented by pushing down some computation to storage, thus mitigating the potential network bottleneck between the storage and compute tiers.

no code implementations • 30 Sep 2022 • El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê-Nguyên Hoang, Rafael Pinot, Sébastien Rouault, John Stephan

Large AI Models (LAIMs), of which large language models are the most prominent recent example, showcase some impressive performance.

1 code implementation • 22 Sep 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Lê Nguyên Hoang, Rafael Pinot, John Stephan

We present MoNNA, a new algorithm that (a) is provably robust under standard assumptions and (b) has a gradient computation overhead that is linear in the fraction of faulty machines, which is conjectured to be tight.

no code implementations • 24 May 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, John Stephan

We present \emph{RESAM (RESilient Averaging of Momentums)}, a unified framework that makes it simple to establish optimal Byzantine resilience, relying only on standard machine learning assumptions.

1 code implementation • 17 Feb 2022 • Sadegh Farhadkhani, Rachid Guerraoui, Lê-Nguyên Hoang, Oscar Villemaud

More specifically, we prove that every gradient attack can be reduced to data poisoning, in any personalized federated learning system with PAC guarantees (which we show are both desirable and realistic).

no code implementations • 8 Oct 2021 • Rachid Guerraoui, Nirupam Gupta, Rafael Pinot, Sebastien Rouault, John Stephan

Privacy and Byzantine resilience (BR) are two crucial requirements of modern-day distributed machine learning.

no code implementations • 29 Sep 2021 • Andrei Kucharavy, Ljiljana Dolamic, Rachid Guerraoui

Whether in natural language generation or in image generation, massive performance gains have been achieved in recent years.


1 code implementation • 4 Jun 2021 • Sadegh Farhadkhani, Rachid Guerraoui, Lê-Nguyên Hoang

We prove in this paper that, perhaps surprisingly, incentivizing data misreporting is not inevitable.

1 code implementation • 16 Feb 2021 • Rachid Guerraoui, Nirupam Gupta, Rafaël Pinot, Sébastien Rouault, John Stephan

This paper addresses the problem of combining Byzantine resilience with privacy in machine learning (ML).

no code implementations • ICLR 2021 • El Mahdi El Mhamdi, Rachid Guerraoui, Sébastien Rouault

We propose a practical method which, despite increasing the variance, reduces the variance-norm ratio, mitigating the identified weakness.

1 code implementation • 13 Oct 2020 • Marcos K. Aguilera, Naama Ben-David, Rachid Guerraoui, Virendra J. Marathe, Athanasios Xygkis, Igor Zablotchi

We propose Mu, a system that takes less than 1.3 microseconds to replicate a (small) request in memory, and less than a millisecond to fail over the system; this cuts the replication and fail-over latencies of prior systems by at least 61% and 90%, respectively.

Distributed, Parallel, and Cluster Computing

1 code implementation • 12 Oct 2020 • Rachid Guerraoui, Arsany Guirguis, Jérémy Max Plassmann, Anton Alexandre Ragot, Sébastien Rouault

We present Garfield, a library to transparently make machine learning (ML) applications, initially built with popular (but fragile) frameworks, e.g., TensorFlow and PyTorch, Byzantine-resilient.

1 code implementation • 6 Aug 2020 • Rachid Guerraoui, Alex Kogan, Virendra J. Marathe, Igor Zablotchi

Then we present the first algorithm that requires k+1 CASes per call to k-CAS in the common uncontended case.

Distributed, Parallel, and Cluster Computing

no code implementations • NeurIPS 2021 • El-Mahdi El-Mhamdi, Sadegh Farhadkhani, Rachid Guerraoui, Arsany Guirguis, Lê Nguyên Hoang, Sébastien Rouault

We study Byzantine collaborative learning, where $n$ nodes seek to collectively learn from each others' local data.

no code implementations • 12 Jun 2020 • Georgios Damaskinos, Rachid Guerraoui, Anne-Marie Kermarrec, Vlad Nitu, Rhicheek Patra, Francois Taiani

Federated Learning (FL) is very appealing for its privacy benefits: essentially, a global model is trained with updates computed on mobile devices while keeping the data of users local.

no code implementations • 12 Jun 2020 • Georgios Damaskinos, Celestine Mendler-Dünner, Rachid Guerraoui, Nikolaos Papandreou, Thomas Parnell

In this paper we tackle the challenge of making the stochastic coordinate descent algorithm differentially private.
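A bare-bones sketch of the idea — stochastic coordinate descent where each coordinate gradient is perturbed with Gaussian noise before the update. The noise scale below is illustrative only and is not calibrated to any formal $(\epsilon,\delta)$ guarantee:

```python
import numpy as np

def dp_coordinate_descent(X, y, noise=0.05, lr=0.1, epochs=50, seed=0):
    """Stochastic coordinate descent for least squares, with Gaussian
    noise added to each coordinate gradient (DP-style perturbation)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        j = rng.integers(d)                          # pick a random coordinate
        grad_j = X[:, j] @ (X @ w - y) / n           # its partial derivative
        grad_j += noise * rng.standard_normal()      # privacy perturbation
        w[j] -= lr * grad_j
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w_hat = dp_coordinate_descent(X, y)
# Despite the noise, the residual error shrinks well below the initial loss.
```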

1 code implementation • 22 May 2020 • Andrei Kucharavy, El Mahdi El Mhamdi, Rachid Guerraoui

Generative adversarial networks (GANs) are pairs of artificial neural networks that are trained one against each other.

1 code implementation • 28 Feb 2020 • El-Mahdi El-Mhamdi, Rachid Guerraoui, Sébastien Rouault

Momentum is a variant of gradient descent that has been proposed for its benefits on convergence.
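For reference, the standard (heavy-ball) momentum update the paper builds on, shown on a 1-D quadratic — a textbook sketch, not the paper's distributed variant:

```python
def momentum_gd(grad, x0, lr=0.1, beta=0.9, steps=200):
    """Heavy-ball momentum: accumulate an exponential average of gradients
    and step along the accumulated direction."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + grad(x)
        x = x - lr * v
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
x_star = momentum_gd(grad=lambda x: 2.0 * (x - 3.0), x0=0.0)
```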

no code implementations • 18 Nov 2019 • El Mahdi El Mhamdi, Rachid Guerraoui, Arsany Guirguis

We moreover show that the throughput gain of LiuBei compared to another state-of-the-art Byzantine-resilient ML algorithm (that assumes network asynchrony) is 70%.

1 code implementation • 13 Jun 2019 • Rachid Guerraoui, Petr Kuznetsov, Matteo Monti, Matej Pavlovic, Dragos-Adrian Seredinschi

As stated in the original paper by Nakamoto, at the heart of these systems lies the problem of preventing double-spending; this is usually solved by achieving consensus on the order of transfers among the participants.

Distributed, Parallel, and Cluster Computing

no code implementations • 29 May 2019 • Marcos K. Aguilera, Naama Ben-David, Rachid Guerraoui, Virendra Marathe, Igor Zablotchi

This technology allows a process to directly read and write the memory of a remote host, with a mechanism to control access permissions.

Distributed, Parallel, and Cluster Computing Data Structures and Algorithms

no code implementations • 5 May 2019 • El-Mahdi El-Mhamdi, Rachid Guerraoui, Sébastien Rouault

Given $n$ workers, $f$ of which are arbitrarily malicious (Byzantine) and $m=n-f$ are not, we prove that multi-Bulyan can ensure a strong form of Byzantine resilience, as well as an ${\frac{m}{n}}$ slowdown, compared to averaging, the fastest (but non-Byzantine-resilient) rule for distributed machine learning.


no code implementations • 5 May 2019 • El-Mahdi El-Mhamdi, Rachid Guerraoui, Arsany Guirguis, Lê Nguyên Hoang, Sébastien Rouault

The third, Minimum-Diameter Averaging (MDA), is a statistically-robust gradient aggregation rule whose goal is to tolerate Byzantine workers.

3 code implementations • ICLR 2020 • El-Mahdi El-Mhamdi, Rachid Guerraoui, Andrei Kucharavy, Sergei Volodin

We study fault tolerance of neural networks subject to small random neuron/weight crash failures in a probabilistic setting.

no code implementations • 7 Jun 2018 • El Mahdi El Mhamdi, Rachid Guerraoui, Lê Nguyên Hoang, Alexandre Maurer

We first solve the problem analytically in the case of two populations, with a uniform bonus-malus on the zones where each population is a majority.

no code implementations • 29 May 2018 • Henrik Aslund, El Mahdi El Mhamdi, Rachid Guerraoui, Alexandre Maurer

We show that when a third party, the adversary, steps into the two-party setting (agent and operator) of safely interruptible reinforcement learning, a trade-off has to be made between the probability of following the optimal policy in the limit, and the probability of escaping a dangerous situation created by the adversary.

1 code implementation • ICML 2018 • El Mahdi El Mhamdi, Rachid Guerraoui, Sébastien Rouault

Based on this leeway, we build a simple attack and experimentally show its strong effectiveness on CIFAR-10 and MNIST.

1 code implementation • ICML 2018 • Georgios Damaskinos, El Mahdi El Mhamdi, Rachid Guerraoui, Rhicheek Patra, Mahsa Taziki

The dampening component bounds the convergence rate by adjusting to stale information through a generic gradient weighting scheme.

1 code implementation • 21 Feb 2018 • El Mahdi El Mhamdi, Rachid Guerraoui, Alexandre Maurer, Vladislav Tempez

A standard belief on emerging collective behavior is that it emerges from simple individual rules.

no code implementations • 31 Jan 2018 • Lê Nguyên Hoang, Rachid Guerraoui

Deep learning relies on a very specific kind of neural network: those that stack several neural layers.

2 code implementations • NeurIPS 2017 • Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, Julien Stainer

We propose \emph{Krum}, an aggregation rule that satisfies our resilience property, which we argue is the first provably Byzantine-resilient algorithm for distributed SGD.
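Krum's rule is short enough to sketch directly: score each submitted vector by the sum of squared distances to its $n-f-2$ nearest neighbours, and output the lowest-scoring vector. The example data below are hypothetical:

```python
import numpy as np

def krum(grads, f):
    """Krum (sketch): select the vector whose sum of squared distances to
    its n - f - 2 nearest neighbours is smallest."""
    g = np.stack(grads)
    n = len(g)
    dists = np.sum((g[:, None, :] - g[None, :, :]) ** 2, axis=-1)
    scores = []
    for i in range(n):
        d = np.sort(np.delete(dists[i], i))       # distances to the others
        scores.append(d[: n - f - 2].sum())       # closest n - f - 2 of them
    return g[int(np.argmin(scores))]

honest = [np.array([1.0, 1.0]) + 0.01 * k for k in range(5)]
byzantine = [np.array([50.0, -50.0]), np.array([-50.0, 50.0])]
selected = krum(honest + byzantine, f=2)
# Byzantine vectors are far from every cluster, so an honest one is chosen.
```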

no code implementations • 16 Nov 2017 • Rachid Guerraoui, Erwan Le Merrer, Rhicheek Patra, Jean-Ronan Vigouroux

In this paper, we introduce the notion of consumed item pack (CIP), which enables linking users (or items) based on their implicit analogous consumption behavior.

no code implementations • 25 Jul 2017 • El Mahdi El Mhamdi, Rachid Guerraoui, Sebastien Rouault

This bound involves dependencies on the network parameters that can be seen as being too pessimistic in the average case.

no code implementations • 27 Jun 2017 • El Mahdi El Mhamdi, Rachid Guerraoui

We view a neural network as a distributed system of which neurons can fail independently, and we evaluate its robustness in the absence of any (recovery) learning phase.
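That failure model — neurons crashing independently, with no recovery or retraining — can be simulated by masking hidden units at inference time. A small sketch with an assumed two-layer ReLU network:

```python
import numpy as np

def forward(x, W1, W2, crash_mask=None):
    """Two-layer ReLU network; crash_mask zeroes out 'failed' hidden
    neurons (no recovery or retraining, per the failure model)."""
    h = np.maximum(0.0, W1 @ x)
    if crash_mask is not None:
        h = h * crash_mask                   # crashed neurons output 0
    return W2 @ h

rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((32, 4)), rng.standard_normal(32)
x = rng.standard_normal(4)
baseline = forward(x, W1, W2)
mask = (rng.random(32) > 0.1).astype(float)  # each neuron fails w.p. 0.1
degraded = forward(x, W1, W2, crash_mask=mask)
error = abs(degraded - baseline)             # output deviation under failures
```

Averaging `error` over many masks estimates the network's robustness to a given failure rate.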

no code implementations • 23 May 2017 • Aurélien Bellet, Rachid Guerraoui, Mahsa Taziki, Marc Tommasi

The rise of connected personal devices together with privacy concerns call for machine learning algorithms capable of leveraging the data of a large number of agents to learn personalized models under strong privacy requirements.

no code implementations • NeurIPS 2017 • El Mahdi El Mhamdi, Rachid Guerraoui, Hadrien Hendrikx, Alexandre Maurer

We give realistic sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in the case of joint action learners, yet show that these conditions are not sufficient for independent learners.

Multi-agent Reinforcement Learning

no code implementations • 8 Mar 2017 • Peva Blanchard, El Mahdi El Mhamdi, Rachid Guerraoui, Julien Stainer

The growth of data, the need for scalability, and the complexity of models used in modern machine learning call for distributed implementations.
