no code implementations • 15 Apr 2024 • Sayan Biswas, Mathieu Even, Anne-Marie Kermarrec, Laurent Massoulié, Rafael Pires, Rishi Sharma, Martijn de Vos
We theoretically prove the convergence of Shatter and provide a formal analysis demonstrating how Shatter reduces the efficacy of attacks compared to exchanging full models between participating nodes.
no code implementations • 18 Mar 2024 • Sayan Biswas, Davide Frey, Romaric Gaudel, Anne-Marie Kermarrec, Dimitri Lerévérend, Rafael Pires, Rishi Sharma, François Taïani
This paper introduces ZIP-DL, a novel privacy-aware decentralized learning (DL) algorithm that relies on adding correlated noise to each model update during the model training process.
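The core idea of correlated noise is that each node perturbs its model update locally, but the noise terms are constructed to cancel out in aggregate, so privacy is added without biasing the averaged model. A minimal sketch of one such construction (pairwise-cancelling Gaussian noise; this is an illustration of the general principle, not the actual ZIP-DL mechanism, and all names here are assumptions):

```python
import random

def pairwise_cancelling_noise(num_nodes, dim, sigma=0.1):
    """Generate per-node noise vectors whose sum across nodes is zero.

    For each unordered pair (i, j), a Gaussian noise vector is drawn;
    node i adds it and node j subtracts it, so every draw cancels in
    the network-wide sum while still masking individual updates.
    """
    noise = [[0.0] * dim for _ in range(num_nodes)]
    for i in range(num_nodes):
        for j in range(i + 1, num_nodes):
            for d in range(dim):
                z = random.gauss(0.0, sigma)
                noise[i][d] += z
                noise[j][d] -= z
    return noise

noise = pairwise_cancelling_noise(num_nodes=4, dim=3)
# Per-coordinate sums across all nodes cancel (up to float rounding).
col_sums = [sum(noise[i][d] for i in range(4)) for d in range(3)]
```

Each node would add its noise vector to its model update before gossiping it; averaging over all nodes then recovers the unperturbed mean.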
no code implementations • 13 Feb 2024 • Martijn de Vos, Akash Dhasade, Jade Garcia Bourrée, Anne-Marie Kermarrec, Erwan Le Merrer, Benoit Rottembourg, Gilles Tredan
We theoretically study their interplay when agents operate independently or collaborate.
no code implementations • 27 Nov 2023 • Akash Dhasade, Yaohong Ding, Song Guo, Anne-Marie Kermarrec, Martijn de Vos, Leijie Wu
We introduce QuickDrop, an efficient and original FU method that utilizes dataset distillation (DD) to accelerate unlearning and drastically reduces computational overhead compared to existing approaches.
1 code implementation • NeurIPS 2023 • Martijn de Vos, Sadegh Farhadkhani, Rachid Guerraoui, Anne-Marie Kermarrec, Rafael Pires, Rishi Sharma
We present Epidemic Learning (EL), a simple yet powerful decentralized learning (DL) algorithm that leverages changing communication topologies to achieve faster model convergence compared to conventional DL approaches.
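The distinguishing feature of EL is that the communication topology is redrawn at every round rather than fixed. A simplified sketch of one such round (uniform random peer sampling plus plain averaging; the exact sampling and mixing weights of EL may differ, so treat this as an assumption-laden illustration):

```python
import random

def el_round(models, s=2, seed=None):
    """One randomized-topology round, Epidemic Learning style (sketch).

    Each node sends its model to s peers chosen uniformly at random,
    forming a fresh topology this round; each node then averages its
    own model with every model it received.
    """
    rng = random.Random(seed)
    n = len(models)
    inbox = [[] for _ in range(n)]
    for i in range(n):
        peers = rng.sample([j for j in range(n) if j != i], s)
        for j in peers:
            inbox[j].append(models[i])
    new_models = []
    for i in range(n):
        received = inbox[i] + [models[i]]  # include own model
        avg = [sum(vals) / len(received) for vals in zip(*received)]
        new_models.append(avg)
    return new_models

models = [[1.0, 0.0], [0.0, 1.0], [2.0, 2.0], [3.0, -1.0]]
models = el_round(models, s=2, seed=42)
```

In a full DL loop, each node would interleave local gradient steps with such randomized averaging rounds.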
1 code implementation • 7 Jun 2023 • Akash Dhasade, Anne-Marie Kermarrec, Rafael Pires, Rishi Sharma, Milos Vujasinovic, Jeffrey Wigger
Decentralized learning (DL) systems have been gaining popularity because they avoid raw data sharing by communicating only model parameters, hence preserving data confidentiality.
1 code implementation • 17 Apr 2023 • Akash Dhasade, Anne-Marie Kermarrec, Rafael Pires, Rishi Sharma, Milos Vujasinovic
Decentralized learning (DL) has gained prominence for its potential benefits in terms of scalability, privacy, and fault tolerance.
no code implementations • 9 Apr 2022 • Batiste Le Bars, Aurélien Bellet, Marc Tommasi, Erick Lavoie, Anne-Marie Kermarrec
One of the key challenges in decentralized and federated learning is to design algorithms that efficiently deal with highly heterogeneous data distributions across agents.
1 code implementation • 23 Feb 2022 • Akash Dhasade, Nevena Dresevic, Anne-Marie Kermarrec, Rafael Pires
We analyze the impact of raw data sharing in both deep neural network (DNN) and matrix factorization (MF) recommenders and showcase the benefits of trusted environments in a full-fledged implementation of REX.
no code implementations • 21 Oct 2021 • Mohamed Yassine Boukhari, Akash Dhasade, Anne-Marie Kermarrec, Rafael Pires, Othmane Safsafi, Rishi Sharma
GeL enables constrained edge devices to perform additional learning through guessed updates on top of gradient-based steps.
no code implementations • 15 Apr 2021 • Aurélien Bellet, Anne-Marie Kermarrec, Erick Lavoie
The convergence speed of machine learning models trained with Federated Learning is significantly affected by heterogeneous data partitions, even more so in a fully decentralized setting without a central server.
no code implementations • 22 Oct 2020 • George Giakkoupis, Anne-Marie Kermarrec, Olivier Ruas, François Taïani
K-Nearest-Neighbors (KNN) graphs are central to many emblematic data mining and machine-learning applications.
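For reference, the exact KNN graph that such approximate schemes target can be built by brute force in quadratic time. A minimal sketch (the papers in this line of work aim to approximate this output far faster):

```python
import math

def knn_graph(points, k):
    """Brute-force directed KNN graph: each point links to its k
    nearest neighbors under Euclidean distance. O(n^2) distance
    computations, which is what approximate methods try to avoid."""
    n = len(points)
    graph = {}
    for i in range(n):
        dists = []
        for j in range(n):
            if i == j:
                continue
            dists.append((math.dist(points[i], points[j]), j))
        dists.sort()
        graph[i] = [j for _, j in dists[:k]]
    return graph

g = knn_graph([(0, 0), (0, 1), (5, 5), (5, 6)], k=1)
# → {0: [1], 1: [0], 2: [3], 3: [2]}
```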
no code implementations • 12 Jun 2020 • Georgios Damaskinos, Rachid Guerraoui, Anne-Marie Kermarrec, Vlad Nitu, Rhicheek Patra, François Taïani
Federated Learning (FL) is very appealing for its privacy benefits: essentially, a global model is trained with updates computed on mobile devices while keeping the data of users local.
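The standard server-side step in this setting is federated averaging: clients train locally on their own data and return only model parameters, which the server combines weighted by local dataset size. A minimal sketch (this illustrates plain FedAvg, not the specific contribution of the paper above):

```python
def fedavg(client_models, client_sizes):
    """Federated averaging sketch: combine client models weighted by
    the number of local examples each client trained on. Raw data
    never leaves the clients; only parameters are sent."""
    total = sum(client_sizes)
    dim = len(client_models[0])
    return [
        sum(m[d] * n for m, n in zip(client_models, client_sizes)) / total
        for d in range(dim)
    ]

# Two clients return locally trained models; the larger dataset
# (3 examples vs 1) pulls the global model toward its client.
updated = fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
# → [2.5, 3.5]
```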
no code implementations • 16 May 2019 • Fabien André, Anne-Marie Kermarrec, Nicolas Le Scouarnec
We propose a novel approach that allows 16-bit quantizers to offer the same response time as 8-bit quantizers, while still providing a boost of accuracy.
2 code implementations • 21 Dec 2018 • Fabien André, Anne-Marie Kermarrec, Nicolas Le Scouarnec
Efficient Nearest Neighbor (NN) search in high-dimensional spaces is a foundation of many multimedia retrieval systems.
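This line of work builds on product quantization: database vectors are stored as short codes, and query-to-code distances are computed via per-subspace lookup tables (asymmetric distance computation, ADC). A toy sketch of the principle (tiny codebooks, plain Python; the papers above optimize the table layout for SIMD, which is not shown here):

```python
def build_tables(query, codebooks):
    """One lookup table per subspace: squared distance from the query
    sub-vector to every centroid of that subspace's codebook."""
    tables = []
    sub = len(query) // len(codebooks)
    for m, cb in enumerate(codebooks):
        q = query[m * sub:(m + 1) * sub]
        tables.append([sum((a - b) ** 2 for a, b in zip(q, c)) for c in cb])
    return tables

def adc_distance(code, tables):
    """Asymmetric distance: one table lookup per subspace, summed.
    The database vector is never decoded."""
    return sum(t[c] for t, c in zip(tables, code))

# Toy example: 4-dim vectors, 2 subspaces, 2 centroids per subspace.
codebooks = [[(0.0, 0.0), (1.0, 1.0)], [(0.0, 0.0), (2.0, 2.0)]]
tables = build_tables((1.0, 1.0, 2.0, 2.0), codebooks)
# Database vector encoded as (1, 1) reconstructs to (1, 1, 2, 2).
d = adc_distance((1, 1), tables)
# → 0.0 (the query matches the reconstructed vector exactly)
```

Scanning a billion codes then reduces to a billion cheap table lookups per query, which is why the table access pattern dominates response time.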
1 code implementation • 24 Apr 2017 • Fabien André, Anne-Marie Kermarrec, Nicolas Le Scouarnec
This allows Quick ADC to exceed the performance of state-of-the-art systems: for example, it achieves a Recall@100 of 0.94 in 3.4 ms on 1 billion SIFT descriptors (128-bit codes).