no code implementations • 27 Aug 2024 • Charlotte Rodriguez, Laura Degioanni, Laetitia Kameni, Richard Vidal, Giovanni Neglia
Monitoring, understanding, and optimizing energy consumption are among the many reasons why it is necessary to evaluate the energy usage of Machine Learning (ML).
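A minimal sketch of how such an evaluation can be instrumented in practice, using the open-source codecarbon package (one common tool for this purpose; the model, data, and training loop below are placeholders, not from the paper):

```python
# Sketch: measuring the energy footprint of a (placeholder) training run
# with the open-source `codecarbon` package.
from codecarbon import EmissionsTracker
import torch
import torch.nn as nn

model = nn.Linear(100, 10)                      # placeholder model
data = torch.randn(1024, 100)                   # placeholder data
labels = torch.randint(0, 10, (1024,))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

tracker = EmissionsTracker()                    # samples CPU/GPU/RAM power
tracker.start()
for _ in range(100):                            # placeholder training loop
    optimizer.zero_grad()
    loss = loss_fn(model(data), labels)
    loss.backward()
    optimizer.step()
emissions_kg = tracker.stop()                   # estimated emissions in kg CO2-eq
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2-eq")
```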
no code implementations • 7 May 2024 • Caelin Kaplan, Angelo Rodio, Tareq Si Salem, Chuan Xu, Giovanni Neglia
As Internet of Things (IoT) technology advances, end devices like sensors and smartphones are progressively equipped with AI models tailored to their local memory and computational constraints.
no code implementations • 7 May 2024 • Angelo Rodio, Giovanni Neglia
Federated learning algorithms, such as FedAvg, are negatively affected by data heterogeneity and partial client participation.
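For reference, a minimal sketch of a FedAvg round, illustrating the two issues named above: partial participation (only a sampled fraction of clients contribute each round) and data heterogeneity (each client optimizes on its own local data). The `client.grad` oracle and `num_samples` attribute are hypothetical:

```python
import random
import numpy as np

def fedavg_round(global_w, clients, participation=0.1, local_steps=5, lr=0.1):
    """One FedAvg round: sample a fraction of clients, run local SGD on each,
    then average the returned models weighted by local dataset size."""
    sampled = random.sample(clients, max(1, int(participation * len(clients))))
    updates, sizes = [], []
    for client in sampled:
        w = global_w.copy()
        for _ in range(local_steps):      # local SGD on the client's own data
            w -= lr * client.grad(w)      # hypothetical local gradient oracle
        updates.append(w)
        sizes.append(client.num_samples)  # hypothetical dataset-size attribute
    weights = np.array(sizes) / sum(sizes)
    return sum(wi * ui for wi, ui in zip(weights, updates))
```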
no code implementations • 2 May 2024 • Damiano Carra, Giovanni Neglia
Recently, a new class of policies has emerged that is robust to varying traffic patterns.
no code implementations • 20 Feb 2024 • Franco Galante, Giovanni Neglia, Emilio Leonardi
In numerous settings, agents lack sufficient data to directly learn a model.
no code implementations • 18 Oct 2023 • Caelin G. Kaplan, Chuan Xu, Othmane Marfoq, Giovanni Neglia, Anderson Santana de Oliveira
Within the realm of privacy-preserving machine learning, empirical privacy defenses have been proposed as a solution to achieve satisfactory levels of training data privacy without a significant drop in model utility.
no code implementations • 5 Sep 2023 • Younes Ben Mazziane, Francescomaria Faticanti, Giovanni Neglia, Sara Alouf
Online learning algorithms have been successfully used to design caching policies with regret guarantees.
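One standard instantiation of this idea (a generic online-gradient-ascent caching policy, not necessarily the algorithm studied in the paper): maintain a fractional cache state, collect a reward equal to the cached fraction of each requested item, and project back onto the capacity constraint after each gradient step:

```python
import numpy as np

def project_capped_simplex(y, k):
    """Euclidean projection onto {x : 0 <= x <= 1, sum(x) = k}, via bisection
    on the shift tau such that sum(clip(y - tau, 0, 1)) = k."""
    lo, hi = y.min() - 1.0, y.max()
    for _ in range(60):                       # bisection to high precision
        tau = 0.5 * (lo + hi)
        if np.clip(y - tau, 0.0, 1.0).sum() > k:
            lo = tau
        else:
            hi = tau
    return np.clip(y - 0.5 * (lo + hi), 0.0, 1.0)

def ogd_cache(requests, n_items, capacity, eta=0.1):
    """Fractional caching via online gradient ascent: the reward for request i
    is the cached fraction y[i], whose gradient is the i-th basis vector."""
    y = np.full(n_items, capacity / n_items)  # start from the uniform cache
    total_reward = 0.0
    for i in requests:
        total_reward += y[i]                  # (fractional) cache hit
        y[i] += eta                           # gradient step on the reward
        y = project_capped_simplex(y, capacity)
    return y, total_reward
```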
no code implementations • 11 Jun 2023 • Marina Costantini, Giovanni Neglia, Thrasyvoulos Spyropoulos
We analyze the convergence of FedDec under the assumptions of non-iid data distribution, partial device participation, and smooth and strongly convex costs, and show that inter-agent communication alleviates the negative impact of infrequent communication rounds with the server by reducing the dependence on the number of local updates $H$ from $O(H^2)$ to $O(H)$.
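A schematic sketch of one such round, under the assumption that FedDec interleaves $H$ local SGD steps with an averaging step over graph neighbors before the server aggregation; the gradient oracles and the uniform mixing weights are illustrative assumptions, not the paper's exact scheme:

```python
def feddec_style_round(models, neighbors, grads, H=5, lr=0.05):
    """Schematic round mixing local SGD with inter-agent averaging:
    each agent takes H local steps, then averages its model with its
    graph neighbors' models before the (infrequent) server aggregation."""
    # 1) H local SGD steps per agent (in place)
    for i, w in enumerate(models):
        for _ in range(H):
            w -= lr * grads[i](w)             # hypothetical gradient oracle
    # 2) gossip step: average with graph neighbors (uniform weights here)
    mixed = []
    for i, w in enumerate(models):
        group = [w] + [models[j] for j in neighbors[i]]
        mixed.append(sum(group) / len(group))
    # 3) server aggregation over the participating agents
    return sum(mixed) / len(mixed)
```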
no code implementations • 5 Jun 2023 • Batiste Le Bars, Aurélien Bellet, Marc Tommasi, Kevin Scaman, Giovanni Neglia
On the contrary, we show, for convex, strongly convex and non-convex functions, that D-SGD can always recover generalization bounds analogous to those of classical SGD, suggesting that the choice of graph does not matter.
1 code implementation • 11 Jan 2023 • Angelo Rodio, Francescomaria Faticanti, Othmane Marfoq, Giovanni Neglia, Emilio Leonardi
To this end, CA-Fed dynamically adapts the weight given to each client and may ignore clients with low availability and large correlation.
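A hypothetical illustration of this weighting idea (the thresholds, scores, and fallback rule below are made up for exposition and are not the paper's actual procedure):

```python
import numpy as np

def ca_fed_style_weights(avail, corr, sizes, avail_min=0.2, corr_max=0.5):
    """Illustrative reweighting in the spirit of CA-Fed: drop clients with
    low availability or high availability correlation, then renormalize the
    remaining aggregation weights by dataset size. `avail`, `corr`, and
    `sizes` are hypothetical per-client statistics."""
    keep = (np.asarray(avail) >= avail_min) & (np.asarray(corr) <= corr_max)
    w = np.where(keep, np.asarray(sizes, dtype=float), 0.0)
    if w.sum() == 0:               # fallback: keep size weights if all dropped
        w = np.asarray(sizes, dtype=float)
    return w / w.sum()
```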
1 code implementation • 4 Jan 2023 • Othmane Marfoq, Giovanni Neglia, Laetitia Kameni, Richard Vidal
Federated learning (FL) is an effective solution to train machine learning models on the increasing amount of data generated by IoT devices and smartphones while keeping such data localized.
no code implementations • 28 Oct 2022 • Ilias Driouich, Chuan Xu, Giovanni Neglia, Frederic Giroire, Eoin Thomas
Additionally, we propose a novel model-based attribute inference attack in federated learning leveraging the local model reconstruction attack.
1 code implementation • 10 Oct 2022 • Jean Ogier du Terrail, Samy-Safwan Ayed, Edwige Cyffers, Felix Grimberg, Chaoyang He, Regis Loeb, Paul Mangold, Tanguy Marchand, Othmane Marfoq, Erum Mushtaq, Boris Muzellec, Constantin Philippenko, Santiago Silva, Maria Teleńczuk, Shadi Albarqouni, Salman Avestimehr, Aurélien Bellet, Aymeric Dieuleveut, Martin Jaggi, Sai Praneeth Karimireddy, Marco Lorenzi, Giovanni Neglia, Marc Tommasi, Mathieu Andreux
In this work, we propose a novel cross-silo dataset suite focused on healthcare, FLamby (Federated Learning AMple Benchmark of Your cross-silo strategies), to bridge the gap between theory and practice of cross-silo FL.
2 code implementations • 17 Nov 2021 • Othmane Marfoq, Giovanni Neglia, Laetitia Kameni, Richard Vidal
Federated learning allows clients to collaboratively learn statistical models while keeping their data local.
1 code implementation • 31 Oct 2021 • Oualid Zari, Chuan Xu, Giovanni Neglia
In the cross-device federated learning (FL) setting, clients such as mobile phones cooperate with the server to train a global machine learning model while keeping their data local.
4 code implementations • NeurIPS 2021 • Othmane Marfoq, Giovanni Neglia, Aurélien Bellet, Laetitia Kameni, Richard Vidal
The increasing size of data generated by smartphones and IoT devices motivated the development of Federated Learning (FL), a framework for on-device collaborative training of machine learning models.
no code implementations • 9 Feb 2021 • Michele Garetto, Emilio Leonardi, Giovanni Neglia
Similarity caching systems have recently attracted the attention of the scientific community, as they can be profitably used in many application contexts, like multimedia retrieval, advertising, object recognition, recommender systems and online content-match applications.
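A minimal sketch of the similarity-caching abstraction: a query is served by the closest cached item whenever it falls within a dissimilarity threshold (an approximate hit); otherwise the exact item is fetched and inserted. The LRU eviction used here is a generic choice, not a scheme from the paper:

```python
from collections import OrderedDict
import numpy as np

class SimilarityCache:
    """Generic similarity cache: serve a query with the nearest stored item
    if it is close enough, otherwise insert the exact item and evict LRU."""
    def __init__(self, capacity, threshold):
        self.capacity, self.threshold = capacity, threshold
        self.store = OrderedDict()            # item key -> embedding vector

    def get(self, key, embedding):
        if self.store:
            # nearest cached item by Euclidean distance in embedding space
            best = min(self.store,
                       key=lambda k: np.linalg.norm(self.store[k] - embedding))
            if np.linalg.norm(self.store[best] - embedding) <= self.threshold:
                self.store.move_to_end(best)  # approximate hit
                return best
        self.store[key] = embedding           # miss: fetch exact item, cache it
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict least recently used
        return key
```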
1 code implementation • NeurIPS 2020 • Othmane Marfoq, Chuan Xu, Giovanni Neglia, Richard Vidal
Federated learning usually employs a client-server architecture where an orchestrator iteratively aggregates model updates from remote clients and pushes a refined model back to them.
no code implementations • 30 Apr 2020 • Chuan Xu, Giovanni Neglia, Nicola Sebastianelli
This paradigm consists of $n$ workers, which iteratively compute updates of the model parameters, and a stateful parameter server (PS), which waits for and aggregates all updates to generate a new estimate of the model parameters, then sends it back to the workers for a new iteration.
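A schematic synchronous PS loop matching this description, with placeholder gradient oracles standing in for the workers' computations:

```python
def synchronous_ps(theta, worker_grads, rounds=100, lr=0.1):
    """Synchronous parameter server: in each iteration every one of the n
    workers computes an update at the current parameters; the stateful PS
    waits for all updates, averages them, and sends the new model back."""
    n = len(worker_grads)
    for _ in range(rounds):
        updates = [g(theta) for g in worker_grads]  # one update per worker
        theta = theta - lr * sum(updates) / n       # PS aggregates and steps
    return theta                                    # broadcast each iteration
```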
no code implementations • 28 Feb 2020 • Giovanni Neglia, Chuan Xu, Don Towsley, Gianmarco Calbi
Consensus-based distributed optimization methods have recently been advocated as alternatives to parameter server and ring all-reduce paradigms for large scale training of machine learning models.
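A minimal sketch of one consensus-based step: each node averages its model with its neighbors through a doubly stochastic mixing matrix $W$, then takes a local gradient step. The ring topology and weights below are toy choices for illustration, not the paper's setup:

```python
import numpy as np

def consensus_sgd_step(X, W, grads, lr=0.05):
    """One decentralized (consensus) SGD step: row i of X is node i's model.
    Each node mixes with its neighbors (X <- W @ X, W doubly stochastic)
    and takes a gradient step on its own local data."""
    X = W @ X                                   # gossip/consensus averaging
    G = np.stack([grads[i](X[i]) for i in range(X.shape[0])])
    return X - lr * G

def ring_mixing_matrix(n, self_weight=0.5):
    """Toy doubly stochastic matrix for a ring: each node averages with its
    two neighbors (an illustrative topology, not from the paper)."""
    W = np.eye(n) * self_weight
    for i in range(n):
        W[i, (i - 1) % n] += (1 - self_weight) / 2
        W[i, (i + 1) % n] += (1 - self_weight) / 2
    return W
```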