no code implementations • 20 Aug 2024 • Mirko Nardi, Lorenzo Valerio, Andrea Passarella
Federated Learning (FL) is a pivotal approach in decentralized machine learning, especially when data privacy is crucial and direct data sharing is impractical.
no code implementations • 14 Aug 2024 • Alessio Mora, Lorenzo Valerio, Paolo Bellavista, Andrea Passarella
Federated Learning (FL) promises better privacy guarantees for individuals' data when machine learning models are collaboratively trained.
no code implementations • 3 May 2024 • Luigi Palmieri, Chiara Boldrini, Lorenzo Valerio, Andrea Passarella, Marco Conti, János Kertész
Through these configurations, we are able to show the non-trivial interplay between the properties of the network connecting nodes, the persistence of knowledge acquired collectively before disruption or lack thereof, and the effect of data availability pre- and post-disruption.
1 code implementation • 23 Mar 2024 • Arash Badie-Modiri, Chiara Boldrini, Lorenzo Valerio, János Kertész, Márton Karsai
Fully decentralised federated learning enables collaborative training of individual machine learning models on distributed devices on a communication network while keeping the training data localised.
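The entry above describes fully decentralised training, where nodes never ship raw data and instead mix models with their neighbours on a communication graph. A minimal sketch of one such round, assuming a simple neighbour-averaging rule on a toy quadratic loss (the function names, graph, and loss are illustrative, not the paper's actual algorithm):

```python
# Hedged sketch: one round of fully decentralised federated averaging
# on a communication graph. Illustrative only, not the paper's method.
import numpy as np

def local_update(weights, data, lr=0.1):
    """One gradient step on a toy quadratic loss |w - x|^2, using local data only."""
    x, = data
    return weights - lr * 2 * (weights - x)

def decentralised_round(models, graph, datasets):
    """Each node trains locally, then averages with its neighbours' models."""
    trained = {n: local_update(models[n], datasets[n]) for n in graph}
    return {
        n: np.mean([trained[m] for m in graph[n] | {n}], axis=0)
        for n in graph
    }

# Fully connected triangle of three nodes; raw data never leaves its node.
graph = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
models = {n: np.zeros(2) for n in graph}
datasets = {0: [np.array([1.0, 0.0])],
            1: [np.array([0.0, 1.0])],
            2: [np.array([1.0, 1.0])]}
models = decentralised_round(models, graph, datasets)
```

On a complete graph a single round already brings every node to the same averaged model; on sparser topologies (the paper's focus) consensus takes many rounds and depends on the graph structure.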
no code implementations • 28 Feb 2024 • Luigi Palmieri, Chiara Boldrini, Lorenzo Valerio, Andrea Passarella, Marco Conti
We highlight the challenges in transferring knowledge from peripheral to central nodes, attributed to a dilution effect during model aggregation.
no code implementations • 7 Dec 2023 • Lorenzo Valerio, Chiara Boldrini, Andrea Passarella, János Kertész, Márton Karsai, Gerardo Iñiguez
Federated Learning (FL) is a well-known framework for successfully performing a learning task in an edge computing scenario where the devices involved have limited resources and incomplete data representation.
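The classic FL setting sketched above revolves around a server aggregating client updates without seeing client data. A minimal FedAvg-style aggregation sketch, assuming size-weighted averaging of client weights (illustrative; not the paper's exact setup):

```python
# Hedged sketch of FedAvg-style aggregation: the server averages client
# model weights, weighted by local dataset size. Illustrative only.
import numpy as np

def fedavg(client_weights, client_sizes):
    """Return the dataset-size-weighted average of client models."""
    total = sum(client_sizes)
    return sum(w * (n / total)
               for w, n in zip(client_weights, client_sizes))

# Two clients with unequal data: the larger client pulls the average.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
sizes = [10, 30]
global_model = fedavg(clients, sizes)
```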
no code implementations • 4 Oct 2023 • Luigi Palmieri, Chiara Boldrini, Lorenzo Valerio, Andrea Passarella, Marco Conti
Thus, fully decentralized learning can help in this case.
no code implementations • 29 Jul 2023 • Luigi Palmieri, Lorenzo Valerio, Chiara Boldrini, Andrea Passarella
Specifically, we highlight the different roles in this process of more or less connected nodes (hubs and leaves), as well as that of macroscopic network properties (primarily, degree distribution and modularity).
1 code implementation • 9 Sep 2022 • Mirko Nardi, Lorenzo Valerio, Andrea Passarella
Experiments show that our method is robust and can detect communities consistent with the ideal partitioning, in which groups of clients sharing the same inlier patterns are known.
no code implementations • 1 May 2022 • Saira Bano, Achilles Machumilane, Lorenzo Valerio, Pietro Cassarà, Alberto Gotta
The federated gateways of the 3D network help enhance global knowledge of network traffic, improving the accuracy of anomaly and intrusion detection and the service identification of new traffic flows.
no code implementations • 1 Oct 2021 • Lorenzo Valerio, Raffaele Bruno, Andrea Passarella
We show that our system based on Reinforcement Learning is able to automatically learn a very efficient strategy to reduce the traffic on the cellular network, without relying on any additional context information about the opportunistic network.
no code implementations • 27 Sep 2021 • Lorenzo Valerio, Andrea Passarella, Marco Conti
In the specific case analysed in the paper, we focus on a learning task, considering two distributed learning algorithms.
no code implementations • 23 Sep 2021 • Lorenzo Valerio, Marco Conti, Andrea Passarella
We analyse the performance of different configurations of the distributed learning framework, in terms of (i) accuracy obtained in the learning task and (ii) energy spent to send data between the involved nodes.
no code implementations • IEEE Transactions on Vehicular Technology 2022 • Pietro Cassarà, Alberto Gotta, Lorenzo Valerio
In this work, we address such a problem by proposing a federated feature selection algorithm where all the AVs collaborate to filter out, iteratively, the redundant or irrelevant attributes in a distributed manner, without any exchange of raw data.
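The idea above, collaboratively filtering out redundant or irrelevant attributes without exchanging raw data, can be sketched as nodes sharing only per-feature relevance scores. The scoring rule (variance), function names, and keep-top-k policy below are assumptions for illustration, not the paper's actual algorithm:

```python
# Hedged sketch of a federated feature-selection round: nodes exchange
# only per-feature relevance scores, never raw data. Illustrative only.
import numpy as np

def local_scores(X):
    """Per-feature relevance proxy computed on local data only."""
    return X.var(axis=0)

def federated_filter(local_datasets, keep):
    """Aggregate the nodes' scores and keep the top-`keep` features."""
    agg = np.mean([local_scores(X) for X in local_datasets], axis=0)
    return np.argsort(agg)[::-1][:keep]

rng = np.random.default_rng(0)
# Feature 2 is constant (irrelevant) on every node and should be dropped.
datasets = [np.c_[rng.normal(size=(50, 2)), np.full(50, 3.0)]
            for _ in range(4)]
selected = federated_filter(datasets, keep=2)
```

In the iterative version described in the abstract, such a filtering step would be repeated, re-scoring the surviving features each round, until the attribute set stabilises.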
no code implementations • 9 Dec 2020 • Lorenzo Valerio, Andrea Passarella, Marco Conti
Decentralising AI tasks on several cooperative devices means identifying the optimal set of locations or Collection Points (CP for short) to use, in the continuum between full centralisation (i.e., all data on a single device) and full decentralisation (i.e., data on source locations).
no code implementations • 17 Nov 2020 • Lorenzo Valerio, Franco Maria Nardini, Andrea Passarella, Raffaele Perego
Results show that DynHP compresses a NN up to $10$ times without significant performance drops (up to $3.5\%$ additional error w.r.t.