no code implementations • 5 Dec 2022 • Francesco Malandrino, Carla Fabiana Chiasserini
Convergence bounds are one of the main tools for estimating the performance of a distributed machine learning task before running the task itself.
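For a concrete (and generic) example of what such a bound looks like, consider the textbook guarantee for stochastic subgradient descent on a convex objective; the symbols below are standard notation, not this paper's:

```latex
% Generic example, not the bound derived in this work:
% stochastic subgradient descent on a convex objective f over a
% feasible set of diameter D, with E||g||^2 <= G^2 and step size
% eta = D / (G * sqrt(T)).
\mathbb{E}\big[f(\bar{x}_T)\big] - f(x^\star) \;\le\; \frac{D\,G}{\sqrt{T}},
\qquad \bar{x}_T = \frac{1}{T}\sum_{t=1}^{T} x_t
```

Inverting a bound of this form predicts, before any training runs, that reaching a target error ε requires on the order of T ≥ (DG/ε)^2 iterations.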
no code implementations • 2 Dec 2022 • Francesco Malandrino, Giuseppe Di Giacomo, Armin Karamzade, Marco Levorato, Carla Fabiana Chiasserini
To make machine learning (ML) sustainable and able to run on the diverse devices where the relevant data resides, it is essential to compress ML models as needed, while still meeting the required learning quality and time performance.
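As a minimal sketch of one widely used compression technique, global magnitude pruning (the excerpt does not specify which compression method the paper adopts), the helper below zeroes the smallest-magnitude weights of a layer:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights until `sparsity`
    fraction of entries are zero (generic illustration; not the
    compression scheme proposed in the paper)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    # Threshold = magnitude of the k-th smallest entry.
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

# Example: prune 80% of a random layer's weights.
w = np.random.randn(256, 128)
w_pruned = magnitude_prune(w, sparsity=0.8)
print(f"sparsity: {np.mean(w_pruned == 0.0):.2f}")
```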
no code implementations • 23 Feb 2022 • Francesco Malandrino, Carla Fabiana Chiasserini, Giuseppe Di Giacomo
In the mobile-edge-cloud continuum, a plethora of heterogeneous data sources and computation-capable nodes are available.
no code implementations • 19 Jan 2022 • Francesco Malandrino, Carla Fabiana Chiasserini
Traditionally, distributed machine learning takes the guise of (i) different nodes training the same model (as in federated learning), or (ii) one model being split among multiple nodes (as in distributed stochastic gradient descent).
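As a minimal, self-contained sketch of pattern (i), federated averaging over nodes that each train the same model, the following toy example uses a least-squares loss; all function names and data here are illustrative assumptions, not the authors' framework:

```python
import numpy as np

def local_sgd_step(w: np.ndarray, X: np.ndarray, y: np.ndarray,
                   lr: float = 0.1) -> np.ndarray:
    """One local gradient step on a least-squares loss (stand-in
    for each node's local training)."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(w_global, node_data, local_steps=5):
    """Pattern (i): every node trains the *same* model locally,
    then the server averages the resulting weights (FedAvg)."""
    updates = []
    for X, y in node_data:
        w = w_global.copy()
        for _ in range(local_steps):
            w = local_sgd_step(w, X, y)
        updates.append(w)
    return np.mean(updates, axis=0)

# Toy usage: 3 nodes, each holding its own data shard.
rng = np.random.default_rng(0)
nodes = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
w = np.zeros(4)
for _ in range(20):
    w = federated_round(w, nodes)
```

Pattern (ii) instead splits a single model across nodes (e.g., consecutive layers on different machines), so nodes exchange activations and gradients rather than full weight vectors.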
no code implementations • 18 Feb 2021 • Morteza Golkarifard, Carla Fabiana Chiasserini, Francesco Malandrino, Ali Movaghar
5G networks will support a variety of vertical services, with diverse key performance indicators (KPIs), by leveraging enabling technologies such as software-defined networking and network function virtualization.
no code implementations • 5 Feb 2021 • Francesco Malandrino, Carla Fabiana Chiasserini, Nuria Molner, Antonio de la Oliva
We then formulate the problem of selecting (i) which learning and information nodes should cooperate to complete the learning task, and (ii) the number of iterations to perform, in order to minimize the learning cost while meeting the target prediction error and execution time.
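A generic way to write down such a selection problem (all symbols below are assumptions for illustration, not the paper's notation): binary variables x_n pick the cooperating nodes, K is the number of iterations, and c, ε, τ model the learning cost, prediction error, and execution time.

```latex
% Illustrative formulation; notation assumed, not taken from the paper.
\begin{aligned}
\min_{x \in \{0,1\}^N,\ K \in \mathbb{N}} \quad & c(x, K) \\
\text{s.t.} \quad & \varepsilon(x, K) \le \varepsilon_{\max}
    && \text{(target prediction error)} \\
& \tau(x, K) \le \tau_{\max}
    && \text{(target execution time)}
\end{aligned}
```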