Search Results for author: Carla Fabiana Chiasserini

Found 6 papers, 0 papers with code

Unexpectedly Useful: Convergence Bounds And Real-World Distributed Learning

no code implementations • 5 Dec 2022 • Francesco Malandrino, Carla Fabiana Chiasserini

Convergence bounds are one of the main tools to obtain information on the performance of a distributed machine learning task, before running the task itself.

Federated Learning

Matching DNN Compression and Cooperative Training with Resources and Data Availability

no code implementations • 2 Dec 2022 • Francesco Malandrino, Giuseppe Di Giacomo, Armin Karamzade, Marco Levorato, Carla Fabiana Chiasserini

To make machine learning (ML) sustainable and apt to run on the diverse devices where relevant data is, it is essential to compress ML models as needed, while still meeting the required learning quality and time performance.

Efficient Distributed DNNs in the Mobile-edge-cloud Continuum

no code implementations • 23 Feb 2022 • Francesco Malandrino, Carla Fabiana Chiasserini, Giuseppe Di Giacomo

In the mobile-edge-cloud continuum, a plethora of heterogeneous data sources and computation-capable nodes are available.

Model Selection

Flexible Parallel Learning in Edge Scenarios: Communication, Computational and Energy Cost

no code implementations • 19 Jan 2022 • Francesco Malandrino, Carla Fabiana Chiasserini

Traditionally, distributed machine learning takes the guise of (i) different nodes training the same model (as in federated learning), or (ii) one model being split among multiple nodes (as in distributed stochastic gradient descent).
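The two patterns can be contrasted in a toy sketch: in (i) every node holds a full model copy and a coordinator averages weights, while in (ii) one model's layers are partitioned across nodes. All names, shapes, and values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def federated_average(node_weights):
    """(i) Data parallelism: each node trains a full model copy locally;
    a coordinator averages the per-node weights (FedAvg-style)."""
    return np.mean(node_weights, axis=0)

def split_forward(x, w1, w2):
    """(ii) Model parallelism: one model is split across two nodes;
    node 1 computes the hidden layer, node 2 the output layer."""
    h = np.maximum(x @ w1, 0.0)   # node 1: hidden activation (ReLU)
    return h @ w2                 # node 2: output layer

# Federated: three nodes, each contributing a locally trained weight vector.
local = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
avg = federated_average(local)

# Split: a 2-layer model partitioned between two nodes.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
y = split_forward(x, rng.normal(size=(3, 5)), rng.normal(size=(5, 2)))
```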

Federated Learning

Dynamic VNF Placement, Resource Allocation and Traffic Routing in 5G

no code implementations • 18 Feb 2021 • Morteza Golkarifard, Carla Fabiana Chiasserini, Francesco Malandrino, Ali Movaghar

5G networks are going to support a variety of vertical services, with a diverse set of key performance indicators (KPIs), by using enabling technologies such as software-defined networking and network function virtualization.

Networking and Internet Architecture

Network Support for High-performance Distributed Machine Learning

no code implementations • 5 Feb 2021 • Francesco Malandrino, Carla Fabiana Chiasserini, Nuria Molner, Antonio de la Oliva

We then formulate the problem of selecting (i) which learning and information nodes should cooperate to complete the learning task, and (ii) the number of iterations to perform, in order to minimize the learning cost while meeting the target prediction error and execution time.
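A selection problem of this shape can be sketched as a brute-force search over node subsets and iteration counts: minimize total cost subject to a target error and a time budget. The cost figures, the 1/(k·t) error decay, and all names below are hypothetical stand-ins, not the paper's actual model.

```python
from itertools import combinations

# Hypothetical per-iteration cost of each candidate learning node.
nodes = {"n1": 1.0, "n2": 1.5, "n3": 2.0}
time_per_iter = 0.1                 # seconds per iteration (illustrative)
target_error, max_time = 0.05, 5.0  # constraints (illustrative)

def error(num_nodes, iters):
    # Assumed error model: decays with cooperating nodes and iterations;
    # a stand-in for a real convergence bound.
    return 1.0 / (num_nodes * iters)

best = None  # (cost, subset, iterations)
for k in range(1, len(nodes) + 1):
    for subset in combinations(nodes, k):
        for iters in range(1, 51):
            feasible = (error(k, iters) <= target_error
                        and iters * time_per_iter <= max_time)
            if feasible:
                cost = iters * sum(nodes[n] for n in subset)
                if best is None or cost < best[0]:
                    best = (cost, subset, iters)
```

Under these toy numbers a single cheap node running more iterations beats larger coalitions, which is exactly the kind of trade-off such a formulation captures.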

BIG-bench Machine Learning
