Search Results for author: Francesco Malandrino

Found 9 papers, 0 papers with code

Combining Relevance and Magnitude for Resource-Aware DNN Pruning

no code implementations • 21 May 2024 • Carla Fabiana Chiasserini, Francesco Malandrino, Nuria Molner, Zhiqiang Zhao

Pruning neural networks, i.e., removing some of their parameters while retaining their accuracy, is one of the main ways to reduce the latency of a machine learning pipeline, especially in resource- and/or bandwidth-constrained scenarios.
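As background for the definition above, a minimal sketch of plain magnitude pruning (zeroing the smallest-magnitude parameters) is shown below. This is a generic baseline only, not the relevance-and-magnitude scheme the paper proposes; the function name and threshold logic are illustrative assumptions.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights.

    Generic magnitude-pruning baseline (illustrative, not the
    paper's relevance-aware method).
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

w = np.array([[0.05, -1.2, 0.3], [2.0, -0.01, 0.7]])
pruned = magnitude_prune(w, 0.5)  # half of the 6 weights set to zero
```

Removing parameters this way shrinks the model that must be stored and transferred, which is what makes pruning attractive in the bandwidth-constrained settings the abstract mentions.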

Dependable Distributed Training of Compressed Machine Learning Models

no code implementations • 22 Feb 2024 • Francesco Malandrino, Giuseppe Di Giacomo, Marco Levorato, Carla Fabiana Chiasserini

The existing work on the distributed training of machine learning (ML) models has consistently overlooked the distribution of the achieved learning quality, focusing instead on its average value.

Unexpectedly Useful: Convergence Bounds And Real-World Distributed Learning

no code implementations • 5 Dec 2022 • Francesco Malandrino, Carla Fabiana Chiasserini

Convergence bounds are one of the main tools to obtain information on the performance of a distributed machine learning task, before running the task itself.

Federated Learning

Matching DNN Compression and Cooperative Training with Resources and Data Availability

no code implementations • 2 Dec 2022 • Francesco Malandrino, Giuseppe Di Giacomo, Armin Karamzade, Marco Levorato, Carla Fabiana Chiasserini

To make machine learning (ML) sustainable and apt to run on the diverse devices where relevant data is, it is essential to compress ML models as needed, while still meeting the required learning quality and time performance.

Choose, not Hoard: Information-to-Model Matching for Artificial Intelligence in O-RAN

no code implementations • 1 Aug 2022 • Jorge Martín-Pérez, Nuria Molner, Francesco Malandrino, Carlos Jesús Bernardos, Antonio de la Oliva, David Gomez-Barquero

Open Radio Access Network (O-RAN) is an emerging paradigm, whereby virtualized network infrastructure elements from different vendors communicate via open, standardized interfaces.

Efficient Distributed DNNs in the Mobile-edge-cloud Continuum

no code implementations • 23 Feb 2022 • Francesco Malandrino, Carla Fabiana Chiasserini, Giuseppe Di Giacomo

In the mobile-edge-cloud continuum, a plethora of heterogeneous data sources and computation-capable nodes are available.

Model Selection

Flexible Parallel Learning in Edge Scenarios: Communication, Computational and Energy Cost

no code implementations • 19 Jan 2022 • Francesco Malandrino, Carla Fabiana Chiasserini

Traditionally, distributed machine learning takes the guise of (i) different nodes training the same model (as in federated learning), or (ii) one model being split among multiple nodes (as in distributed stochastic gradient descent).
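The two traditional paradigms described above can be contrasted in a few lines. This is a hedged, generic sketch (a FedAvg-style parameter average and a layer-per-node pipeline), not the flexible parallel learning framework the paper itself studies; all function names here are illustrative assumptions.

```python
import numpy as np

def fed_average(local_models: list) -> np.ndarray:
    """(i) Same model on every node: a server averages the
    locally trained parameter vectors (federated-learning style)."""
    return np.mean(np.stack(local_models), axis=0)

def split_forward(x: np.ndarray, layer_weights: list) -> np.ndarray:
    """(ii) One model split among nodes: each node holds one layer,
    and activations flow through the pipeline (model-parallel style)."""
    for w in layer_weights:          # each w would live on a different node
        x = np.maximum(x @ w, 0.0)   # ReLU layer
    return x

# Paradigm (i): two nodes trained divergent copies; average them.
avg = fed_average([np.array([1.0, 1.0]), np.array([0.0, 0.0])])

# Paradigm (ii): a 2-layer model split across two nodes.
out = split_forward(np.ones((1, 2)), [np.ones((2, 3)), np.ones((3, 1))])
```

The communication and computation costs the paper compares differ sharply between the two: paradigm (i) exchanges full parameter vectors per round, while paradigm (ii) exchanges per-sample activations at each layer boundary.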

Federated Learning

Dynamic VNF Placement, Resource Allocation and Traffic Routing in 5G

no code implementations • 18 Feb 2021 • Morteza Golkarifard, Carla Fabiana Chiasserini, Francesco Malandrino, Ali Movaghar

5G networks are going to support a variety of vertical services, with a diverse set of key performance indicators (KPIs), by using enabling technologies such as software-defined networking and network function virtualization.

Networking and Internet Architecture

Network Support for High-performance Distributed Machine Learning

no code implementations • 5 Feb 2021 • Francesco Malandrino, Carla Fabiana Chiasserini, Nuria Molner, Antonio de la Oliva

We then formulate the problem of selecting (i) which learning and information nodes should cooperate to complete the learning task, and (ii) the number of iterations to perform, in order to minimize the learning cost while meeting the target prediction error and execution time.

BIG-bench Machine Learning • Vocal Bursts Intensity Prediction
