Search Results for author: Marco Canini

Found 15 papers, 7 papers with code

Practical Insights into Knowledge Distillation for Pre-Trained Models

no code implementations • 22 Feb 2024 • Norah Alballa, Marco Canini

This research investigates the enhancement of knowledge distillation (KD) processes in pre-trained models, an emerging field in knowledge transfer with significant implications for distributed training and federated learning environments.

Federated Learning • Knowledge Distillation • +1

Flashback: Understanding and Mitigating Forgetting in Federated Learning

no code implementations • 8 Feb 2024 • Mohammed Aljahdali, Ahmed M. Abdelmoniem, Marco Canini, Samuel Horváth

In Federated Learning (FL), forgetting, or the loss of knowledge across rounds, hampers algorithm convergence, particularly in the presence of severe data heterogeneity among clients.

Federated Learning

Global-QSGD: Practical Floatless Quantization for Distributed Learning with Theoretical Guarantees

no code implementations • 29 May 2023 • Jihao Xin, Marco Canini, Peter Richtárik, Samuel Horváth

To obtain theoretical guarantees, we generalize the notion of standard unbiased compression operators to incorporate Global-QSGD (see the definition recalled after this entry).

Quantization
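
For context, the standard notion of an unbiased compression operator referenced in this abstract is usually stated as follows (a textbook-style recollection, not a quotation from the paper): a randomized map \mathcal{C} : \mathbb{R}^d \to \mathbb{R}^d is an unbiased compression operator with variance parameter \omega \ge 0 if

\mathbb{E}\big[\mathcal{C}(x)\big] = x
\qquad \text{and} \qquad
\mathbb{E}\big\|\mathcal{C}(x) - x\big\|^2 \le \omega \|x\|^2
\qquad \text{for all } x \in \mathbb{R}^d.

Random sparsification and QSGD-style quantizers satisfy this definition; the abstract indicates that Global-QSGD is analyzed by generalizing this class.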

FilFL: Client Filtering for Optimized Client Participation in Federated Learning

no code implementations • 13 Feb 2023 • Fares Fourati, Salma Kharrat, Vaneet Aggarwal, Mohamed-Slim Alouini, Marco Canini

Federated learning is an emerging machine learning paradigm that enables clients to train collaboratively without exchanging local data.

Federated Learning

Resource-Efficient Federated Learning

1 code implementation • 1 Nov 2021 • Ahmed M. Abdelmoniem, Atal Narayan Sahu, Marco Canini, Suhaib A. Fahmy

Federated Learning (FL) enables distributed training by learners using local data, thereby enhancing privacy and reducing communication.

Fairness • Federated Learning

Rethinking gradient sparsification as total error minimization

no code implementations • NeurIPS 2021 • Atal Narayan Sahu, Aritra Dutta, Ahmed M. Abdelmoniem, Trambak Banerjee, Marco Canini, Panos Kalnis

Unlike the Top-$k$ sparsifier, we show that hard-threshold has the same asymptotic convergence and linear speedup property as SGD in the convex case and is unaffected by data heterogeneity in the non-convex case.
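
For orientation, the two sparsifiers contrasted above can be sketched in a few lines of NumPy; this is an illustrative sketch with assumed function names (topk_sparsify, hard_threshold_sparsify), not the authors' implementation.

import numpy as np

def topk_sparsify(g, k):
    # Keep the k largest-magnitude entries of the gradient; zero the rest.
    out = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out[idx] = g[idx]
    return out

def hard_threshold_sparsify(g, lam):
    # Keep every entry whose magnitude is at least lam; the number of
    # retained entries adapts to the gradient, unlike Top-k.
    return np.where(np.abs(g) >= lam, g, 0.0)

The practical difference is that Top-$k$ fixes the communication budget per step, whereas the hard-threshold sparsifier fixes a per-entry cutoff and lets the number of transmitted entries vary.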

On the Impact of Device and Behavioral Heterogeneity in Federated Learning

no code implementations • 15 Feb 2021 • Ahmed M. Abdelmoniem, Chen-Yu Ho, Pantelis Papageorgiou, Muhammad Bilal, Marco Canini

Federated learning (FL) is becoming a popular paradigm for collaborative learning over distributed, private datasets owned by non-trusting entities.

Fairness • Federated Learning

An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems

1 code implementation • 26 Jan 2021 • Ahmed M. Abdelmoniem, Ahmed Elzanaty, Mohamed-Slim Alouini, Marco Canini

Many proposals exploit the compressibility of the gradients and propose lossy compression techniques to speed up the communication stage of distributed training.

On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning

1 code implementation • 19 Nov 2019 • Aritra Dutta, El Houcine Bergou, Ahmed M. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis

Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks.

Model Compression • Quantization

Direct Nonlinear Acceleration

1 code implementation • 28 May 2019 • Aritra Dutta, El Houcine Bergou, Yunming Xiao, Marco Canini, Peter Richtárik

In contrast to RNA, which computes extrapolation coefficients by (approximately) setting the gradient of the objective function to zero at the extrapolated point, we propose a more direct approach, which we call direct nonlinear acceleration (DNA).

Natural Compression for Distributed Deep Learning

no code implementations • 27 May 2019 • Samuel Horváth, Chen-Yu Ho, Ludovit Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtárik

Our technique is applied individually to each entry of the update vector to be compressed and works by randomized rounding to the nearest (negative or positive) power of two, which can be computed in a "natural" way by ignoring the mantissa (see the sketch after this entry).

Quantization
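
The rounding rule described in this abstract can be illustrated compactly; the following is a minimal NumPy sketch (the function name natural_compress is assumed, and the use of log2 stands in for the paper's bit-level exponent trick), not the authors' reference implementation.

import numpy as np

def natural_compress(x, rng=None):
    # Unbiased randomized rounding of each nonzero entry to the nearest
    # power of two (preserving sign). Zero entries are left untouched.
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=np.float64)
    out = np.zeros_like(x)
    nz = x != 0
    sign = np.sign(x[nz])
    mag = np.abs(x[nz])
    low = 2.0 ** np.floor(np.log2(mag))   # nearest power of two below |x|
    p_up = mag / low - 1.0                # P(round up to 2*low); makes E[out] = x
    up = rng.random(mag.shape) < p_up
    out[nz] = sign * np.where(up, 2.0 * low, low)
    return out

Each entry rounds up or down with probabilities chosen so that its expectation equals the original value, which is what makes the compressor unbiased.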

Renaissance: A Self-Stabilizing Distributed SDN Control Plane using In-band Communications

1 code implementation • 20 Dec 2017 • Marco Canini, Iosif Salem, Liron Schiff, Elad Michael Schiller, Stefan Schmid

By introducing programmability, automated verification, and innovative debugging tools, Software-Defined Networks (SDNs) are poised to meet the increasingly stringent dependability requirements of today's communication networks.

Networking and Internet Architecture • Distributed, Parallel, and Cluster Computing • Data Structures and Algorithms
