Search Results for author: Chen-Yu Ho

Found 4 papers, 2 papers with code

On the Impact of Device and Behavioral Heterogeneity in Federated Learning

no code implementations • 15 Feb 2021 • Ahmed M. Abdelmoniem, Chen-Yu Ho, Pantelis Papageorgiou, Muhammad Bilal, Marco Canini

Federated learning (FL) is becoming a popular paradigm for collaborative learning over distributed, private datasets owned by non-trusting entities.

Fairness • Federated Learning
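
As context for this paper's setting, here is a minimal sketch of the server-side aggregation step in federated learning, in the style of FedAvg (the function name, the weighting by local dataset size, and the flat NumPy parameter vectors are illustrative assumptions, not this paper's method):

    import numpy as np

    def fedavg_aggregate(client_updates, client_sizes):
        # Weighted average of client parameter vectors, with weights
        # proportional to each client's local dataset size (assumed here).
        weights = np.asarray(client_sizes, dtype=float)
        weights /= weights.sum()
        return sum(w * u for w, u in zip(weights, client_updates))

Usage: passing three client update vectors with sizes [100, 50, 10] weights the first client's update most heavily in the averaged model.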

On the Discrepancy between the Theoretical Analysis and Practical Implementations of Compressed Communication for Distributed Deep Learning

1 code implementation • 19 Nov 2019 • Aritra Dutta, El Houcine Bergou, Ahmed M. Abdelmoniem, Chen-Yu Ho, Atal Narayan Sahu, Marco Canini, Panos Kalnis

Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks.

Model Compression • Quantization
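
As a concrete instance of the sparsification the abstract mentions, here is a minimal Top-k gradient compressor in Python/NumPy (a generic scheme from the compressed-communication literature, not necessarily one of the exact variants analyzed in the paper; function names are illustrative):

    import numpy as np

    def topk_sparsify(grad, k):
        # Keep only the k largest-magnitude entries of the gradient;
        # only the (indices, values) pairs need to be communicated.
        flat = grad.ravel()
        idx = np.argpartition(np.abs(flat), -k)[-k:]
        return idx, flat[idx]

    def densify(idx, vals, shape):
        # Receiver side: rebuild a dense gradient, zeros elsewhere.
        flat = np.zeros(int(np.prod(shape)))
        flat[idx] = vals
        return flat.reshape(shape)

The theory/practice gap the title refers to often shows up in schemes like this one, e.g. when an analysis assumes the full gradient is compressed at once while implementations compress layer by layer.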

Natural Compression for Distributed Deep Learning

no code implementations • 27 May 2019 • Samuel Horvath, Chen-Yu Ho, Ludovit Horvath, Atal Narayan Sahu, Marco Canini, Peter Richtarik

Our technique is applied individually to each entry of the update vector to be compressed; it works by randomized rounding to the nearest (negative or positive) power of two, which can be computed in a "natural" way by ignoring the mantissa.

Quantization
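
A minimal sketch of the rounding rule just described, in Python/NumPy: each entry is rounded stochastically to one of its two adjacent powers of two, with the up-probability chosen so the output is unbiased in expectation (the function name and this particular probability choice are assumptions drawn from the abstract, not verified against the paper):

    import numpy as np

    def natural_compress(x, rng=None):
        # Randomized rounding of each entry to an adjacent (signed)
        # power of two: keep the exponent, discard the mantissa
        # stochastically so that E[output] equals the input.
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x, dtype=float)
        sign, a = np.sign(x), np.abs(x)
        out = np.zeros_like(a)
        nz = a > 0
        low = np.exp2(np.floor(np.log2(a[nz])))  # lower adjacent power of two
        high = 2.0 * low                         # upper adjacent power of two
        p_up = (a[nz] - low) / (high - low)      # makes the rounding unbiased
        out[nz] = np.where(rng.random(p_up.shape) < p_up, high, low)
        return sign * out

For example, an entry 0.3 is rounded to 0.25 with probability 0.8 and to 0.5 with probability 0.2, so its expectation is 0.25 * 0.8 + 0.5 * 0.2 = 0.3.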
