Search Results for author: Eduin E. Hernandez

Found 4 papers, 3 papers with code

How to Attain Communication-Efficient DNN Training? Convert, Compress, Correct

no code implementations • 18 Apr 2022 • Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini

The method has three steps, namely: (i) gradient quantization through floating-point conversion, (ii) lossless compression of the quantized gradient, and (iii) quantization error correction (a sketch of these three steps follows this entry).

Quantization
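As a rough illustration of the three steps above, here is a minimal NumPy sketch. It is not the paper's actual $\mathsf{CO}_3$ code: the float16 cast, the zlib compressor, and the error-feedback memory are stand-in choices made for the example.

```python
# Hypothetical convert/compress/correct round for one worker's gradient.
import zlib
import numpy as np

def convert(grad: np.ndarray) -> np.ndarray:
    # (i) Quantize by casting to a lower-precision floating-point format.
    return grad.astype(np.float16)

def compress(q: np.ndarray) -> bytes:
    # (ii) Losslessly compress the quantized gradient's raw bytes.
    return zlib.compress(q.tobytes())

def correct(g: np.ndarray, q: np.ndarray) -> np.ndarray:
    # (iii) The quantization error, kept locally and fed back next round.
    return g - q.astype(g.dtype)

rng = np.random.default_rng(0)
grad = rng.standard_normal(1000).astype(np.float32)
memory = np.zeros_like(grad)

g = grad + memory        # fold the previous round's error back in
q = convert(g)           # (i) convert
payload = compress(q)    # (ii) compress
memory = correct(g, q)   # (iii) remember the new quantization error
print(f"{grad.nbytes} bytes -> {len(payload)} bytes on the wire")
```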

Convert, compress, correct: Three steps toward communication-efficient DNN training

1 code implementation • 17 Mar 2022 • Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini

In this paper, we introduce a novel algorithm, $\mathsf{CO}_3$, for communication-efficient distributed Deep Neural Network (DNN) training (a hypothetical server-side counterpart to the sketch above follows this entry).

Quantization
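To show where the compressed gradients from the worker-side sketch above would land in a distributed round, here is a hypothetical server-side counterpart. The function names, shapes, and plain averaging rule are assumptions for illustration, not the paper's API.

```python
# Hypothetical server side: invert compress/convert, then average workers.
import zlib
import numpy as np

def decompress(payload: bytes, shape, dtype=np.float16) -> np.ndarray:
    # Undo the lossless compression and recover the quantized tensor.
    return np.frombuffer(zlib.decompress(payload), dtype=dtype).reshape(shape)

def aggregate(payloads, shape) -> np.ndarray:
    # Average the decompressed worker gradients in full precision.
    grads = [decompress(p, shape).astype(np.float32) for p in payloads]
    return np.mean(grads, axis=0)
```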

DNN gradient lossless compression: Can GenNorm be the answer?

1 code implementation • 15 Nov 2021 • Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini

In this paper we argue that, for some networks of practical interest, the gradient entries can be well modelled as having a generalized normal (GenNorm) distribution (a quick empirical check is sketched after this entry).

Federated Learning
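One quick way to probe this modelling claim on one's own gradients is to fit SciPy's generalized normal distribution and compare it against a Gaussian fit. The gradient tensor below is simulated; this illustrates the idea rather than reproducing the paper's code.

```python
# Fit a GenNorm distribution to gradient entries and compare to a Gaussian.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
grad = rng.standard_t(df=3, size=10_000)  # stand-in for real DNN gradients

beta, loc, scale = stats.gennorm.fit(grad)
print(f"fitted shape beta = {beta:.2f}")  # beta = 2 recovers the Gaussian

# A higher log-likelihood for GenNorm means a better probability model,
# and hence potentially shorter codes when entropy-coding the gradient.
ll_gennorm = stats.gennorm.logpdf(grad, beta, loc, scale).sum()
ll_norm = stats.norm.logpdf(grad, *stats.norm.fit(grad)).sum()
print(f"log-likelihood: gennorm {ll_gennorm:.0f} vs normal {ll_norm:.0f}")
```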

Speeding-Up Back-Propagation in DNN: Approximate Outer Product with Memory

1 code implementation • 18 Oct 2021 • Eduin E. Hernandez, Stefano Rini, Tolga M. Duman

To correct for the inherent bias of this approximation, the algorithm retains in memory an accumulation of the outer products that were left out of the approximation.
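Below is a rough sketch of that memory mechanism, assuming the approximation keeps only the k largest-norm per-sample outer products in a batch. The selection rule, the feedback schedule, and all names are illustrative, not the paper's algorithm.

```python
# Approximate sum_i outer(delta_i, x_i) with the k largest terms, while
# accumulating the skipped outer products in a memory that is folded back
# into the next estimate instead of being discarded.
import numpy as np

def approx_outer_grad(deltas, xs, memory, k=4):
    scores = np.array([np.linalg.norm(d) * np.linalg.norm(x)
                       for d, x in zip(deltas, xs)])
    order = np.argsort(scores)[::-1]
    keep, skip = order[:k], order[k:]

    # Use only the dominant outer products, plus the accumulated memory.
    grad = sum((np.outer(deltas[i], xs[i]) for i in keep), memory.copy())
    # Retain the skipped outer products for a later correction.
    new_memory = sum((np.outer(deltas[i], xs[i]) for i in skip),
                     np.zeros_like(memory))
    return grad, new_memory

rng = np.random.default_rng(0)
deltas = [rng.standard_normal(8) for _ in range(16)]
xs = [rng.standard_normal(5) for _ in range(16)]
grad, memory = approx_outer_grad(deltas, xs, memory=np.zeros((8, 5)))
```

Under these assumptions, folding the memory back in with a one-step delay means the skipped outer products are eventually applied rather than lost, which is the bias-correction idea the abstract describes.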
