Search Results for author: Zhong-Jing Chen

Found 3 papers, 2 papers with code

How to Attain Communication-Efficient DNN Training? Convert, Compress, Correct

no code implementations18 Apr 2022 Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini

The approach comprises three steps: (i) gradient quantization through floating-point conversion, (ii) lossless compression of the quantized gradient, and (iii) quantization error correction.

Quantization
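The three steps above can be sketched in a few lines. This is a hedged illustration, not the authors' $\mathsf{CO}_3$ implementation: `float16` stands in for the paper's floating-point conversion, `zlib` stands in for its lossless compressor, and the residual implements generic error feedback.

```python
import zlib
import numpy as np

def convert_compress_correct(grad, residual):
    """One round of a hypothetical convert/compress/correct pipeline:
    (i) convert the gradient to a low-precision float, (ii) losslessly
    compress the quantized bytes, (iii) carry the quantization error
    forward so it is corrected in the next round."""
    corrected = grad + residual                   # (iii) apply stored error feedback
    quantized = corrected.astype(np.float16)      # (i) floating-point conversion
    new_residual = corrected - quantized.astype(np.float64)  # error to re-inject next round
    payload = zlib.compress(quantized.tobytes())  # (ii) lossless compression
    return payload, new_residual

rng = np.random.default_rng(0)
g = rng.normal(scale=1e-3, size=1024)
payload, res = convert_compress_correct(g, np.zeros_like(g))
print(len(payload) < g.nbytes)  # quantized+compressed payload is smaller than raw float64
```

In a distributed run, each worker would send `payload` instead of the raw gradient and keep `res` locally between rounds.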

Convert, compress, correct: Three steps toward communication-efficient DNN training

1 code implementation17 Mar 2022 Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini

In this paper, we introduce a novel algorithm, $\mathsf{CO}_3$, for communication-efficient distributed Deep Neural Network (DNN) training.

Quantization

DNN gradient lossless compression: Can GenNorm be the answer?

1 code implementation15 Nov 2021 Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini

In this paper we argue that, for some networks of practical interest, the gradient entries can be well modelled as having a generalized normal (GenNorm) distribution.

Federated Learning
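The generalized normal (GenNorm) family referenced above has density $f(x) = \frac{\beta}{2\alpha\,\Gamma(1/\beta)} e^{-(|x-\mu|/\alpha)^{\beta}}$; $\beta=2$ recovers the Gaussian and $\beta=1$ the Laplace. A minimal sketch of the density, using the standard parameterization rather than anything specific to the paper:

```python
import math
import numpy as np

def gennorm_pdf(x, mu=0.0, alpha=1.0, beta=1.0):
    """Generalized normal density with location mu, scale alpha,
    shape beta. beta=2 recovers the Gaussian; beta=1 the Laplace."""
    coef = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coef * np.exp(-(np.abs(x - mu) / alpha) ** beta)

x = np.linspace(-3, 3, 7)
# With beta=2 and alpha=sqrt(2), the density matches the standard normal.
gauss = gennorm_pdf(x, alpha=math.sqrt(2), beta=2.0)
ref = np.exp(-x**2 / 2) / math.sqrt(2 * math.pi)
print(np.allclose(gauss, ref))  # → True
```

Fitting the shape parameter $\beta$ to observed gradient entries (e.g. with `scipy.stats.gennorm.fit`) is how one would test the paper's claim that gradients are heavier-tailed than Gaussian.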
