Search Results for author: Fangcheng Fu

Found 3 papers, 0 papers with code

Don't Waste Your Bits! Squeeze Activations and Gradients for Deep Neural Networks via TinyScript

no code implementations · ICML 2020 · Fangcheng Fu, Yuzheng Hu, Yihan He, Jiawei Jiang, Yingxia Shao, Ce Zhang, Bin Cui

Recent years have witnessed intensive research interest in training deep neural networks (DNNs) more efficiently via quantization-based compression methods, which aid DNN training in two ways: (1) activations are quantized to shrink memory consumption, and (2) gradients are quantized to reduce communication cost.

Quantization
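The two compression paths above can be illustrated with a generic uniform quantizer. This is a minimal sketch of quantizing a float32 tensor (an activation or gradient) down to 8-bit integers plus a scale and offset; it does not reproduce TinyScript's actual (non-uniform) scheme, and all names here are illustrative.

```python
import numpy as np

def quantize(x, num_bits=8):
    """Uniform quantization sketch: map floats onto num_bits-bit integers.

    Not TinyScript's scheme -- just a generic illustration of how
    quantization shrinks activation storage / gradient traffic.
    """
    qmax = 2 ** num_bits - 1
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = np.round((x - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate floats; rounding error is at most scale / 2."""
    return q.astype(np.float32) * scale + lo

acts = np.random.randn(1024, 256).astype(np.float32)
q, scale, lo = quantize(acts)
recon = dequantize(q, scale, lo)
# uint8 buffer is 4x smaller than the float32 one (1 byte vs. 4 per value)
```

The same round-trip applies to gradients before an all-reduce: workers send the uint8 buffer plus two scalars instead of full-precision floats.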

K-Core Decomposition on Super Large Graphs with Limited Resources

no code implementations · 26 Dec 2021 · Shicheng Gao, Jie Xu, Xiaosen Li, Fangcheng Fu, Wentao Zhang, Wen Ouyang, Yangyu Tao, Bin Cui

For example, the distributed K-core decomposition algorithm can scale to a large graph with 136 billion edges without losing correctness with our divide-and-conquer technique.
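For context, K-core decomposition assigns each vertex the largest k such that it belongs to a subgraph where every vertex has degree at least k. Below is a minimal in-memory sketch of the classic peeling algorithm; the paper's contribution is a distributed divide-and-conquer variant for graphs far too large for this single-machine approach, which is not reproduced here.

```python
from collections import defaultdict, deque

def core_numbers(edges):
    """Compute the core number of every vertex by iterative peeling.

    In-memory sketch only; the paper targets graphs (e.g. 136B edges)
    that require a distributed divide-and-conquer strategy instead.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(ns) for v, ns in adj.items()}
    core = {}
    remaining = set(adj)
    k = 0
    while remaining:
        # Peel every vertex whose residual degree has dropped to <= k;
        # its core number is the current k.
        queue = deque(v for v in remaining if deg[v] <= k)
        if not queue:
            k += 1
            continue
        while queue:
            v = queue.popleft()
            if v not in remaining:
                continue
            remaining.remove(v)
            core[v] = k
            for w in adj[v]:
                if w in remaining:
                    deg[w] -= 1
                    if deg[w] <= k:
                        queue.append(w)
    return core
```

For a triangle with one pendant vertex attached, the triangle vertices get core number 2 and the pendant gets 1.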

An Experimental Evaluation of Large Scale GBDT Systems

no code implementations · 3 Jul 2019 · Fangcheng Fu, Jiawei Jiang, Yingxia Shao, Bin Cui

Gradient boosting decision tree (GBDT) is a widely-used machine learning algorithm in both data analytic competitions and real-world industrial applications.
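The core idea GBDT systems share can be shown in a few lines: each boosting round fits a small regression tree to the current residuals (the negative gradients of squared loss) and adds it, damped by a learning rate, to the ensemble. The sketch below uses depth-1 trees (stumps) on a single feature purely for illustration; it is not any of the systems the paper evaluates, and it assumes the feature has at least two distinct values.

```python
import numpy as np

def fit_stump(x, r):
    """Best single-threshold split of 1-D feature x minimizing squared
    error against residuals r. Returns (threshold, left_value, right_value)."""
    best = (np.inf, None, None, None)
    for t in np.unique(x)[:-1]:          # last value leaves right side empty
        left, right = r[x <= t], r[x > t]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if err < best[0]:
            best = (err, t, left.mean(), right.mean())
    return best[1], best[2], best[3]

def gbdt_fit(x, y, rounds=50, lr=0.1):
    """Gradient boosting with stumps on squared loss: each round fits a
    stump to the residuals y - pred and shrinks it by the learning rate."""
    pred = np.full_like(y, y.mean(), dtype=float)
    model = [y.mean()]
    for _ in range(rounds):
        t, lv, rv = fit_stump(x, y - pred)
        pred += lr * np.where(x <= t, lv, rv)
        model.append((t, lv, rv))
    return model, pred
```

Real GBDT systems differ mainly in how they parallelize and approximate the split search at scale, which is what the evaluation compares.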
