no code implementations • 8 Dec 2020 • Sung-En Chang, Yanyu Li, Mengshu Sun, Runbin Shi, Hayden K. -H. So, Xuehai Qian, Yanzhi Wang, Xue Lin
Unlike existing methods that use the same quantization scheme for all weights, we propose the first solution that applies different quantization schemes for different rows of the weight matrix.
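The idea of row-wise mixed quantization can be illustrated with a minimal sketch. The schemes below (uniform fixed-point vs. power-of-two) and the per-row selection rule (smallest reconstruction error) are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

def quantize_fixed_point(row, bits=4):
    # Uniform (fixed-point) quantization: evenly spaced levels.
    scale = np.max(np.abs(row)) / (2 ** (bits - 1) - 1)
    if scale == 0:
        return row.copy()
    return np.round(row / scale) * scale

def quantize_power_of_two(row, bits=4):
    # Power-of-two quantization: levels are signed powers of two,
    # which lets hardware replace multiplications with bit shifts.
    magnitude = np.abs(row)
    out = np.zeros_like(row)
    nonzero = magnitude > 0
    exponents = np.round(np.log2(magnitude[nonzero]))
    exponents = np.clip(exponents, -(2 ** (bits - 1) - 1), 0)
    out[nonzero] = np.sign(row[nonzero]) * (2.0 ** exponents)
    return out

def rowwise_mixed_quantize(W, bits=4):
    # Per row, pick whichever scheme gives the smaller reconstruction
    # error -- a stand-in for the paper's scheme-assignment step.
    out = np.empty_like(W)
    for i, row in enumerate(W):
        candidates = [quantize_fixed_point(row, bits),
                      quantize_power_of_two(row, bits)]
        errors = [np.sum((row - c) ** 2) for c in candidates]
        out[i] = candidates[int(np.argmin(errors))]
    return out
```

By construction, each row's error is no worse than using either scheme alone for the whole matrix, which is the motivation for mixing schemes across rows.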
1 code implementation • 28 Sep 2020 • Maolin Wang, Seyedramin Rasoulinezhad, Philip H. W. Leong, Hayden K. -H. So
While integer arithmetic has been widely adopted for improved performance in deep quantized neural network inference, training is still performed primarily with floating-point arithmetic.
1 code implementation • ICLR 2020 • Junjie Liu, Zhe Xu, Runbin Shi, Ray C. C. Cheung, Hayden K. -H. So
We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure in a unified optimization process with trainable pruning thresholds.
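The core masking step can be sketched as follows. This is a simplified illustration, not the paper's method: the hard threshold mask and the sigmoid surrogate (with a `temperature` parameter I introduce here) only gesture at how a threshold could receive gradients during training:

```python
import numpy as np

def sparse_forward(weights, threshold):
    # Hard mask: keep only weights whose magnitude exceeds the
    # (trainable) pruning threshold; the rest are zeroed out.
    mask = (np.abs(weights) > threshold).astype(weights.dtype)
    return weights * mask, mask

def soft_mask(weights, threshold, temperature=0.1):
    # Smooth surrogate for the step function, so that gradients can
    # flow to both the weights and the threshold during training.
    # The sigmoid form and temperature are illustrative assumptions.
    return 1.0 / (1.0 + np.exp(-(np.abs(weights) - threshold) / temperature))

def sparsity(mask):
    # Fraction of pruned (zeroed) weights.
    return 1.0 - mask.mean()
```

Because the threshold itself is a parameter of the optimization, the sparse structure and the surviving weights can be learned jointly rather than pruning after training.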
1 code implementation • 3 Oct 2019 • Nan Meng, Hayden K. -H. So, Xing Sun, Edmund Y. Lam
We consider the problem of high-dimensional light field reconstruction and develop a learning-based framework for spatial and angular super-resolution.
no code implementations • 24 May 2016 • Xing Sun, Nelson H. C. Yung, Edmund Y. Lam, Hayden K. -H. So
This technical report proves component consistency for the Doubly Stochastic Dirichlet Process, with exponential convergence of the posterior probability.