1 code implementation • 28 Jul 2022 • Zhaoyang Du, Yijin Guan, Tianchan Guan, Dimin Niu, Nianxiong Tan, Xiaopeng Yu, Hongzhong Zheng, Jianyi Meng, Xiaolang Yan, Yuan Xie
We also propose a reference design of the existing sampling-based method with optimized computing overhead to demonstrate the improved accuracy of the proposed method.
no code implementations • ICLR 2018 • Tianchan Guan, Xiaoyang Zeng, Mingoo Seok
With the same amount of data storage, our model can train a larger network with more weights, achieving 1% lower test error than the conventional binary neural network learning model.
no code implementations • 15 Sep 2017 • Tianchan Guan, Xiaoyang Zeng, Mingoo Seok
This enables a device with a given storage constraint to train and instantiate on-chip a neural network classifier with a larger number of weights, while requiring fewer off-chip storage accesses.