no code implementations • 29 May 2019 • Fu-Chieh Chang, Hao-Jen Wang, Chun-Nan Chou, Edward Y. Chang
Performing supervised learning on data synthesized by Generative Adversarial Networks (GANs), dubbed GAN-synthetic data, has two important applications.
no code implementations • 12 Apr 2019 • Chun-Hsien Yu, Chun-Nan Chou, Emily Chang
Deep Learning techniques have achieved remarkable results in many domains.
no code implementations • CVPR 2019 • Yu-Hsun Lin, Chun-Nan Chou, Edward Y. Chang
In this paper, we propose the macroblock scaling (MBS) algorithm, which can be applied to various CNN architectures to reduce their model size.
no code implementations • 16 Jul 2018 • Yu-Hsun Lin, Chun-Nan Chou, Edward Y. Chang
This paper proposes BRIEF, a backward reduction algorithm that explores compact CNN-model designs from the information flow perspective.
no code implementations • 19 Feb 2018 • Sheng-Wei Chen, Chun-Nan Chou, Edward Y. Chang
For training fully-connected neural networks (FCNNs), we propose a practical approximate second-order method comprising 1) an approximation of the Hessian matrix and 2) a conjugate-gradient (CG) based solver.
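The core of such methods is solving the Newton system H d = -g with conjugate gradient, which needs only Hessian-vector products rather than the full Hessian. Below is a minimal sketch of that building block on a toy 2x2 quadratic; it illustrates the CG step only, not the paper's specific Hessian approximation, and the function name `conjugate_gradient` is illustrative.

```python
import numpy as np

def conjugate_gradient(hvp, b, iters=50, tol=1e-10):
    """Solve H x = b given only a Hessian-vector product hvp(v)."""
    x = np.zeros_like(b)
    r = b - hvp(x)          # initial residual
    p = r.copy()            # initial search direction
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Toy positive-definite "Hessian" and gradient standing in for an FCNN loss
H = np.array([[3.0, 1.0], [1.0, 2.0]])
g = np.array([1.0, -1.0])

# One approximate Newton step: d = -H^{-1} g, computed matrix-free
step = conjugate_gradient(lambda v: H @ v, -g)
print(np.allclose(H @ step, -g))  # → True
```

For a 2x2 system CG converges in at most two iterations; in a real FCNN setting `hvp` would be implemented with automatic differentiation so the Hessian is never materialized.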
no code implementations • 10 Aug 2017 • Shang-Xuan Zou, Chun-Yen Chen, Jui-Lin Wu, Chun-Nan Chou, Chia-Chin Tsao, Kuan-Chieh Tung, Ting-Wei Lin, Cheng-Lung Sung, Edward Y. Chang
Scale of data and scale of computation infrastructures together enable the current deep learning renaissance.
no code implementations • 25 Jul 2017 • Chun-Nan Chou, Chuen-Kai Shie, Fu-Chieh Chang, Jocelyn Chang, Edward Y. Chang
Deep learning owes its success to three key factors: scale of data, enhanced models to learn representations from data, and scale of computation.
no code implementations • CVPR 2017 • Che-Han Chang, Chun-Nan Chou, Edward Y. Chang
The main component of this architecture is a Lucas-Kanade layer that performs the inverse compositional algorithm on convolutional feature maps.
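The inverse compositional formulation precomputes the template gradient and Gauss-Newton Hessian once, then iterates cheap warp updates. A minimal 1-D translation-only sketch is below; it shows the classic inverse compositional Lucas-Kanade update, not the paper's convolutional-feature-map layer, and `ic_lk_translation` is an illustrative name.

```python
import numpy as np

def ic_lk_translation(template, image, iters=20):
    """Estimate a 1-D shift p such that image(x + p) ~ template(x)."""
    idx = np.arange(len(template), dtype=float)
    grad_T = np.gradient(template)   # template gradient, computed once (IC trick)
    H = grad_T @ grad_T              # 1x1 Gauss-Newton Hessian, also fixed
    p = 0.0
    for _ in range(iters):
        warped = np.interp(idx + p, idx, image)  # I(W(x; p)), clamped at borders
        error = warped - template
        dp = (grad_T @ error) / H
        p -= dp                      # inverse compositional update: compose with W(dp)^-1
    return p

# Build a smooth template and a copy shifted by 3 samples
idx = np.arange(200, dtype=float)
template = np.sin(4 * np.pi * idx / 199)
true_shift = 3.0
image = np.interp(idx - true_shift, idx, template)

p_hat = ic_lk_translation(template, image)
print(round(p_hat, 2))  # close to the true 3-sample shift
```

Because the gradient and Hessian depend only on the template, each iteration costs just one warp and one dot product; the same structure extends to 2-D warps and, as in the paper, to feature maps instead of raw pixels.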