no code implementations • 11 Feb 2020 • Zhanhong Tan, Jiebo Song, Xiaolong Ma, Sia-Huat Tan, Hongyang Chen, Yuanqing Miao, Yi-Fu Wu, Shaokai Ye, Yanzhi Wang, Dehui Li, Kaisheng Ma
Weight pruning is a powerful technique to realize model compression.
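Below is a minimal sketch of magnitude-based weight pruning, the generic form of the technique mentioned above; the paper's specific pruning scheme may differ. It assumes PyTorch, and the `magnitude_prune` helper and the 90% sparsity level are illustrative choices, not the authors' configuration.

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, sparsity: float = 0.9) -> None:
    """Zero out the smallest-magnitude weights of each linear/conv layer.

    `sparsity` is the fraction of weights set to zero (illustrative value).
    """
    for module in model.modules():
        if isinstance(module, (nn.Linear, nn.Conv2d)):
            weight = module.weight.data
            # Number of weights that should fall below the pruning threshold.
            k = int(weight.numel() * sparsity)
            if k == 0:
                continue
            threshold = weight.abs().flatten().kthvalue(k).values
            mask = weight.abs() > threshold
            module.weight.data *= mask  # keep only large-magnitude weights

# Usage: prune a small example model to roughly 90% sparsity.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
magnitude_prune(model, sparsity=0.9)
```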
1 code implementation • CVPR 2020 • Shaokai Ye, Kailu Wu, Mu Zhou, Yunfei Yang, Sia Huat Tan, Kaidi Xu, Jiebo Song, Chenglong Bao, Kaisheng Ma
Existing domain adaptation methods aim at learning features that generalize across domains.
Ranked #3 on Domain Adaptation on USPS-to-MNIST
1 code implementation • NeurIPS 2019 • Linfeng Zhang, Zhanhong Tan, Jiebo Song, Jingwei Chen, Chenglong Bao, Kaisheng Ma
Deep neural networks have attained remarkable achievements in various applications.
1 code implementation • ICCV 2019 • Linfeng Zhang, Jiebo Song, Anni Gao, Jingwei Chen, Chenglong Bao, Kaisheng Ma
Different from traditional knowledge distillation, a knowledge-transfer methodology among networks that forces student neural networks to approximate the softmax-layer outputs of pre-trained teacher neural networks, the proposed self-distillation framework distills knowledge within the network itself.
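The contrast can be sketched in code. Below is a minimal PyTorch sketch of the two loss formulations: traditional KD matches a student to a separate teacher's softened softmax, while self distillation lets the deepest classifier of the same network teach classifiers attached to its shallower layers. The function names, the temperature, and the weighting `alpha` are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, temperature: float = 3.0):
    """Traditional KD: student matches the teacher's softened softmax outputs."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

def self_distillation_loss(shallow_logits_list, deepest_logits, labels, alpha: float = 0.3):
    """Self distillation: the deepest classifier of the same network acts as
    teacher for classifiers attached to its shallower layers."""
    loss = F.cross_entropy(deepest_logits, labels)  # supervise the deepest head
    for shallow_logits in shallow_logits_list:
        loss = loss + F.cross_entropy(shallow_logits, labels)          # hard labels
        loss = loss + alpha * kd_loss(shallow_logits, deepest_logits.detach())  # soft targets from the same network
    return loss
```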