2 code implementations • 12 Jul 2022 • Chenxin Li, Mingbao Lin, Zhiyuan Ding, Nie Lin, Yihong Zhuang, Yue Huang, Xinghao Ding, Liujuan Cao
Knowledge Distillation (KD) transfers the knowledge from a high-capacity teacher network to strengthen a smaller student.
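For orientation, here is a minimal sketch of the standard soft-label KD objective (in the style of Hinton et al.), not the specific method proposed in this paper: the student is trained on a weighted mix of cross-entropy on the labels and a temperature-softened KL divergence toward the teacher. The function name, `T`, and `alpha` are illustrative choices.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    # Soften both distributions with temperature T; scale by T^2 so the
    # gradient magnitude stays comparable across temperatures (standard KD).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```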
1 code implementation • 29 Mar 2022 • Yunlong Zhang, Xin Lin, Yihong Zhuang, Liyan Sun, Yue Huang, Xinghao Ding, Guisheng Wang, Lin Yang, Yizhou Yu
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
no code implementations • 30 Nov 2020 • Huangxing Lin, Yihong Zhuang, Yue Huang, Xinghao Ding, Yizhou Yu, Xiaoqing Liu, John Paisley
Pairing the noisy images produced by ADANI with their corresponding ground truth, a denoising CNN is then trained in a fully supervised manner.
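A rough sketch of this training setup, with ADANI treated as a black box: `noise_generator` below is a hypothetical stand-in for ADANI's noisy output, since its internals are not described in this snippet, and the loss and optimizer are assumed choices.

```python
import torch
import torch.nn as nn

def train_denoiser(denoiser, noise_generator, clean_loader, epochs=10, lr=1e-4):
    opt = torch.optim.Adam(denoiser.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for clean in clean_loader:
            with torch.no_grad():
                noisy = noise_generator(clean)   # ADANI-style synthetic noise
            loss = mse(denoiser(noisy), clean)   # fully supervised objective
            opt.zero_grad()
            loss.backward()
            opt.step()
    return denoiser
```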
no code implementations • 1 Nov 2021 • Huangxing Lin, Yihong Zhuang, Delu Zeng, Yue Huang, Xinghao Ding, John Paisley
Specifically, we treat the output of the network as a "prior" that we denoise again after "re-noising".
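One plausible reading of this idea, sketched below under assumptions the snippet does not confirm (Gaussian perturbation, a fixed noise level `sigma`, and a small number of refinement steps): the current estimate serves as the prior, is perturbed with fresh noise, and is denoised again.

```python
import torch

@torch.no_grad()
def refine_by_renoising(denoiser, noisy_img, sigma=0.05, steps=3):
    # Treat the current estimate as a "prior": re-noise it with fresh
    # Gaussian noise, then denoise again; repeat for a few iterations.
    est = denoiser(noisy_img)
    for _ in range(steps):
        renoised = est + sigma * torch.randn_like(est)
        est = denoiser(renoised)
    return est
```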
no code implementations • 22 Apr 2022 • Changxing Jing, Yan Huang, Yihong Zhuang, Liyan Sun, Yue Huang, Zhenlong Xiao, Xinghao Ding
This paper shows that it is possible to achieve flexible personalization after the convergence of the global model by introducing representation learning.
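One common way to realize post-convergence personalization with representation learning, offered here only as an assumed illustration (the paper's actual mechanism may differ): freeze the converged global model as a shared feature extractor and fine-tune a small per-client head on local data.

```python
import torch
import torch.nn as nn

def personalize(global_model: nn.Module, head: nn.Module, local_loader,
                epochs=5, lr=1e-3):
    # Freeze the converged global model; only the local head is trained,
    # so personalization happens after (and without disturbing) the
    # global model's convergence.
    for p in global_model.parameters():
        p.requires_grad_(False)
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in local_loader:
            feats = global_model(x)       # shared representation
            loss = ce(head(feats), y)     # client-specific head
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```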