no code implementations • ICCV 2023 • Zhipeng Yu, Jiaheng Liu, Haoyu Qin, Yichao Wu, Kun Hu, Jiayi Tian, Ding Liang
Knowledge distillation is an effective model compression method that improves the performance of a lightweight student model by transferring knowledge from a well-performing teacher model; it has been widely adopted in many computer vision tasks, including face recognition (FR).
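As a rough illustration of the general distillation recipe (not this paper's specific FR method), a minimal logit-distillation loss in PyTorch might look like the sketch below; the temperature `T`, the weight `alpha`, and the loss form are standard KD defaults rather than values taken from the paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Classic logit distillation: softened-KL term plus hard-label CE."""
    # Soften both distributions with temperature T; the T*T factor
    # restores the gradient magnitude reduced by the softening.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Standard supervised loss on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```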
no code implementations • 17 Nov 2022 • Jiaheng Liu, Tong He, Honghui Yang, Rui Su, Jiayi Tian, Junran Wu, Hongcheng Guo, Ke Xu, Wanli Ouyang
Previous top-performing methods for 3D instance segmentation often retain inter-task dependencies and tend to lack robustness.
1 code implementation • 28 Oct 2022 • Jiayi Tian, Chao Fang, Haonan Wang, Zhongfeng Wang
Pre-trained BERT models have achieved impressive accuracy on natural language processing (NLP) tasks.
no code implementations • 26 Mar 2021 • Jiayi Tian, Jing Zhang, Wen Li, Dong Xu
We also design an effective distribution alignment method that reduces the distribution divergence between the virtual domain and the target domain by gradually improving the compactness of the target-domain distribution during model learning.
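The snippet does not specify the paper's divergence measure or compactness objective, so the following is only a generic sketch of distribution alignment under assumed choices: an RBF-kernel MMD term to align virtual- and target-domain features, with entropy minimization on target predictions standing in for "compactness."

```python
import torch
import torch.nn.functional as F

def rbf_mmd(x, y, sigma=1.0):
    """Squared maximum mean discrepancy between two feature batches."""
    def k(a, b):
        # RBF kernel on pairwise squared Euclidean distances.
        d = torch.cdist(a, b).pow(2)
        return torch.exp(-d / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

def alignment_loss(virtual_feats, target_feats, target_logits, lam=0.1):
    # Penalize low-confidence (high-entropy) target predictions as a
    # proxy for tightening the target-domain distribution.
    probs = F.softmax(target_logits, dim=-1)
    entropy = -(probs * F.log_softmax(target_logits, dim=-1)).sum(-1).mean()
    # Combine cross-domain alignment with the compactness proxy.
    return rbf_mmd(virtual_feats, target_feats) + lam * entropy
```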