no code implementations • 6 Sep 2023 • Guang Yang, Yin Tang, Zhijian Wu, Jun Li, Jianhua Xu, Xili Wan
Recent mainstream masked distillation methods function by reconstructing selectively masked regions of a student network's feature map from the feature map of its teacher counterpart.
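To make the idea concrete, below is a minimal PyTorch sketch of generic masked feature distillation; the decoder design, mask scheme, and loss are illustrative assumptions, not this paper's exact formulation.

```python
import torch
import torch.nn as nn

class MaskedFeatureDistillation(nn.Module):
    """Generic masked feature distillation: randomly mask spatial positions
    of the student feature map, then train a small decoder to reconstruct
    the teacher's features at the masked locations (sketch only)."""

    def __init__(self, channels: int, mask_ratio: float = 0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Lightweight reconstruction head; real methods vary in design.
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
        # Binary mask over spatial positions (1 = masked / to be reconstructed).
        b, _, h, w = student_feat.shape
        mask = (torch.rand(b, 1, h, w, device=student_feat.device) < self.mask_ratio).float()
        # Zero out the masked regions of the student feature map, then reconstruct.
        reconstructed = self.decoder(student_feat * (1.0 - mask))
        # Penalize reconstruction error only at the masked positions.
        return ((reconstructed - teacher_feat.detach()) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
```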
no code implementations • 31 Jan 2023 • Guang Yang, Yin Tang, Jun Li, Jianhua Xu, Xili Wan
As a general model compression paradigm, feature-based knowledge distillation allows the student model to learn expressive features from its teacher counterpart.
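For reference, here is a minimal sketch of plain feature-based distillation as the excerpt describes it; the 1x1 adapter and MSE objective are common choices assumed for illustration, not necessarily the paper's method.

```python
import torch.nn as nn
import torch.nn.functional as F

class FeatureKD(nn.Module):
    """Plain feature-based distillation: project student features to the
    teacher's channel width, then minimize the feature-map discrepancy."""

    def __init__(self, student_channels: int, teacher_channels: int):
        super().__init__()
        # 1x1 conv adapter bridges the channel mismatch (a common choice).
        self.adapter = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        aligned = self.adapter(student_feat)
        if aligned.shape[-2:] != teacher_feat.shape[-2:]:
            # Resize spatially when the two backbones' strides differ.
            aligned = F.interpolate(aligned, size=teacher_feat.shape[-2:],
                                    mode="bilinear", align_corners=False)
        return F.mse_loss(aligned, teacher_feat.detach())
```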
no code implementations • 13 Sep 2022 • Zijie Wang, Aichun Zhu, Jingyi Xue, Xili Wan, Chao Liu, Tian Wang, Yifeng Li
We evaluate the proposed method on two text-based person retrieval datasets, CUHK-PEDES and RSTPReid.
no code implementations • 13 Sep 2022 • Zijie Wang, Aichun Zhu, Jingyi Xue, Xili Wan, Chao Liu, Tian Wang, Yifeng Li
Indeed, color information is an important basis for retrieval decisions, but over-reliance on color can distract the model from other key clues (e.g., texture information, structural information, etc.).
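One simple way to curb over-reliance on color, sketched below, is to train on randomly grayscaled copies of the input so the model must also exploit texture and structure. This augmentation is an illustrative assumption, not necessarily the mechanism this paper proposes.

```python
import torch

def random_grayscale(images: torch.Tensor, p: float = 0.5) -> torch.Tensor:
    """Randomly replace a batch item's RGB channels with its luminance,
    forcing the model to rely on texture/structure rather than color.

    images: (B, 3, H, W) tensor in RGB order.
    """
    # ITU-R BT.601 luma coefficients.
    weights = torch.tensor([0.299, 0.587, 0.114], device=images.device).view(1, 3, 1, 1)
    gray = (images * weights).sum(dim=1, keepdim=True).expand_as(images)
    # Per-item coin flip: keep color with probability 1 - p.
    keep_color = (torch.rand(images.size(0), 1, 1, 1, device=images.device) > p).float()
    return keep_color * images + (1.0 - keep_color) * gray
```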
1 code implementation • 12 Sep 2021 • Aichun Zhu, Zijie Wang, Yifeng Li, Xili Wan, Jing Jin, Tian Wang, Fangqiang Hu, Gang Hua
Many previous methods for text-based person retrieval are devoted to learning a latent common space mapping, with the aim of extracting modality-invariant features from both the visual and textual modalities.
Ranked #8 on Text based Person Retrieval on RSTPReid
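The common-space idea described in the excerpt above can be sketched as a dual projection into one shared embedding space with an in-batch matching loss; the linear projections, embedding size, and symmetric InfoNCE-style objective here are assumed for illustration rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CommonSpaceMapper(nn.Module):
    """Project visual and textual features into one shared embedding space
    so that matched image/text pairs lie close together."""

    def __init__(self, visual_dim: int, text_dim: int, embed_dim: int = 256):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, embed_dim)
        self.text_proj = nn.Linear(text_dim, embed_dim)

    def forward(self, visual_feat, text_feat, temperature: float = 0.07):
        v = F.normalize(self.visual_proj(visual_feat), dim=-1)
        t = F.normalize(self.text_proj(text_feat), dim=-1)
        # Symmetric InfoNCE-style matching loss over in-batch pairs:
        # the i-th image should match the i-th caption and vice versa.
        logits = v @ t.t() / temperature
        targets = torch.arange(v.size(0), device=v.device)
        return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```

At retrieval time, queries and gallery items are embedded once and ranked by cosine similarity in the shared space, which is what makes modality-invariant features useful.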