Search Results for author: Xili Wan

Found 5 papers, 1 paper with code

DMKD: Improving Feature-based Knowledge Distillation for Object Detection Via Dual Masking Augmentation

no code implementations · 6 Sep 2023 · Guang Yang, Yin Tang, Zhijian Wu, Jun Li, Jianhua Xu, Xili Wan

Recent mainstream masked distillation methods function by reconstructing selectively masked areas of a student network from the feature map of its teacher counterpart.
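The snippet above describes the core of masked feature distillation: the distillation loss is computed only at selectively masked positions of the feature map. A minimal numpy sketch of that idea, with all array shapes and names purely illustrative (this is a simplified stand-in, not the paper's actual DMKD method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical teacher/student feature maps of shape (C, H, W).
teacher_feat = rng.standard_normal((4, 8, 8))
student_feat = rng.standard_normal((4, 8, 8))

def masked_distillation_loss(student, teacher, mask_ratio=0.5, rng=rng):
    """Squared-error reconstruction loss restricted to randomly masked
    spatial positions -- the basic masked-distillation objective."""
    _, h, w = student.shape
    mask = rng.random((h, w)) < mask_ratio   # True marks a masked position
    diff = (student - teacher) ** 2          # per-element error
    # Average the error over masked positions only, across all channels.
    return diff[:, mask].mean()

loss = masked_distillation_loss(student_feat, teacher_feat)
```

In the actual methods, the mask is not random but derived from teacher attention, and a small generation block reconstructs the masked student features; this sketch only shows where the masking enters the loss.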

Knowledge Distillation, Object Detection +1

AMD: Adaptive Masked Distillation for Object Detection

no code implementations · 31 Jan 2023 · Guang Yang, Yin Tang, Jun Li, Jianhua Xu, Xili Wan

As a general model compression paradigm, feature-based knowledge distillation allows the student model to learn expressive features from the teacher counterpart.
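The snippet above summarizes feature-based knowledge distillation: the student imitates the teacher's intermediate feature maps, usually through a small adapter when channel counts differ. A minimal numpy sketch of that objective (all shapes and names are illustrative assumptions, not the paper's AMD formulation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative shapes: the student has fewer channels than the teacher,
# so a 1x1-style linear adapter projects student features first.
teacher_feat = rng.standard_normal((6, 8, 8))   # (C_t, H, W)
student_feat = rng.standard_normal((4, 8, 8))   # (C_s, H, W)
adapter = rng.standard_normal((6, 4)) * 0.1     # maps C_s -> C_t channels

def feature_distillation_loss(student, teacher, adapter):
    """Mean-squared error between adapted student features and the
    teacher's feature map -- the basic feature-imitation objective."""
    c_s, h, w = student.shape
    projected = (adapter @ student.reshape(c_s, -1)).reshape(-1, h, w)
    return ((projected - teacher) ** 2).mean()

loss = feature_distillation_loss(student_feat, teacher_feat, adapter)
```

In practice the adapter is learned jointly with the student, and this loss is added to the ordinary detection loss with a weighting coefficient.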

Knowledge Distillation, Model Compression +3

CAIBC: Capturing All-round Information Beyond Color for Text-based Person Retrieval

no code implementations · 13 Sep 2022 · Zijie Wang, Aichun Zhu, Jingyi Xue, Xili Wan, Chao Liu, Tian Wang, Yifeng Li

Indeed, color information is an important decision-making cue for retrieval, but over-reliance on color distracts the model from other key clues (e.g., texture information, structural information, etc.).

Decision Making, Person Retrieval +3

DSSL: Deep Surroundings-person Separation Learning for Text-based Person Retrieval

1 code implementation · 12 Sep 2021 · Aichun Zhu, Zijie Wang, Yifeng Li, Xili Wan, Jing Jin, Tian Wang, Fangqiang Hu, Gang Hua

Many previous methods for text-based person retrieval are devoted to learning a latent common-space mapping, with the purpose of extracting modality-invariant features from both the visual and textual modalities.

Person Retrieval, Retrieval +2
