1 code implementation • 27 May 2022 • Siyuan Li, Di Wu, Fang Wu, Zelin Zang, Kai Wang, Lei Shang, Baigui Sun, Hao Li, Stan Z. Li
We observe that MIM essentially teaches the model to learn better middle-level interactions among patches and extract more generalized features.
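The masking-and-reconstruction mechanism behind MIM can be sketched as follows. This is a minimal illustration, not the paper's method: the mask ratio, the patch grid, and the helper names (`random_patch_mask`, `mim_loss`) are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_patch_mask(num_patches, mask_ratio=0.6):
    """Pick which patch indices to hide from the encoder (hypothetical helper)."""
    n_mask = int(num_patches * mask_ratio)
    idx = rng.permutation(num_patches)[:n_mask]
    mask = np.zeros(num_patches, dtype=bool)
    mask[idx] = True
    return mask

def mim_loss(pred, target, mask):
    """Mean-squared reconstruction error, computed only on the masked patches."""
    diff = (pred - target) ** 2
    return diff[mask].mean()

# 196 patches (a 14x14 grid) of dimension 768, as in a ViT-Base encoder
patches = rng.standard_normal((196, 768))
mask = random_patch_mask(196)
pred = patches + 0.1 * rng.standard_normal((196, 768))  # stand-in for model output
loss = mim_loss(pred, patches, mask)
```

Because the loss is restricted to masked positions, the model can only reduce it by predicting hidden patches from their visible neighbors, which is the patch-interaction signal the snippet refers to.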
1 code implementation • 3 Dec 2021 • Shiming Chen, Ziming Hong, Yang Liu, Guo-Sen Xie, Baigui Sun, Hao Li, Qinmu Peng, Ke Lu, Xinge You
Although some attention-based models have attempted to learn such region features in a single image, the transferability and discriminative attribute localization of visual features are typically neglected.
1 code implementation • NeurIPS 2021 • Shiming Chen, Guo-Sen Xie, Yang Liu, Qinmu Peng, Baigui Sun, Hao Li, Xinge You, Ling Shao
Specifically, HSVA aligns the semantic and visual domains by adopting a hierarchical two-step adaptation, i.e., structure adaptation and distribution adaptation.
no code implementations • 29 Sep 2021 • Yang Liu, Zhipeng Zhou, Lei Shang, Baigui Sun, Hao Li, Rong Jin
Unsupervised domain adaptation (UDA) aims to transfer the knowledge from a labeled source domain to an unlabeled target domain.
no code implementations • 1 Sep 2021 • Yi Xu, Lei Shang, Jinxing Ye, Qi Qian, Yu-Feng Li, Baigui Sun, Hao Li, Rong Jin
In this work, we develop a simple yet powerful framework whose key idea is to select a subset of training examples from the unlabeled data when applying existing SSL methods, so that only unlabeled examples whose pseudo labels are related to the labeled data are used to train models.
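A minimal sketch of this selection idea: keep an unlabeled example only if its pseudo label falls inside the labeled class set and its confidence is high. The threshold rule and the function name are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

def select_unlabeled(probs, labeled_classes, tau=0.8):
    """Keep unlabeled examples whose pseudo label lies in the labeled class
    set and whose confidence exceeds tau (hypothetical selection rule)."""
    pseudo = probs.argmax(axis=1)         # pseudo label per example
    conf = probs.max(axis=1)              # confidence of that pseudo label
    keep = np.isin(pseudo, list(labeled_classes)) & (conf >= tau)
    return keep, pseudo

probs = np.array([
    [0.90, 0.05, 0.05],  # confident class 0 -> kept
    [0.40, 0.30, 0.30],  # low confidence -> dropped
    [0.10, 0.10, 0.80],  # confident class 2, outside labeled set -> dropped
])
keep, pseudo = select_unlabeled(probs, labeled_classes={0, 1})
```

The surviving subset would then be fed to the chosen SSL method in place of the full unlabeled pool.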
1 code implementation • ICCV 2021 • Hongbin Xu, Zhipeng Zhou, Yali Wang, Wenxiong Kang, Baigui Sun, Hao Li, Yu Qiao
Specifically, the limitations can be categorized into two types: ambiguous supervision in the foreground and invalid supervision in the background.
no code implementations • 30 Jun 2021 • Di Wu, Siyuan Li, Zelin Zang, Kai Wang, Lei Shang, Baigui Sun, Hao Li, Stan Z. Li
In this paper, we first point out that current contrastive methods are prone to memorizing background/foreground texture and therefore have a limitation in localizing the foreground object.
1 code implementation • CVPR 2022 • Kai Wang, Shuo Wang, Panpan Zhang, Zhipeng Zhou, Zheng Zhu, Xiaobo Wang, Xiaojiang Peng, Baigui Sun, Hao Li, Yang You
This method adopts Dynamic Class Pool (DCP) for storing and updating the identities features dynamically, which could be regarded as a substitute for the FC layer.
Ranked #1 on Face Verification on IJB-C (training dataset metric)
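The Dynamic Class Pool idea, a feature store that stands in for the classifier's FC layer, can be sketched roughly as below. The momentum update and cosine-similarity scoring are assumptions for this toy version, not the paper's exact formulation.

```python
import numpy as np

class DynamicClassPool:
    """Toy Dynamic Class Pool: keeps one running feature per identity and
    scores samples by cosine similarity, replacing an FC layer whose weight
    rows would otherwise grow with the number of identities."""

    def __init__(self, dim, momentum=0.9):
        self.dim, self.momentum = dim, momentum
        self.pool = {}  # identity id -> unit-norm feature

    def update(self, feat, identity):
        """Insert a new identity, or blend into the stored feature (EMA)."""
        feat = feat / np.linalg.norm(feat)
        if identity in self.pool:
            mixed = self.momentum * self.pool[identity] + (1 - self.momentum) * feat
            self.pool[identity] = mixed / np.linalg.norm(mixed)
        else:
            self.pool[identity] = feat

    def logits(self, feat):
        """Cosine similarity of a query feature against every stored identity."""
        feat = feat / np.linalg.norm(feat)
        ids = sorted(self.pool)
        return ids, np.array([self.pool[i] @ feat for i in ids])

pool = DynamicClassPool(dim=4)
pool.update(np.array([1.0, 0.0, 0.0, 0.0]), identity=7)
pool.update(np.array([0.0, 1.0, 0.0, 0.0]), identity=9)
ids, scores = pool.logits(np.array([0.9, 0.1, 0.0, 0.0]))
```

Because identities are stored and updated dynamically, memory scales with the identities actually seen rather than with a fixed classifier width.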
no code implementations • 23 Apr 2021 • Jinxing Ye, Xiaojiang Peng, Baigui Sun, Kai Wang, Xiuyu Sun, Hao Li, Hanqing Wu
In this paper, we repurpose the well-known Transformer and introduce a Face Transformer for supervised face clustering.
2 code implementations • CVPR 2022 • Yang Liu, Fei Wang, Jiankang Deng, Zhipeng Zhou, Baigui Sun, Hao Li
As a result, practical solutions on label assignment, scale-level data augmentation, and reducing false alarms are necessary for advancing face detectors.
no code implementations • 18 Dec 2020 • Kai Wang, Yuxin Gu, Xiaojiang Peng, Panpan Zhang, Baigui Sun, Hao Li
Domain diversities, including inconsistent annotations and varied image-collection conditions, inevitably exist among facial expression recognition (FER) datasets, posing an evident challenge for adapting a FER model trained on one dataset to another.
5 code implementations • ICCV 2019 • Qi Qian, Lei Shang, Baigui Sun, Juhua Hu, Hao Li, Rong Jin
The set of triplet constraints has to be sampled within the mini-batch.
Ranked #17 on Metric Learning on CUB-200-2011 (using extra training data)
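The in-batch constraint mentioned above, that triplets must be formed from examples inside the current mini-batch, can be made concrete with a small enumeration sketch. The exhaustive enumeration here is an illustration; practical methods typically subsample or weight these triplets rather than use all of them.

```python
from itertools import combinations

def mine_triplets(labels):
    """Enumerate every valid (anchor, positive, negative) index triplet
    that can be formed within one mini-batch."""
    triplets = []
    n = len(labels)
    for a, p in combinations(range(n), 2):
        if labels[a] != labels[p]:
            continue  # anchor and positive must share a label
        for neg in range(n):
            if labels[neg] != labels[a]:
                triplets.append((a, p, neg))
    return triplets

# a batch of 4 examples drawn from two classes
trips = mine_triplets([0, 0, 1, 1])
```

This makes the sampling limitation visible: the pool of usable constraints is bounded by the class composition of each batch.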
no code implementations • 19 May 2018 • Qi Qian, Shenghuo Zhu, Jiasheng Tang, Rong Jin, Baigui Sun, Hao Li
Hence, we propose to learn the model and the adversarial distribution simultaneously with the stochastic algorithm for efficiency.
no code implementations • 20 Sep 2016 • Junxuan Chen, Baigui Sun, Hao Li, Hongtao Lu, Xian-Sheng Hua
Click-through rate (CTR) prediction for image ads is the core task of online display advertising systems, and logistic regression (LR) has frequently been applied as the prediction model.
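The LR baseline the snippet mentions can be sketched in a few lines. The features and click labels below are made up for illustration; real CTR systems would use high-dimensional sparse ad and user features.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_lr_ctr(X, y, lr=0.1, epochs=200):
    """Plain logistic regression fit by full-batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)          # predicted click probabilities
        grad_w = X.T @ (p - y) / len(y) # gradient of the log loss w.r.t. w
        grad_b = (p - y).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy data: ads with a larger first feature are clicked more often
X = np.array([[1.0, 0.2], [0.9, 0.1], [0.1, 0.8], [0.2, 0.9]])
y = np.array([1, 1, 0, 0])
w, b = train_lr_ctr(X, y)
ctr = sigmoid(X @ w + b)  # predicted CTR per ad
```

The model outputs a probability per impression, which is what an ad-serving system ranks or bids on.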