no code implementations • 15 Jul 2024 • Daichi Zhang, Zihao Xiao, Shikun Li, Fanzhao Lin, Jianmin Li, Shiming Ge
To this end, we propose to learn the Natural Consistency representation (NACO) of real face videos in a self-supervised manner, motivated by the observation that fake videos struggle to maintain natural spatiotemporal consistency even under unknown forgery methods and diverse perturbations.
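As a rough illustration only (the paper's actual NACO objective is not reproduced here), the PyTorch sketch below shows one way a self-supervised spatiotemporal-consistency signal can be extracted from real videos: adjacent-frame embeddings are pulled together while frames paired across different videos are pushed apart. The encoder architecture, temperature, and negative-sampling scheme are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical frame encoder: any image backbone emitting a D-dim embedding.
class FrameEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):  # x: (B, 3, H, W)
        return F.normalize(self.net(x), dim=-1)

def temporal_consistency_loss(encoder, clip, temperature=0.1):
    """clip: (B, T, 3, H, W) frames from real videos (assumes B >= 2).
    Adjacent frames of the same video form positive pairs; the 'next frame'
    taken from a different video in the batch serves as the negative."""
    B, T = clip.shape[:2]
    z = encoder(clip.flatten(0, 1)).view(B, T, -1)   # (B, T, D)
    anchor, positive = z[:, :-1], z[:, 1:]           # adjacent-frame pairs
    negative = positive.roll(shifts=1, dims=0)       # cross-video negatives
    pos_sim = (anchor * positive).sum(-1)            # (B, T-1)
    neg_sim = (anchor * negative).sum(-1)
    logits = torch.stack([pos_sim, neg_sim], dim=-1) / temperature
    labels = torch.zeros(B, T - 1, dtype=torch.long) # positive is index 0
    return F.cross_entropy(logits.flatten(0, 1), labels.flatten())

encoder = FrameEncoder()
loss = temporal_consistency_loss(encoder, torch.randn(4, 8, 3, 64, 64))
loss.backward()
```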
1 code implementation • 3 Jun 2024 • Hansong Zhang, Shikun Li, Fanzhao Lin, Weiping Wang, Zhenxing Qian, Shiming Ge
Specifically, from the inner-class view, we construct multiple "middle encoders" to perform pseudo long-term distribution alignment, making the condensed set a good proxy of the real one throughout the training process; from the inter-class view, we use expert models to perform distribution calibration, ensuring that the synthetic data remain in the real class region during condensation.
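A minimal sketch of the two views, with hypothetical names (`middle_encoders`, `expert`) and under the assumption that alignment is done by matching mean features of same-class batches; the paper's actual losses may differ.

```python
import torch
import torch.nn.functional as F

def alignment_loss(middle_encoders, x_syn, x_real):
    """Inner-class view (sketch): match the mean features of a condensed
    batch to those of a real batch of the same class, averaged over several
    encoders snapshotted at different training stages."""
    loss = 0.0
    for enc in middle_encoders:
        loss = loss + F.mse_loss(enc(x_syn).mean(dim=0),
                                 enc(x_real).mean(dim=0))
    return loss / len(middle_encoders)

def calibration_loss(expert, x_syn, y_syn):
    """Inter-class view (sketch): a pretrained expert classifier keeps the
    synthetic samples inside their real class region."""
    return F.cross_entropy(expert(x_syn), y_syn)
```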
2 code implementations • 26 Dec 2023 • Hansong Zhang, Shikun Li, Pengju Wang, Dan Zeng, Shiming Ge
Optimization-oriented methods have become the dominant approach in dataset condensation for achieving SOTA results.
2 code implementations • 12 Dec 2023 • Hansong Zhang, Shikun Li, Dan Zeng, Chenggang Yan, Shiming Ge
Moreover, we cluster annotators who share similar expertise into "annotator groups" so that their confusion matrices can be corrected together.
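A minimal sketch of this idea using numpy and scikit-learn, assuming each annotator's confusion matrix has already been estimated; the clustering method and the group-averaging step are illustrative, not necessarily the paper's exact correction.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_confusion_matrices(conf_mats, n_groups):
    """conf_mats: (A, C, C) array, one row-stochastic confusion matrix per
    annotator. Clusters annotators with similar matrices and replaces each
    matrix by its group average, pooling evidence across the group."""
    A, C, _ = conf_mats.shape
    flat = conf_mats.reshape(A, C * C)
    groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(flat)
    corrected = np.empty_like(conf_mats)
    for g in range(n_groups):
        corrected[groups == g] = conf_mats[groups == g].mean(axis=0)
    return corrected, groups
```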
1 code implementation • 22 Sep 2023 • Shikun Li, Xiaobo Xia, Hansong Zhang, Shiming Ge, Tongliang Liu
However, estimating multi-label noise transition matrices remains a challenging task: most existing estimators in noisy multi-class learning rely on anchor points and on accurately fitting noisy class posteriors, both of which are hard to satisfy in noisy multi-label learning.
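For context, the sketch below shows the classic anchor-point estimator from noisy multi-class learning that this sentence refers to: it requires fitted noisy class posteriors and picks, for each class, the sample most confidently predicted as that class. Variable names are illustrative.

```python
import numpy as np

def anchor_point_transition(noisy_posteriors):
    """noisy_posteriors: (N, C) fitted P(noisy label | x) for N samples.
    The anchor for class i is the sample most confidently predicted as i;
    row i of the transition matrix T is that sample's full posterior."""
    C = noisy_posteriors.shape[1]
    T = np.empty((C, C))
    for i in range(C):
        anchor = noisy_posteriors[:, i].argmax()
        T[i] = noisy_posteriors[anchor]
    return T
```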
1 code implementation • 5 Jun 2023 • Shikun Li, Xiaobo Xia, Jiankang Deng, Shiming Ge, Tongliang Liu
In real-world crowd-sourcing scenarios, noise transition matrices are both annotator- and instance-dependent.
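One illustrative way (not necessarily the paper's) to parameterize such a matrix is a small network conditioned on both the instance feature and an annotator embedding, emitting a row-stochastic C x C matrix.

```python
import torch
import torch.nn as nn

class TransitionNet(nn.Module):
    """Illustrative parameterization of an annotator- and instance-dependent
    transition matrix; architecture and embedding size are assumptions."""
    def __init__(self, feat_dim, n_annotators, n_classes, emb_dim=16):
        super().__init__()
        self.annotator_emb = nn.Embedding(n_annotators, emb_dim)
        self.head = nn.Linear(feat_dim + emb_dim, n_classes * n_classes)
        self.n_classes = n_classes

    def forward(self, feats, annotator_ids):
        h = torch.cat([feats, self.annotator_emb(annotator_ids)], dim=-1)
        logits = self.head(h).view(-1, self.n_classes, self.n_classes)
        return logits.softmax(dim=-1)  # each row sums to 1
```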
1 code implementation • 8 Mar 2022 • Shikun Li, Tongliang Liu, Jiyong Tan, Dan Zeng, Shiming Ge
This raises the following important question: how can we effectively use a small amount of trusted data to facilitate robust classifier learning from multiple annotators?
1 code implementation • CVPR 2022 • Shikun Li, Xiaobo Xia, Shiming Ge, Tongliang Liu
In the selection process, we first identify confident examples by measuring the agreement between learned representations and given labels; these examples are then used to build confident pairs.
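A rough sketch of such a selection step, assuming a k-nearest-neighbor agreement criterion in the learned representation space; k, the agreement threshold, and the pairing rule are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def select_confident(features, labels, k=10, agree_ratio=0.8):
    """An example is 'confident' when most of its k nearest neighbors in
    representation space share its given label; confident pairs are
    same-label pairs among the confident examples."""
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T
    np.fill_diagonal(sim, -np.inf)             # exclude self-matches
    knn = np.argsort(-sim, axis=1)[:, :k]      # (N, k) neighbor indices
    agreement = (labels[knn] == labels[:, None]).mean(axis=1)
    confident = np.where(agreement >= agree_ratio)[0]
    pairs = [(i, j) for i in confident for j in confident
             if i < j and labels[i] == labels[j]]
    return confident, pairs
```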
Ranked #11 on Image Classification on mini WebVision 1.0
no code implementations • 23 Mar 2021 • Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge
Inspired by this observation, we propose an evolutionary knowledge distillation approach to improve the transfer effectiveness of teacher knowledge.
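The evolving-teacher mechanism itself is not sketched here, but the base ingredient it builds on, the standard distillation loss of Hinton et al., looks as follows; the temperature and mixing weight are conventional choices, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Standard knowledge-distillation loss: KL divergence between softened
    teacher and student distributions, mixed with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard
```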