3 code implementations • 26 Dec 2023 • Hansong Zhang, Shikun Li, Pengju Wang, Dan Zeng, Shiming Ge
Optimization-oriented methods are currently the primary approach in dataset condensation and achieve state-of-the-art results.
2 code implementations • 12 Dec 2023 • Hansong Zhang, Shikun Li, Dan Zeng, Chenggang Yan, Shiming Ge
Moreover, we cluster the "annotator groups" who share similar expertise so that their confusion matrices can be corrected together.
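A minimal sketch of the grouping idea, under assumptions of my own (the function name, the distance threshold, and the greedy grouping rule are hypothetical, not the paper's method): annotators whose estimated confusion matrices lie close together are placed in one group, so each group's matrix can be corrected jointly.

```python
import numpy as np

# Hypothetical sketch: group annotators whose estimated confusion matrices
# are similar, so each group's matrix can be corrected together.
# The threshold and greedy rule are illustrative assumptions.
def group_annotators(confusion_matrices, threshold=0.2):
    flat = np.stack([m.ravel() for m in confusion_matrices])
    groups = []
    for i in range(len(flat)):
        for g in groups:
            # join the first existing group whose representative is close enough
            if np.linalg.norm(flat[i] - flat[g[0]]) < threshold:
                g.append(i)
                break
        else:
            groups.append([i])  # otherwise start a new group
    return groups

mats = [np.array([[0.90, 0.10], [0.10, 0.90]]),  # reliable annotator
        np.array([[0.88, 0.12], [0.12, 0.88]]),  # similar expertise
        np.array([[0.60, 0.40], [0.50, 0.50]])]  # noisier annotator
print(group_annotators(mats))  # annotators 0 and 1 end up grouped
```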
1 code implementation • 22 Sep 2023 • Shikun Li, Xiaobo Xia, Hansong Zhang, Shiming Ge, Tongliang Liu
However, estimating multi-label noise transition matrices remains challenging: most existing estimators from noisy multi-class learning rely on anchor points and on accurately fitting noisy class posteriors, both of which are hard to satisfy in noisy multi-label learning.
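To make the underlying object concrete, here is a minimal multi-class sketch (illustrative numbers, not from the paper) of how a noise transition matrix links the clean and noisy label posteriors:

```python
import numpy as np

# Noise transition matrix T, with T[i, j] = P(noisy label = j | clean label = i).
# Values are illustrative only.
T = np.array([
    [0.9, 0.1],   # clean class 0 is flipped to class 1 with prob 0.1
    [0.2, 0.8],   # clean class 1 is flipped to class 0 with prob 0.2
])

clean_posterior = np.array([0.7, 0.3])  # P(clean label | x)
noisy_posterior = clean_posterior @ T   # P(noisy label | x)
print(noisy_posterior)  # [0.69, 0.31]
```

Given an accurate T, a classifier trained on noisy labels can be corrected toward the clean posterior; the difficulty the excerpt points to is estimating T itself.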
1 code implementation • 5 Jun 2023 • Shikun Li, Xiaobo Xia, Jiankang Deng, Shiming Ge, Tongliang Liu
In real-world crowd-sourcing scenarios, noise transition matrices are both annotator- and instance-dependent.
1 code implementation • 8 Mar 2022 • Shikun Li, Tongliang Liu, Jiyong Tan, Dan Zeng, Shiming Ge
This raises the following important question: how can we effectively use a small amount of trusted data to facilitate robust classifier learning from multiple annotators?
1 code implementation • CVPR 2022 • Shikun Li, Xiaobo Xia, Shiming Ge, Tongliang Liu
In the selection process, we first measure the agreement between learned representations and given labels to identify confident examples, which are then used to build confident pairs.
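One simple way to instantiate "agreement between learned representations and given labels" is nearest-prototype agreement. The sketch below is an assumption of mine, not the paper's selection rule: an example counts as confident when the class prototype nearest to its representation matches its given label.

```python
import numpy as np

# Hypothetical sketch of confident-example selection: keep an example when
# its nearest class prototype (mean representation) agrees with its label.
def select_confident(features, labels, num_classes):
    prototypes = np.stack([features[labels == c].mean(axis=0)
                           for c in range(num_classes)])
    # distance from every example to every class prototype
    dists = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=2)
    preds = dists.argmin(axis=1)  # nearest-prototype prediction
    return np.flatnonzero(preds == labels)

features = np.array([[0.0, 1.0], [0.1, 0.9], [1.0, 0.0], [0.9, 0.2]])
labels = np.array([0, 0, 1, 0])  # last label disagrees with its representation
print(select_confident(features, labels, num_classes=2))
```

The disagreeing example (index 3) is filtered out; the remaining confident examples would then be paired up for the next stage.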
Ranked #11 on Image Classification on mini WebVision 1.0
no code implementations • 23 Mar 2021 • Kangkai Zhang, Chunhui Zhang, Shikun Li, Dan Zeng, Shiming Ge
Inspired by this, we propose an evolutionary knowledge distillation approach to improve the transfer effectiveness of teacher knowledge.