1 code implementation • 24 Dec 2024 • Yan Zhang, Guoqiang Wu, Bingzheng Wang, Teng Pang, Haoliang Sun, Yilong Yin
To fill this gap, in this paper, we propose a new memory replay-based method to tackle the imbalance issue for Macro-AUC-oriented multi-label continual learning (MLCL).
1 code implementation • 25 May 2024 • Qikai Wang, Rundong He, Yongshun Gong, Chunxiao Ren, Haoliang Sun, Xiaoshui Huang, Yilong Yin
Semi-supervised learning can significantly boost model performance by leveraging unlabeled data, particularly when labeled data is scarce.
1 code implementation • 14 Jun 2023 • Ren Wang, Haoliang Sun, Qi Wei, Xiushan Nie, Yuling Ma, Yilong Yin
The key idea is to first break the rote memories via network pruning, addressing memorization overfitting in the inner loop; the gradients of the pruned sub-networks then naturally form a high-quality augmentation of the meta-gradient, alleviating learner overfitting in the outer loop.
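The gradient-augmentation step described above can be sketched roughly as follows. This is a minimal illustration under my own assumptions (random unstructured masks, simple averaging); the function name and pruning scheme are hypothetical and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def pruned_gradient_augmentation(meta_grad, prune_rate=0.3, n_subnets=4):
    """Illustrative sketch: derive augmented meta-gradients from randomly
    pruned sub-networks. Each sub-network keeps a random subset of the
    parameters, so its gradient is the meta-gradient with the pruned
    entries zeroed out; the masked gradients are then averaged into a
    single augmented update direction."""
    augmented = []
    for _ in range(n_subnets):
        # keep roughly (1 - prune_rate) of the entries in each sub-network
        mask = rng.random(meta_grad.shape) > prune_rate
        augmented.append(meta_grad * mask)
    return np.mean(augmented, axis=0)

g = np.ones(8)                      # stand-in for a flattened meta-gradient
g_aug = pruned_gradient_augmentation(g)
```

The averaged masked gradient shrinks each coordinate toward zero by a random factor, which is the "augmentation" effect this sketch is meant to convey.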
no code implementations • CVPR 2023 • Ren Wang, Haoliang Sun, Yuling Ma, Xiaoming Xi, Yilong Yin
To overcome them, we propose a novel bi-level-optimization-based multi-view learning framework, where the representation is learned in a uniform-to-specific manner.
no code implementations • CVPR 2023 • Qi Wei, Lei Feng, Haoliang Sun, Ren Wang, Chenhui Guo, Yilong Yin
To this end, we propose a novel framework called stochastic noise-tolerated supervised contrastive learning (SNSCL) that confronts label noise by encouraging distinguishable representation.
Ranked #3 on Learning with noisy labels on Food-101
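For orientation, the objective SNSCL builds on is the standard supervised contrastive loss (pulling same-label embeddings together against all other samples). The sketch below shows only that generic loss, not the paper's stochastic noise-tolerance module:

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Generic supervised contrastive loss: for each anchor, maximize the
    softmax probability of its same-label positives over all other samples,
    averaged over positives and anchors."""
    # L2-normalize embeddings so similarities are cosine similarities
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    n = len(labels)
    total, anchors = 0.0, 0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue  # anchors with no positive are skipped
        denom = sum(np.exp(sim[i, a]) for a in range(n) if a != i)
        total += -sum(np.log(np.exp(sim[i, p]) / denom) for p in pos) / len(pos)
        anchors += 1
    return total / anchors

# Toy batch: two classes, identical embeddings within each class
feats = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
loss = supervised_contrastive_loss(feats, [0, 0, 1, 1])
```

With well-separated classes as above, the loss is close to zero; noisy labels corrupt the positive sets, which is the failure mode SNSCL addresses.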
1 code implementation • 24 Aug 2022 • Qi Wei, Haoliang Sun, Xiankai Lu, Yilong Yin
Sample selection is an effective strategy to mitigate the effect of label noise in robust learning.
1 code implementation • 8 Nov 2021 • Haoliang Sun, Chenhui Guo, Qi Wei, Zhongyi Han, Yilong Yin
In this paper, we propose warped probabilistic inference (WarPI) to adaptively rectify the training procedure for the classification network within a meta-learning scenario.
1 code implementation • 13 Aug 2021 • Zhongyi Han, Haoliang Sun, Yilong Yin
However, the learning processes of domain-invariant features and source hypothesis inevitably involve domain-specific information that would degrade the generalizability of UDA models on the target domain.
1 code implementation • 14 May 2021 • Haoliang Sun, Xiankai Lu, Haochen Wang, Yilong Yin, XianTong Zhen, Cees G. M. Snoek, Ling Shao
We define a global latent variable to represent the prototype of each object category, which we model as a probabilistic distribution.
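A distributional prototype of the kind described above can be sketched as a per-class Gaussian over support features, with queries assigned by log-density. This is a minimal stand-in under my own assumptions (diagonal Gaussians, maximum-likelihood assignment), not the paper's variational formulation:

```python
import numpy as np

def gaussian_prototypes(support_feats, support_labels):
    """Summarize each class by a Gaussian (mean, diagonal variance) over its
    support features, instead of a single point prototype."""
    protos = {}
    for c in np.unique(support_labels):
        feats = support_feats[support_labels == c]
        protos[c] = (feats.mean(axis=0), feats.var(axis=0) + 1e-6)
    return protos

def classify(query, protos):
    """Assign the class whose Gaussian gives the query the highest log-density."""
    def log_density(x, mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(protos, key=lambda c: log_density(query, *protos[c]))

rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(5, 0.1, (5, 2))])
labels = np.array([0] * 5 + [1] * 5)
protos = gaussian_prototypes(support, labels)
pred = classify(np.array([0.1, -0.1]), protos)
```

Modeling the prototype as a distribution rather than a point lets per-class uncertainty (the variance term) influence the assignment.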
1 code implementation • 8 May 2021 • Yingjun Du, Haoliang Sun, XianTong Zhen, Jun Xu, Yilong Yin, Ling Shao, Cees G. M. Snoek
Specifically, we propose learning variational random features in a data-driven manner to obtain task-specific kernels by leveraging the shared knowledge provided by related tasks in a meta-learning setting.
no code implementations • 23 Dec 2020 • Haoliang Sun, XianTong Zhen, Chris Bailey, Parham Rasoulinejad, Yilong Yin, Shuo Li
The Cobb angle, which quantitatively evaluates spinal curvature, plays an important role in scoliosis diagnosis and treatment.
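For context, the Cobb angle is the angle between the most tilted vertebral endplates above and below the curve apex. A simplified computation, assuming per-vertebra endplate tilts (in degrees) have already been estimated from landmarks, is:

```python
import numpy as np

def cobb_angle(endplate_tilts_deg):
    """Simplified Cobb angle: the angle between the most oppositely tilted
    endplates, i.e. the spread between the maximum and minimum tilt.
    Input: one estimated endplate tilt per vertebra, in degrees."""
    tilts = np.asarray(endplate_tilts_deg, dtype=float)
    return float(tilts.max() - tilts.min())

angle = cobb_angle([-12.0, -4.0, 3.0, 15.0, 8.0])  # -> 27.0
```

In practice the tilts themselves come from detected vertebral landmarks, which is the part the paper's model estimates.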
2 code implementations • ICML 2020 • Xiantong Zhen, Haoliang Sun, Ying-Jun Du, Jun Xu, Yilong Yin, Ling Shao, Cees Snoek
We propose meta variational random features (MetaVRF) to learn adaptive kernels for the base-learner, which is developed in a latent variable model by treating the random feature basis as the latent variable.
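As background for the entry above: standard random Fourier features (Rahimi and Recht) approximate an RBF kernel with a fixed random frequency basis; MetaVRF instead infers that basis as a latent variable conditioned on the task. The sketch below shows only the standard fixed-basis construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_fourier_features(X, n_features=512, gamma=1.0):
    """Standard random Fourier features approximating the RBF kernel
    exp(-gamma * ||x - y||^2). Frequencies W are drawn once at random;
    MetaVRF would instead infer W per task as a latent variable."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = rng.normal(size=(5, 3))
Z = random_fourier_features(X)
K_approx = Z @ Z.T  # inner products approximate RBF kernel values
```

The diagonal of `K_approx` is close to 1 (each point's kernel value with itself), and the approximation sharpens as `n_features` grows.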
1 code implementation • ICCV 2019 • Haoliang Sun, Ronak Mehta, Hao H. Zhou, Zhichun Huang, Sterling C. Johnson, Vivek Prabhakaran, Vikas Singh
Motivated by developments in modality transfer in vision, we study the generation of certain types of PET images from MRI data.
no code implementations • CVPR 2017 • Haoliang Sun, Xian-Tong Zhen, Yuanjie Zheng, Gongping Yang, Yilong Yin, Shuo Li
Image-set classification has recently gained great popularity due to its widespread applications in computer vision.