no code implementations • ICML 2020 • Ning Xu, Yun-Peng Liu, Jun Shu, Xin Geng
Label distribution covers a certain number of labels, representing the degree to which each label describes the instance.
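To make the label-distribution idea concrete, here is a minimal sketch in plain NumPy (our own illustration, not from the paper): a label distribution assigns each label a description degree in [0, 1], the degrees sum to one, and LDL methods commonly train against it with a KL-divergence objective.

```python
import numpy as np

# Hypothetical 3-label example: a label distribution assigns each label
# a "description degree"; the degrees are nonnegative and sum to 1.
label_distribution = np.array([0.6, 0.3, 0.1])   # label 0 describes the instance most
one_hot = np.array([1.0, 0.0, 0.0])              # classic single-label supervision, for contrast

predicted = np.array([0.5, 0.4, 0.1])            # model output after a softmax

# KL divergence D(d || p), a common LDL training objective.
eps = 1e-12
kl = np.sum(label_distribution * np.log((label_distribution + eps) / (predicted + eps)))
print(f"KL(d || p) = {kl:.4f}")
```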
no code implementations • 5 Mar 2024 • Chenqiang Gao, Chuandong Liu, Jun Shu, Fangcen Liu, Jiang Liu, Luyu Yang, Xinbo Gao, Deyu Meng
Current state-of-the-art (SOTA) 3D object detection methods often require a large amount of 3D bounding box annotations for training.
no code implementations • 13 Aug 2023 • Yongheng Sun, Fan Wang, Jun Shu, Haifeng Wang, Li Wang, Deyu Meng, Chunfeng Lian
However, segmentation on longitudinal data is challenging due to dynamic brain changes across the lifespan.
1 code implementation • 13 May 2023 • Jun Shu, Xiang Yuan, Deyu Meng, Zongben Xu
Besides, the meta-data-driven meta-loss objective combined with DAC-MR is capable of achieving better meta-level generalization.
no code implementations • 18 Jan 2023 • Kehui Ding, Jun Shu, Deyu Meng, Zongben Xu
To set such instance-dependent hyperparameters for robust losses, we propose a meta-learning method capable of adaptively learning a hyperparameter prediction function, called Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster).
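A minimal sketch of the idea as we read it: a small meta-network maps an instance-level signal (here, just the per-sample loss) to an instance-dependent hyperparameter of a robust loss. The network name and the choice of generalized cross entropy (GCE) as the robust loss are our illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseAwareAdjuster(nn.Module):
    """Illustrative meta-network: per-sample loss -> robust-loss hyperparameter q in (0, 1)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, sample_loss):
        # sigmoid keeps q in (0, 1), the valid range for generalized cross entropy
        return torch.sigmoid(self.net(sample_loss.unsqueeze(1))).squeeze(1)

def generalized_ce(logits, targets, q):
    """GCE robust loss (Zhang & Sabuncu, 2018): (1 - p_y^q) / q, computed per sample."""
    p_y = F.softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return (1.0 - p_y.clamp_min(1e-6) ** q) / q

logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
with torch.no_grad():
    ce = F.cross_entropy(logits, targets, reduction="none")
adjuster = NoiseAwareAdjuster()
q = adjuster(ce)                                   # instance-dependent hyperparameters
loss = generalized_ce(logits, targets, q).mean()   # in the paper, the adjuster is meta-learned
```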
no code implementations • ICCV 2023 • Yongheng Sun, Fan Wang, Jun Shu, Haifeng Wang, Li Wang, Deyu Meng, Chunfeng Lian
However, segmentation on longitudinal data is challenging due to dynamic brain changes across the lifespan.
no code implementations • 9 Dec 2022 • Xiangyu Rui, Xiangyong Cao, Jun Shu, Qian Zhao, Deyu Meng
Extensive experiments verify that the proposed HWnet can improve the generalization ability of a weighted model to adapt to more complex noise, and can also strengthen a weighted model by transferring knowledge from another weighted model.
no code implementations • 16 Feb 2022 • Minghao Zhou, Quanziang Wang, Jun Shu, Qian Zhao, Deyu Meng
Extensive research has applied deep neural networks (DNNs) to class incremental learning (Class-IL).
1 code implementation • 11 Feb 2022 • Jun Shu, Xiang Yuan, Deyu Meng, Zongben Xu
Specifically, by viewing each training class as a separate learning task, our method extracts an explicit weighting function that takes the sample loss and a task/class feature as input and produces a sample weight as output, aiming to impose adaptively varying weighting schemes on different sample classes based on their own intrinsic bias characteristics (a minimal sketch follows this entry).
Ranked #3 on Image Classification on WebVision-1000
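As a rough illustration of the class-aware weighting idea described above (a sketch under our own assumptions; the paper's released code is authoritative), a small network maps each sample's loss together with a class-level feature to a weight in (0, 1):

```python
import torch
import torch.nn as nn

class ClassAwareWeightNet(nn.Module):
    """Illustrative weighting function: (sample loss, class feature) -> sample weight in (0, 1)."""
    def __init__(self, class_feat_dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1 + class_feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, sample_loss, class_feature):
        x = torch.cat([sample_loss.unsqueeze(1), class_feature], dim=1)
        return torch.sigmoid(self.net(x)).squeeze(1)

# Hypothetical usage: log class size (a bias indicator) as the class-level feature.
losses = torch.rand(16)                              # per-sample training losses
class_sizes = torch.randint(10, 1000, (16, 1)).float().log()
weight_net = ClassAwareWeightNet()
weights = weight_net(losses, class_sizes)            # adaptively varying per-class weighting
weighted_loss = (weights.detach() * losses).mean()   # in practice the weights come from a meta step
```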
1 code implementation • 6 Jul 2021 • Jun Shu, Deyu Meng, Zongben Xu
Meta learning has recently attracted much attention in the machine learning community.
no code implementations • 2 Sep 2020 • Ziyi Yang, Jun Shu, Yong Liang, Deyu Meng, Zongben Xu
Current machine learning has made great progress in computer vision and many other fields, largely attributable to large quantities of high-quality training samples, yet it does not work well on genomic data analysis, since genomic datasets are notoriously small.
no code implementations • 8 Aug 2020 • Renzhen Wang, Kaiqin Hu, Yanwen Zhu, Jun Shu, Qian Zhao, Deyu Meng
We further design a modulator network to guide the generation of the modulation parameters, and such a meta-learner can be readily adapted to train the classification network on other long-tailed datasets.
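A loose sketch of the modulation idea (our own illustrative reading, using FiLM-style per-channel scale-and-shift parameters; all names, shapes, and the choice of meta input are assumptions):

```python
import torch
import torch.nn as nn

class Modulator(nn.Module):
    """Illustrative meta-learner: emits per-channel scale/shift that modulate classifier features."""
    def __init__(self, num_channels, meta_dim=8):
        super().__init__()
        self.scale = nn.Linear(meta_dim, num_channels)
        self.shift = nn.Linear(meta_dim, num_channels)

    def forward(self, features, meta_input):
        gamma = self.scale(meta_input)   # per-channel scale
        beta = self.shift(meta_input)    # per-channel shift
        return features * (1.0 + gamma) + beta

features = torch.randn(4, 128)           # penultimate-layer features of a classifier
meta_input = torch.randn(4, 8)           # e.g., statistics describing class frequency
modulated = Modulator(128)(features, meta_input)
```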
1 code implementation • 3 Aug 2020 • Yichen Wu, Jun Shu, Qi Xie, Qian Zhao, Deyu Meng
By viewing the label correction procedure as a meta-process and using a meta-learner to automatically correct labels, we can adaptively obtain rectified soft labels iteratively according to current training problems without manually presetting hyperparameters.
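A minimal sketch of the soft-label rectification idea (our own simplification: a per-sample confidence mixes the observed one-hot label with the model's current prediction; in the paper this correction is driven by a meta-learner rather than a fixed rule):

```python
import torch
import torch.nn.functional as F

def rectify_labels(noisy_labels, logits, confidence):
    """Mix observed one-hot labels with model predictions to form rectified soft labels.

    confidence in [0, 1]: per-sample trust in the observed (possibly noisy) label.
    """
    one_hot = F.one_hot(noisy_labels, num_classes=logits.size(1)).float()
    pred = F.softmax(logits.detach(), dim=1)
    return confidence.unsqueeze(1) * one_hot + (1.0 - confidence.unsqueeze(1)) * pred

logits = torch.randn(8, 10)
noisy_labels = torch.randint(0, 10, (8,))
confidence = torch.full((8,), 0.7)    # in meta-correction methods this would be meta-learned
soft_labels = rectify_labels(noisy_labels, logits, confidence)
loss = -(soft_labels * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```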
no code implementations • 29 Jul 2020 • Jun Shu, Yanwen Zhu, Qian Zhao, Zongben Xu, Deyu Meng
Meanwhile, proper LR schedules always need to be searched from scratch for new tasks, and they often differ substantially with task variations such as data modality, network architecture, or training data capacity.
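To make the "learning an LR schedule" idea concrete, here is a toy sketch (our own construction, not the paper's architecture): a tiny LSTM reads the current training loss and emits the next learning rate, so the schedule can adapt to training dynamics rather than being preset.

```python
import torch
import torch.nn as nn

class LRScheduleNet(nn.Module):
    """Toy schedule learner: sequence of training losses -> next learning rate."""
    def __init__(self, hidden=16, max_lr=0.1):
        super().__init__()
        self.lstm = nn.LSTMCell(1, hidden)
        self.head = nn.Linear(hidden, 1)
        self.max_lr = max_lr

    def forward(self, loss_value, state=None):
        h, c = self.lstm(loss_value.view(1, 1), state)
        lr = self.max_lr * torch.sigmoid(self.head(h)).squeeze()  # LR bounded in (0, max_lr)
        return lr, (h, c)

sched = LRScheduleNet()
state = None
for step, loss_val in enumerate([2.3, 1.8, 1.2, 0.9]):   # mock training losses
    lr, state = sched(torch.tensor(loss_val), state)
    print(f"step {step}: lr = {lr.item():.4f}")
```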
no code implementations • 10 Jun 2020 • Jun Shu, Qian Zhao, Zongben Xu, Deyu Meng
To discover intrinsic inter-class transition probabilities underlying data, learning with noise transition has become an important approach for robust deep learning on corrupted labels.
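For context, the standard forward-correction recipe that noise-transition methods build on (a generic sketch, not this paper's specific estimator): given a transition matrix T with T[i, j] = P(noisy label = j | true label = i), the model's clean-class posterior is pushed through T before computing cross entropy against the noisy labels.

```python
import torch
import torch.nn.functional as F

def forward_corrected_ce(logits, noisy_labels, T):
    """Forward loss correction (Patrini et al., 2017): CE on transition-adjusted predictions."""
    clean_probs = F.softmax(logits, dim=1)       # model's estimate of the true class
    noisy_probs = clean_probs @ T                # P(noisy = j) = sum_i P(true = i) * T[i, j]
    return F.nll_loss(torch.log(noisy_probs.clamp_min(1e-12)), noisy_labels)

num_classes = 3
# Toy symmetric transition matrix: keep the label with prob 0.8, flip uniformly otherwise.
T = torch.full((num_classes, num_classes), 0.1)
T.fill_diagonal_(0.8)
logits = torch.randn(8, num_classes)
noisy_labels = torch.randint(0, num_classes, (8,))
loss = forward_corrected_ce(logits, noisy_labels, T)
```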
no code implementations • 16 Feb 2020 • Jun Shu, Qian Zhao, Keyu Chen, Zongben Xu, Deyu Meng
We attempt to integrate four kinds of SOTA robust loss functions into our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in both accuracy and generalization performance, as compared with the conventional hyperparameter tuning strategy, even with carefully tuned hyperparameters.
3 code implementations • NeurIPS 2019 • Jun Shu, Qi Xie, Lixuan Yi, Qian Zhao, Sanping Zhou, Zongben Xu, Deyu Meng
Current deep neural networks (DNNs) can easily overfit to biased training data with corrupted labels or class imbalance.
Ranked #24 on Image Classification on Clothing1M (using extra training data)
no code implementations • 14 Aug 2018 • Jun Shu, Zongben Xu, Deyu Meng
This category mainly focuses on learning with insufficient samples, and is also called small data learning in some of the literature.