no code implementations • 29 Oct 2024 • Yufei Zhang, Yicheng Xu, Hongxin Wei, Zhiping Lin, Huiping Zhuang
We introduce analytic learning into test-time adaptation (TTA), using Analytic Classifiers (ACs) to prevent model forgetting.
no code implementations • 24 Oct 2024 • Hengxiang Zhang, Hongfu Gao, Qiang Hu, Guanhua Chen, Lili Yang, BingYi Jing, Hongxin Wei, Bing Wang, Haifeng Bai, Lei Yang
While previous works have introduced several benchmarks to evaluate the safety risk of LLMs, the community still has a limited understanding of current LLMs' capability to recognize illegal and unsafe content in Chinese contexts.
1 code implementation • 12 Oct 2024 • Kangdao Liu, Hao Zeng, Jianguo Huang, Huiping Zhuang, Chi-Man Vong, Hongxin Wei
Conformal prediction, as an emerging uncertainty quantification technique, typically functions as post-hoc processing for the outputs of trained classifiers.
no code implementations • 9 Oct 2024 • Hengxiang Zhang, Songxin Zhang, BingYi Jing, Hongxin Wei
In light of this, we introduce a novel and effective method termed Fine-tuned Score Deviation (FSD), which improves the performance of current scoring functions for pretraining data detection.
no code implementations • 9 Oct 2024 • Qiang Hu, Hengxiang Zhang, Hongxin Wei
Over-parameterized models are typically vulnerable to membership inference attacks, which aim to determine whether a specific sample is included in the training set of a given model.
no code implementations • 3 Oct 2024 • Shuoyuan Wang, Yixuan Li, Hongxin Wei
In this work, we demonstrate that existing prompt tuning methods usually lead to a calibration trade-off between base and new classes: the cross-entropy loss in CoOp causes overconfidence on new classes by increasing textual label divergence, whereas the regularization of KgCoOp maintains the confidence level but results in underconfidence on base classes due to the improved accuracy.
no code implementations • 21 Aug 2024 • Minghao Liu, Zonglin Di, Jiaheng Wei, Zhongruo Wang, Hengxiang Zhang, Ruixuan Xiao, Haoyu Wang, Jinlong Pang, Hao Chen, Ankit Shah, Hongxin Wei, Xinlei He, Zhaowei Zhao, Haobo Wang, Lei Feng, Jindong Wang, James Davis, Yang Liu
Furthermore, we design three benchmark datasets focused on label noise detection, label noise learning, and class-imbalanced learning.
1 code implementation • 27 May 2024 • Hongfu Gao, Feipeng Zhang, Wenyu Jiang, Jun Shu, Feng Zheng, Hongxin Wei
In this work, we show that, on text generation tasks, noisy annotations significantly hurt the performance of in-context learning.
2 code implementations • 23 Mar 2024 • Huiping Zhuang, Yizhu Chen, Di Fang, Run He, Kai Tong, Hongxin Wei, Ziqian Zeng, Cen Chen
The GACL adopts analytic learning (a gradient-free training technique) and delivers an analytical (i.e., closed-form) solution to the GCIL scenario.
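As an illustration of what a closed-form (gradient-free) solution looks like in this setting, the sketch below fits a linear classifier on frozen features by ridge regression. This shows the analytic-learning idea only, not the GACL algorithm itself; the function name and the regularization strength gamma are assumptions for illustration.

```python
import numpy as np

def analytic_classifier(features, labels_onehot, gamma=1.0):
    """Closed-form ridge-regression weights: W = (X^T X + gamma * I)^{-1} X^T Y.

    features:      (n, d) frozen feature embeddings X
    labels_onehot: (n, K) one-hot targets Y
    """
    d = features.shape[1]
    gram = features.T @ features + gamma * np.eye(d)  # regularized Gram matrix
    return np.linalg.solve(gram, features.T @ labels_onehot)
```

Because the solution is obtained in one linear-algebra step rather than by iterative gradient descent, such classifiers can be updated without revisiting earlier gradients, which is what makes them attractive for incremental learning.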
no code implementations • 11 Mar 2024 • Hao Chen, Jindong Wang, Zihan Wang, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj
Foundation models are usually pre-trained on large-scale datasets and then adapted to downstream tasks through tuning.
1 code implementation • 20 Feb 2024 • Jianguo Huang, Jianqing Song, Xuanning Zhou, BingYi Jing, Hongxin Wei
Conformal Prediction (CP) has attracted great attention from the research community due to its strict theoretical guarantees.
1 code implementation • 8 Feb 2024 • Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei
In this work, we propose a novel method -- Convex-Concave Loss -- which increases the variance of the training loss distribution through gradient descent.
no code implementations • 8 Feb 2024 • Wenyu Jiang, Zhenlong Liu, Zejian Xie, Songxin Zhang, BingYi Jing, Hongxin Wei
In this paper, we propose a straightforward, novel, and training-free hardness score named Distorting-based Learning Complexity (DLC), to identify informative images and instructions from the downstream dataset efficiently.
1 code implementation • 7 Feb 2024 • Shuoyuan Wang, Jindong Wang, Guoqing Wang, Bob Zhang, Kaiyang Zhou, Hongxin Wei
Vision-language models (VLMs) have emerged as formidable tools, showing their strong capability in handling various open-vocabulary tasks in image recognition, text-driven visual content generation, and visual chatbots, to name a few.
1 code implementation • 6 Feb 2024 • Huajun Xi, Jianguo Huang, Kangdao Liu, Lei Feng, Hongxin Wei
To address this issue, we propose Conformal Temperature Scaling (ConfTS), a variant of temperature scaling with a novel loss function designed to enhance the efficiency of prediction sets.
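ConfTS replaces the usual calibration objective with a loss aimed at efficient prediction sets. As background, the sketch below shows plain temperature scaling fitted by negative log-likelihood on held-out logits, i.e., the standard baseline that ConfTS builds on, not the ConfTS loss itself; the learning rate and step count are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, lr=0.01, steps=200):
    """Fit a single temperature T > 0 by minimizing NLL on held-out data."""
    log_T = torch.zeros(1, requires_grad=True)  # T = exp(log_T) stays positive
    opt = torch.optim.Adam([log_T], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_T.exp(), labels)
        loss.backward()
        opt.step()
    return log_T.exp().item()
```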
no code implementations • 15 Nov 2023 • Xiaobo Xia, Jiale Liu, Shaokun Zhang, Qingyun Wu, Hongxin Wei, Tongliang Liu
Coreset selection is powerful in reducing computational costs and accelerating data processing for deep learning algorithms.
1 code implementation • 28 Oct 2023 • Shuoyuan Wang, Jindong Wang, Huajun Xi, Bob Zhang, Lei Zhang, Hongxin Wei
However, the high computational cost of optimization-based TTA algorithms makes it intractable to run on resource-constrained edge devices.
2 code implementations • 10 Oct 2023 • Jianguo Huang, Huajun Xi, Linjun Zhang, Huaxiu Yao, Yue Qiu, Hongxin Wei
Conformal prediction is a statistical framework that generates prediction sets containing ground-truth labels with a desired coverage guarantee.
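For readers unfamiliar with the mechanics, the sketch below illustrates generic split conformal prediction: a score threshold is calibrated on held-out data so that prediction sets contain the true label with a desired probability. This is the standard procedure, not the specific method of the paper; the function name and the (1 - softmax probability) nonconformity score are illustrative choices.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with a (1 - softmax probability) score.

    cal_probs:  (n, K) softmax outputs on a held-out calibration set
    cal_labels: (n,)   true labels of the calibration set
    test_probs: (m, K) softmax outputs on test inputs
    Returns an (m, K) boolean mask: class membership of each prediction set.
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the probability of the true class.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    qhat = np.quantile(cal_scores, q_level, method="higher")
    # A class enters the set whenever its score is below the threshold.
    return (1.0 - test_probs) <= qhat
```

Under exchangeability of calibration and test data, the resulting sets cover the true label with probability at least 1 - alpha, which is the coverage guarantee referred to above.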
1 code implementation • 29 Sep 2023 • Hao Chen, Jindong Wang, Ankit Shah, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj
This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.
1 code implementation • 12 Jun 2023 • Senlin Shu, Shuo He, Haobo Wang, Hongxin Wei, Tao Xiang, Lei Feng
In this paper, we propose a generalized URE that can be equipped with arbitrary loss functions while maintaining the theoretical guarantees, given unlabeled data for LAC.
2 code implementations • 3 Jun 2023 • Wenyu Jiang, Hao Cheng, Mingcai Chen, Chongjun Wang, Hongxin Wei
Modern neural networks are known to give overconfident predictions for out-of-distribution inputs when deployed in the open world.
no code implementations • CVPR 2024 • Shiyu Tian, Hongxin Wei, Yiqun Wang, Lei Feng
In this paper, we propose a new method called CroSel, which leverages historical predictions from the model to identify true labels for most training examples.
no code implementations • 8 Dec 2022 • Hongxin Wei, Huiping Zhuang, Renchunzi Xie, Lei Feng, Gang Niu, Bo An, Yixuan Li
In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks.
3 code implementations • 17 Jun 2022 • Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An
Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.
1 code implementation • 30 May 2022 • Huiping Zhuang, Zhenyu Weng, Hongxin Wei, Renchunzi Xie, Kar-Ann Toh, Zhiping Lin
Class-incremental learning (CIL) learns a classification model with training data of different classes arising progressively.
2 code implementations • 19 May 2022 • Hongxin Wei, Renchunzi Xie, Hao Cheng, Lei Feng, Bo An, Yixuan Li
Our method is motivated by the analysis that the norm of the logits keeps increasing during training, leading to overconfident outputs.
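That observation suggests decoupling the loss from the logit magnitude during training. A minimal sketch of the logit-normalization idea, assuming a standard PyTorch training loop (the temperature value is illustrative, not a tuned default):

```python
import torch
import torch.nn.functional as F

def logit_norm_loss(logits, targets, tau=0.04):
    """Cross-entropy computed on L2-normalized logits.

    Normalization removes the logit magnitude from the loss, so training
    can no longer make the loss smaller simply by inflating the logit
    norm, which curbs overconfident outputs.
    """
    norms = torch.norm(logits, p=2, dim=-1, keepdim=True) + 1e-7
    return F.cross_entropy(logits / (norms * tau), targets)
```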
1 code implementation • 31 Jan 2022 • Lue Tao, Lei Feng, Hongxin Wei, JinFeng Yi, Sheng-Jun Huang, Songcan Chen
Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbation.
3 code implementations • 16 Jan 2022 • Renchunzi Xie, Hongxin Wei, Lei Feng, Bo An
Although there have been a few studies on this problem, most of them only exploit unidirectional relationships from the source domain to the target domain.
no code implementations • 17 Oct 2021 • Ziqi Zhang, Yuexiang Li, Hongxin Wei, Kai Ma, Tao Xu, Yefeng Zheng
The hard samples, which are beneficial for classifier learning, are often mistakenly treated as noise in such a setting, since both hard samples and samples with noisy labels yield relatively larger loss values than easy cases.
4 code implementations • NeurIPS 2021 • Hongxin Wei, Lue Tao, Renchunzi Xie, Bo An
Learning with noisy labels is a practically challenging problem in weakly supervised learning.
no code implementations • 23 Dec 2020 • Rundong Wang, Hongxin Wei, Bo An, Zhouyan Feng, Jun Yao
Portfolio management via reinforcement learning is at the forefront of fintech research, exploring how to optimally reallocate a fund across different financial assets over the long term through trial and error.
no code implementations • 9 Dec 2020 • Hongxin Wei, Lei Feng, Rundong Wang, Bo An
Deep neural networks have been shown to easily overfit to biased training data with label noise or class imbalance.
2 code implementations • CVPR 2020 • Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An
The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels.
Ranked #11 on Learning with noisy labels on CIFAR-10N-Random3