Search Results for author: Hongxin Wei

Found 34 papers, 19 papers with code

Analytic Continual Test-Time Adaptation for Multi-Modality Corruption

no code implementations • 29 Oct 2024 • Yufei Zhang, Yicheng Xu, Hongxin Wei, Zhiping Lin, Huiping Zhuang

We introduce analytic learning into TTA, using Analytic Classifiers (ACs) to prevent model forgetting.

Pseudo Label, Test-time Adaptation
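
Analytic learning here replaces gradient updates with a closed-form least-squares fit on frozen-backbone features, so refitting the classifier cannot drift the way iterative updates do. A minimal sketch of the ridge-regression form such Analytic Classifiers build on (notation illustrative, not the paper's exact formulation):

```python
import numpy as np

def fit_analytic_classifier(X, Y, gamma=1.0):
    """Closed-form ridge classifier: W = (X^T X + gamma*I)^{-1} X^T Y.

    X: (n, d) frozen-backbone features; Y: (n, c) one-hot labels.
    No gradient descent is involved, which is what lets analytic
    methods avoid the forgetting induced by iterative tuning.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + gamma * np.eye(d), X.T @ Y)

# toy usage: predict with argmax over X_new @ W
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))
Y = np.eye(4)[rng.integers(0, 4, size=100)]
W = fit_analytic_classifier(X, Y)
pred = (X @ W).argmax(axis=1)
```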

ChineseSafe: A Chinese Benchmark for Evaluating Safety in Large Language Models

no code implementations • 24 Oct 2024 • Hengxiang Zhang, Hongfu Gao, Qiang Hu, Guanhua Chen, Lili Yang, BingYi Jing, Hongxin Wei, Bing Wang, Haifeng Bai, Lei Yang

While previous works have introduced several benchmarks to evaluate the safety risk of LLMs, the community still has a limited understanding of current LLMs' capability to recognize illegal and unsafe content in Chinese contexts.

C-Adapter: Adapting Deep Classifiers for Efficient Conformal Prediction Sets

1 code implementation • 12 Oct 2024 • Kangdao Liu, Hao Zeng, Jianguo Huang, Huiping Zhuang, Chi-Man Vong, Hongxin Wei

Conformal prediction, as an emerging uncertainty quantification technique, typically functions as post-hoc processing for the outputs of trained classifiers.

Conformal Prediction, Uncertainty Quantification
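
The post-hoc recipe the paper starts from is standard split conformal prediction: calibrate a score threshold on held-out data, then return every label whose score clears it. A generic sketch of that baseline pipeline (not C-Adapter itself):

```python
import numpy as np

def split_conformal(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with the 1 - p_y nonconformity score.

    cal_probs: (n, c) softmax outputs on a held-out calibration set.
    Returns a boolean (m, c) matrix of prediction sets whose marginal
    coverage is at least 1 - alpha.
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)  # finite-sample correction
    q = np.quantile(scores, level, method="higher")
    return (1.0 - test_probs) <= q
```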

Fine-tuning can Help Detect Pretraining Data from Large Language Models

no code implementations • 9 Oct 2024 • Hengxiang Zhang, Songxin Zhang, BingYi Jing, Hongxin Wei

In light of this, we introduce a novel and effective method termed Fine-tuned Score Deviation (FSD), which improves the performance of current scoring functions for pretraining data detection.
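
One schematic reading of FSD: score every candidate text before and after fine-tuning the model on a small amount of data known to be outside the pretraining set, and threshold the score deviation rather than the raw score. The sketch below is illustrative only (the perplexity-style scoring function and the decision rule are assumptions, not the paper's exact procedure):

```python
import numpy as np

def fsd_scores(score_before, score_after):
    """Fine-tuned Score Deviation (schematic): per-sample change in a
    detection score (e.g., perplexity) after briefly fine-tuning the
    model on known non-member data. Members and non-members tend to
    shift by different amounts, so the deviation separates them
    better than the raw score alone."""
    return np.asarray(score_after) - np.asarray(score_before)

deviation = fsd_scores(score_before=[12.1, 8.3], score_after=[11.9, 4.1])
is_member = deviation > np.median(deviation)  # illustrative decision rule
```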

Defending Membership Inference Attacks via Privacy-aware Sparsity Tuning

no code implementations • 9 Oct 2024 • Qiang Hu, Hengxiang Zhang, Hongxin Wei

Over-parameterized models are typically vulnerable to membership inference attacks, which aim to determine whether a specific sample is included in the training of a given model.

Understanding and Mitigating Miscalibration in Prompt Tuning for Vision-Language Models

no code implementations • 3 Oct 2024 • Shuoyuan Wang, Yixuan Li, Hongxin Wei

In this work, we demonstrate that existing prompt tuning methods usually lead to a trade-off of calibration between base and new classes: the cross-entropy loss in CoOp causes overconfidence in new classes by increasing textual label divergence, whereas the regularization of KgCoOp maintains the confidence level but results in underconfidence in base classes due to the improved accuracy.
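
The calibration gap described above is usually quantified with the Expected Calibration Error; a minimal reference implementation:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: bin predictions by confidence and average the gap between
    accuracy and mean confidence, weighted by bin population."""
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```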

On the Noise Robustness of In-Context Learning for Text Generation

1 code implementation • 27 May 2024 • Hongfu Gao, Feipeng Zhang, Wenyu Jiang, Jun Shu, Feng Zheng, Hongxin Wei

In this work, we show that, on text generation tasks, noisy annotations significantly hurt the performance of in-context learning.

In-Context Learning, text-classification, +2

GACL: Exemplar-Free Generalized Analytic Continual Learning

2 code implementations • 23 Mar 2024 • Huiping Zhuang, Yizhu Chen, Di Fang, Run He, Kai Tong, Hongxin Wei, Ziqian Zeng, Cen Chen

GACL adopts analytic learning (a gradient-free training technique) and delivers an analytical (i.e., closed-form) solution to the GCIL scenario.

class-incremental learning, Class Incremental Learning, +1
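
The closed-form solution can also be maintained recursively as tasks arrive, with no stored exemplars. A sketch of the standard recursive-least-squares update that gradient-free analytic continual learners build on (notation illustrative, not GACL's exact algorithm; new classes can be handled by widening W with zero columns):

```python
import numpy as np

def rls_update(W, P, X, Y):
    """Absorb a new task (X: (n, d) features, Y: (n, c) one-hot labels)
    into weights W using the running inverse autocorrelation P, via the
    Woodbury identity. Equivalent to refitting ridge regression on all
    data seen so far, without keeping any past samples."""
    n = X.shape[0]
    K = P @ X.T @ np.linalg.inv(np.eye(n) + X @ P @ X.T)  # gain matrix
    P = P - K @ X @ P        # update inverse autocorrelation
    W = W + K @ (Y - X @ W)  # correct weights toward the new task
    return W, P

# initialization for d-dim features, c classes, ridge strength gamma:
# W = np.zeros((d, c)); P = np.eye(d) / gamma
```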

Learning with Noisy Foundation Models

no code implementations • 11 Mar 2024 • Hao Chen, Jindong Wang, Zihan Wang, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj

Foundation models are usually pre-trained on large-scale datasets and then adapted to downstream tasks through tuning.

TorchCP: A Python Library for Conformal Prediction

1 code implementation • 20 Feb 2024 • Jianguo Huang, Jianqing Song, Xuanning Zhou, BingYi Jing, Hongxin Wei

Conformal Prediction (CP) has attracted great attention from the research community due to its strict theoretical guarantees.

Conformal Prediction, Deep Learning, +1

Mitigating Privacy Risk in Membership Inference by Convex-Concave Loss

1 code implementation • 8 Feb 2024 • Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei

In this work, we propose a novel method, Convex-Concave Loss, which promotes a high variance in the training loss distribution under gradient descent.

Exploring Learning Complexity for Efficient Downstream Dataset Pruning

no code implementations • 8 Feb 2024 • Wenyu Jiang, Zhenlong Liu, Zejian Xie, Songxin Zhang, BingYi Jing, Hongxin Wei

In this paper, we propose a straightforward, novel, and training-free hardness score named Distorting-based Learning Complexity (DLC) to efficiently identify informative images and instructions in the downstream dataset.

Informativeness

Open-Vocabulary Calibration for Fine-tuned CLIP

1 code implementation • 7 Feb 2024 • Shuoyuan Wang, Jindong Wang, Guoqing Wang, Bob Zhang, Kaiyang Zhou, Hongxin Wei

Vision-language models (VLMs) have emerged as formidable tools, showing their strong capability in handling various open-vocabulary tasks in image recognition, text-driven visual content generation, and visual chatbots, to name a few.

parameter-efficient fine-tuning

Does confidence calibration improve conformal prediction?

1 code implementation • 6 Feb 2024 • Huajun Xi, Jianguo Huang, Kangdao Liu, Lei Feng, Hongxin Wei

To address this issue, we propose Conformal Temperature Scaling (ConfTS), a variant of temperature scaling with a novel loss function designed to enhance the efficiency of prediction sets.

Conformal Prediction, Uncertainty Quantification
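
The effect ConfTS targets can be approximated with a crude grid search: choose the temperature that yields the smallest average prediction set on held-out data. This stand-in reuses the `split_conformal` sketch from the C-Adapter entry above and is not the paper's differentiable loss:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def tune_temperature_for_efficiency(logits, labels, alpha=0.1,
                                    grid=np.linspace(0.5, 3.0, 26)):
    """Pick the temperature minimizing average conformal set size on a
    held-out split (first half calibrates, second half evaluates)."""
    half = len(labels) // 2
    best_T, best_size = 1.0, np.inf
    for T in grid:
        probs = softmax(logits, T)
        sets = split_conformal(probs[:half], labels[:half], probs[half:], alpha)
        size = sets.sum(axis=1).mean()
        if size < best_size:
            best_T, best_size = T, size
    return best_T
```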

Refined Coreset Selection: Towards Minimal Coreset Size under Model Performance Constraints

no code implementations • 15 Nov 2023 • Xiaobo Xia, Jiale Liu, Shaokun Zhang, Qingyun Wu, Hongxin Wei, Tongliang Liu

Coreset selection is powerful in reducing computational costs and accelerating data processing for deep learning algorithms.

Optimization-Free Test-Time Adaptation for Cross-Person Activity Recognition

1 code implementation • 28 Oct 2023 • Shuoyuan Wang, Jindong Wang, Huajun Xi, Bob Zhang, Lei Zhang, Hongxin Wei

However, the high computational cost of optimization-based TTA algorithms makes it intractable to run on resource-constrained edge devices.

Computational Efficiency, Human Activity Recognition, +2

Conformal Prediction for Deep Classifier via Label Ranking

2 code implementations • 10 Oct 2023 • Jianguo Huang, Huajun Xi, Linjun Zhang, Huaxiu Yao, Yue Qiu, Hongxin Wei

Conformal prediction is a statistical framework that generates prediction sets containing ground-truth labels with a desired coverage guarantee.

Conformal Prediction

Understanding and Mitigating the Label Noise in Pre-training on Downstream Tasks

1 code implementation • 29 Sep 2023 • Hao Chen, Jindong Wang, Ankit Shah, Ran Tao, Hongxin Wei, Xing Xie, Masashi Sugiyama, Bhiksha Raj

This paper aims to understand the nature of noise in pre-training datasets and to mitigate its impact on downstream tasks.

A Generalized Unbiased Risk Estimator for Learning with Augmented Classes

1 code implementation • 12 Jun 2023 • Senlin Shu, Shuo He, Haobo Wang, Hongxin Wei, Tao Xiang, Lei Feng

In this paper, we propose a generalized URE that can be equipped with arbitrary loss functions while maintaining the theoretical guarantees, given unlabeled data for LAC.

Multi-class Classification

DOS: Diverse Outlier Sampling for Out-of-Distribution Detection

2 code implementations • 3 Jun 2023 • Wenyu Jiang, Hao Cheng, Mingcai Chen, Chongjun Wang, Hongxin Wei

Modern neural networks are known to give overconfident predictions for out-of-distribution inputs when deployed in the open world.

Diversity, Out-of-Distribution Detection
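
One plausible instantiation of diverse outlier sampling, in the spirit of DOS though not its exact selection rule: cluster the auxiliary outliers in feature space and take one representative per cluster, so training sees the breadth of the outlier distribution rather than its dominant modes:

```python
import numpy as np
from sklearn.cluster import KMeans

def diverse_outlier_sample(outlier_feats, n_select, seed=0):
    """Cluster outlier embeddings with k-means and return the index of
    the sample nearest each centroid: n_select outliers that cover
    distinct regions of the auxiliary distribution."""
    km = KMeans(n_clusters=n_select, n_init=10, random_state=seed).fit(outlier_feats)
    dists = km.transform(outlier_feats)  # (n, n_select) distances to centroids
    return dists.argmin(axis=0)          # one index per cluster
```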

CroSel: Cross Selection of Confident Pseudo Labels for Partial-Label Learning

no code implementations • CVPR 2024 • Shiyu Tian, Hongxin Wei, Yiqun Wang, Lei Feng

In this paper, we propose a new method called CroSel, which leverages historical predictions from the model to identify true labels for most training examples.

Partial Label Learning, Weakly-supervised Learning
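
A simplified reading of how historical predictions can select labels (not CroSel's exact cross-selection strategy): track each example's recent per-epoch predictions and commit to a pseudo label only when they agree and fall inside the candidate set:

```python
import numpy as np

def select_confident_labels(history, candidate_mask, window=5, agree=0.8):
    """history: (epochs, n) argmax predictions per epoch;
    candidate_mask: (n, c) boolean partial-label candidates.
    Returns pseudo labels, or -1 where recent predictions are
    unstable or leave the candidate set."""
    recent = history[-window:]
    labels = np.full(history.shape[1], -1)
    for i in range(history.shape[1]):
        vals, counts = np.unique(recent[:, i], return_counts=True)
        top = int(vals[counts.argmax()])
        if counts.max() / window >= agree and candidate_mask[i, top]:
            labels[i] = top
    return labels
```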

Mitigating Memorization of Noisy Labels by Clipping the Model Prediction

no code implementations • 8 Dec 2022 • Hongxin Wei, Huiping Zhuang, Renchunzi Xie, Lei Feng, Gang Niu, Bo An, Yixuan Li

In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks.

Memorization
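
"Clipping the model prediction" here amounts to bounding the norm of the logit vector, which caps the loss (and gradient) that any single mislabeled example can contribute. A minimal sketch of that clipping step (the threshold tau is an illustrative hyperparameter):

```python
import torch
import torch.nn.functional as F

def logit_clip_loss(logits, targets, tau=1.0):
    """Rescale each logit vector so its L2 norm is at most tau, then
    apply cross-entropy. A bounded logit norm bounds the per-sample
    loss, which curbs memorization of noisy labels."""
    norms = logits.norm(dim=-1, keepdim=True)
    clipped = logits * torch.clamp(tau / norms, max=1.0)
    return F.cross_entropy(clipped, targets)
```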

Open-Sampling: Exploring Out-of-Distribution data for Re-balancing Long-tailed datasets

3 code implementations • 17 Jun 2022 • Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An

Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.

Mitigating Neural Network Overconfidence with Logit Normalization

2 code implementations • 19 May 2022 • Hongxin Wei, Renchunzi Xie, Hao Cheng, Lei Feng, Bo An, Yixuan Li

Our method is motivated by the analysis that the norm of the logits keeps increasing during training, leading to overconfident outputs.
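
The proposed fix, Logit Normalization, keeps the logit norm constant during training so confidence can no longer grow by mere scaling. A minimal PyTorch version of the LogitNorm loss (the temperature value is illustrative):

```python
import torch
import torch.nn.functional as F

def logit_norm_loss(logits, targets, tau=0.04):
    """LogitNorm: project logits onto the unit sphere, divide by a
    temperature tau, then apply cross-entropy. With the norm fixed,
    training must change the direction of the logits rather than
    inflate their magnitude."""
    normed = logits / (logits.norm(dim=-1, keepdim=True) + 1e-7)
    return F.cross_entropy(normed / tau, targets)
```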

Can Adversarial Training Be Manipulated By Non-Robust Features?

1 code implementation • 31 Jan 2022 • Lue Tao, Lei Feng, Hongxin Wei, JinFeng Yi, Sheng-Jun Huang, Songcan Chen

Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbation.

GearNet: Stepwise Dual Learning for Weakly Supervised Domain Adaptation

3 code implementations • 16 Jan 2022 • Renchunzi Xie, Hongxin Wei, Lei Feng, Bo An

Although there have been a few studies on this problem, most of them only exploit unidirectional relationships from the source domain to the target domain.

Domain Adaptation

Alleviating Noisy-label Effects in Image Classification via Probability Transition Matrix

no code implementations • 17 Oct 2021 • Ziqi Zhang, Yuexiang Li, Hongxin Wei, Kai Ma, Tao Xu, Yefeng Zheng

Hard samples, which are beneficial for classifier learning, are often mistakenly treated as noise in such a setting, since both hard samples and noisily labeled ones yield relatively larger loss values than easy cases.

Image Classification

Open-sampling: Re-balancing Long-tailed Datasets with Out-of-Distribution Data

no code implementations • 29 Sep 2021 • Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An

Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.

Deep Stock Trading: A Hierarchical Reinforcement Learning Framework for Portfolio Optimization and Order Execution

no code implementations • 23 Dec 2020 • Rundong Wang, Hongxin Wei, Bo An, Zhouyan Feng, Jun Yao

Portfolio management via reinforcement learning is at the forefront of fintech research, which explores how to optimally reallocate a fund into different financial assets over the long term by trial-and-error.

Hierarchical Reinforcement Learning, Management, +2

MetaInfoNet: Learning Task-Guided Information for Sample Reweighting

no code implementations • 9 Dec 2020 • Hongxin Wei, Lei Feng, Rundong Wang, Bo An

Deep neural networks have been shown to easily overfit to biased training data with label noise or class imbalance.

Meta-Learning

Combating noisy labels by agreement: A joint training method with co-regularization

2 code implementations • CVPR 2020 • Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An

The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels.

Diversity, Learning with noisy labels, +1
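
The joint training pairs two networks under a shared per-example loss: supervised cross-entropy for both, plus a symmetric KL term that rewards agreement, with updates restricted to the small-loss (likely clean) fraction. A condensed sketch of such a co-regularized objective (the weighting and keep ratio are illustrative):

```python
import torch
import torch.nn.functional as F

def co_regularized_loss(logits1, logits2, targets, lam=0.65, keep_ratio=0.7):
    """Per-example loss for two jointly trained networks: CE for both
    plus symmetric KL between their predictions; only the small-loss
    examples (assumed clean) contribute to the update."""
    ce = F.cross_entropy(logits1, targets, reduction="none") \
       + F.cross_entropy(logits2, targets, reduction="none")
    lp1, lp2 = F.log_softmax(logits1, -1), F.log_softmax(logits2, -1)
    kl = F.kl_div(lp1, lp2, log_target=True, reduction="none").sum(-1) \
       + F.kl_div(lp2, lp1, log_target=True, reduction="none").sum(-1)
    per_example = (1 - lam) * ce + lam * kl
    n_keep = max(1, int(keep_ratio * len(targets)))
    return torch.topk(per_example, n_keep, largest=False).values.mean()
```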
