Search Results for author: Xiaobo Xia

Found 34 papers, 14 papers with code

Few-Shot Adversarial Prompt Learning on Vision-Language Models

no code implementations 21 Mar 2024 Yiwei Zhou, Xiaobo Xia, Zhiwei Lin, Bo Han, Tongliang Liu

The vulnerability of deep neural networks to imperceptible adversarial perturbations has attracted widespread attention.

Adversarial Robustness · Adversarial Text

Tackling Noisy Labels with Network Parameter Additive Decomposition

no code implementations 20 Mar 2024 Jingyi Wang, Xiaobo Xia, Long Lan, Xinghao Wu, Jun Yu, Wenjing Yang, Bo Han, Tongliang Liu

Given data with noisy labels, over-parameterized deep networks tend to overfit the mislabeled data, resulting in poor generalization.

Memorization

Mitigating Label Noise on Graph via Topological Sample Selection

no code implementations 4 Mar 2024 Yuhao Wu, Jiangchao Yao, Xiaobo Xia, Jun Yu, Ruxin Wang, Bo Han, Tongliang Liu

Despite their success on carefully annotated benchmarks, the effectiveness of existing graph neural networks (GNNs) can be considerably impaired in practice when real-world graph data is noisily labeled.

Learning with noisy labels

Open-Vocabulary Segmentation with Unpaired Mask-Text Supervision

1 code implementation 14 Feb 2024 Zhaoqing Wang, Xiaobo Xia, Ziye Chen, Xiao He, Yandong Guo, Mingming Gong, Tongliang Liu

With this unpaired mask-text supervision, we propose a new weakly-supervised open-vocabulary segmentation framework (Uni-OVSeg) that leverages confident pairs of mask predictions and entities in text descriptions.

Language Modelling

One Shot Learning as Instruction Data Prospector for Large Language Models

1 code implementation 16 Dec 2023 Yunshui Li, Binyuan Hui, Xiaobo Xia, Jiaxi Yang, Min Yang, Lei Zhang, Shuzheng Si, Junhao Liu, Tongliang Liu, Fei Huang, Yongbin Li

Nuggets assesses the potential of individual instruction examples to act as effective one-shot examples, thereby identifying those that can significantly enhance performance across diverse tasks.

One-Shot Learning

ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance

1 code implementation 13 Dec 2023 Ling-Hao Chen, Yuanshuo Zhang, Taohua Huang, Liangcai Su, Zeyi Lin, Xi Xiao, Xiaobo Xia, Tongliang Liu

To tackle this challenge and enhance the robustness of deep learning models against label noise in graph-based tasks, we propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE).

Denoising · Node Classification · +1

Refined Coreset Selection: Towards Minimal Coreset Size under Model Performance Constraints

no code implementations 15 Nov 2023 Xiaobo Xia, Jiale Liu, Shaokun Zhang, Qingyun Wu, Hongxin Wei, Tongliang Liu

Coreset selection is powerful in reducing computational costs and accelerating data processing for deep learning algorithms.

Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources

1 code implementation NeurIPS 2023 Haotian Zheng, Qizhou Wang, Zhen Fang, Xiaobo Xia, Feng Liu, Tongliang Liu, Bo Han

To this end, we suggest that generated data (with mistaken OOD generation) can be used to devise an auxiliary OOD detection task to facilitate real OOD detection.

Out-of-Distribution Detection · Out of Distribution (OOD) Detection · +1

IDEAL: Influence-Driven Selective Annotations Empower In-Context Learners in Large Language Models

no code implementations 16 Oct 2023 Shaokun Zhang, Xiaobo Xia, Zhaoqing Wang, Ling-Hao Chen, Jiale Liu, Qingyun Wu, Tongliang Liu

However, since the prompts need to be sampled from a large volume of annotated examples, finding the right prompt may result in high annotation costs.

In-Context Learning

Multi-Label Noise Transition Matrix Estimation with Label Correlations: Theory and Algorithm

1 code implementation 22 Sep 2023 Shikun Li, Xiaobo Xia, Hansong Zhang, Shiming Ge, Tongliang Liu

However, estimating multi-label noise transition matrices remains a challenging task, as most existing estimators in noisy multi-class learning rely on anchor points and accurate fitting of noisy class posteriors, which is hard to satisfy in noisy multi-label learning.

Multi-Label Learning

Regularly Truncated M-estimators for Learning with Noisy Labels

1 code implementation 2 Sep 2023 Xiaobo Xia, Pengqian Lu, Chen Gong, Bo Han, Jun Yu, Tongliang Liu

However, such a procedure is arguably debatable on two counts: (a) it does not consider the bad influence of noisy labels in the selected small-loss examples; (b) it does not make good use of the discarded large-loss examples, which may be clean or carry information meaningful for generalization.

Learning with noisy labels
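The small-loss criterion that this abstract pushes back on is simple to sketch. Below is a minimal, illustrative baseline (not the paper's regularly truncated M-estimator): keep the fraction of examples with the smallest per-example loss, on the assumption that they are more likely to be clean.

```python
def small_loss_selection(losses, noise_rate):
    # Keep the (1 - noise_rate) fraction of examples with the smallest loss;
    # the remaining large-loss examples are treated as likely mislabeled.
    n_keep = int(len(losses) * (1.0 - noise_rate))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i])
    return ranked[:n_keep]

losses = [0.10, 2.30, 0.05, 1.80, 0.20, 0.15]
print(small_loss_selection(losses, noise_rate=1/3))  # → [2, 0, 5, 4]
```

The abstract's two critiques map directly onto this sketch: the kept small-loss set can still contain mislabeled examples, and the discarded large-loss examples are thrown away wholesale even when they are clean or informative.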

Making Binary Classification from Multiple Unlabeled Datasets Almost Free of Supervision

no code implementations 12 Jun 2023 Yuhao Wu, Xiaobo Xia, Jun Yu, Bo Han, Gang Niu, Masashi Sugiyama, Tongliang Liu

Training a classifier by exploiting a huge amount of supervised data is expensive, or even prohibitive, in situations where the labeling cost is high.

Binary Classification · Pseudo Label

Transferring Annotator- and Instance-dependent Transition Matrix for Learning from Crowds

1 code implementation 5 Jun 2023 Shikun Li, Xiaobo Xia, Jiankang Deng, Shiming Ge, Tongliang Liu

In real-world crowd-sourcing scenarios, noise transition matrices are both annotator- and instance-dependent.

Transfer Learning

Robust Generalization against Photon-Limited Corruptions via Worst-Case Sharpness Minimization

2 code implementations CVPR 2023 Zhuo Huang, Miaoxi Zhu, Xiaobo Xia, Li Shen, Jun Yu, Chen Gong, Bo Han, Bo Du, Tongliang Liu

Experimentally, we simulate photon-limited corruptions using CIFAR10/100 and ImageNet30 datasets and show that SharpDRO exhibits a strong generalization ability against severe corruptions and exceeds well-known baseline methods with large performance gains.

Dynamics-Aware Loss for Learning with Label Noise

1 code implementation 21 Mar 2023 Xiu-Chuan Li, Xiaobo Xia, Fei Zhu, Tongliang Liu, Xu-Yao Zhang, Cheng-Lin Liu

Label noise poses a serious threat to deep neural networks (DNNs).

Holistic Label Correction for Noisy Multi-Label Classification

no code implementations ICCV 2023 Xiaobo Xia, Jiankang Deng, Wei Bao, Yuxuan Du, Bo Han, Shiguang Shan, Tongliang Liu

The issues are that we do not understand why label dependence is helpful in this problem, or how to learn and utilize label dependence using only training data with noisy multiple labels.

Classification · Memorization · +1

Combating Noisy Labels with Sample Selection by Mining High-Discrepancy Examples

no code implementations ICCV 2023 Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu

As the selected data have high discrepancies in probabilities, the divergence of the two networks can be maintained by training on such data.

Learning with noisy labels

Harnessing Out-Of-Distribution Examples via Augmenting Content and Style

1 code implementation 7 Jul 2022 Zhuo Huang, Xiaobo Xia, Li Shen, Bo Han, Mingming Gong, Chen Gong, Tongliang Liu

Machine learning models are vulnerable to Out-Of-Distribution (OOD) examples, and such a problem has drawn much attention.

Data Augmentation · Disentanglement · +3

Pluralistic Image Completion with Probabilistic Mixture-of-Experts

no code implementations 18 May 2022 Xiaobo Xia, Wenhao Yang, Jie Ren, Yewen Li, Yibing Zhan, Bo Han, Tongliang Liu

Second, the constraints for diversity are designed to be task-agnostic, which prevents them from working well.

Selective-Supervised Contrastive Learning with Noisy Labels

1 code implementation CVPR 2022 Shikun Li, Xiaobo Xia, Shiming Ge, Tongliang Liu

In the selection process, by measuring the agreement between learned representations and given labels, we first identify confident examples that are exploited to build confident pairs.

Contrastive Learning · Learning with noisy labels · +1
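One way to measure agreement between learned representations and given labels is a nearest-neighbor vote in representation space. The sketch below is a hypothetical illustration of that idea, assuming a simple k-NN majority criterion; the paper's exact measure may differ.

```python
import math

def confident_by_knn_agreement(features, labels, k=2, threshold=1.0):
    # Mark example i as confident if at least `threshold` of its k nearest
    # neighbors in representation space carry the same (possibly noisy) label.
    confident = []
    for i, (f, y) in enumerate(zip(features, labels)):
        nbrs = sorted((j for j in range(len(features)) if j != i),
                      key=lambda j: math.dist(f, features[j]))[:k]
        if sum(labels[j] == y for j in nbrs) / k >= threshold:
            confident.append(i)
    return confident

features = [(0.0,), (0.1,), (0.2,), (5.0,), (5.1,)]
labels = [0, 0, 0, 1, 1]
print(confident_by_knn_agreement(features, labels))  # → [0, 1, 2]
```

Examples 3 and 4 are excluded here because each has only one same-label neighbor among its two nearest, illustrating how examples whose labels disagree with their representation neighborhood are filtered out before building confident pairs.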

Objects in Semantic Topology

no code implementations ICLR 2022 Shuo Yang, Peize Sun, Yi Jiang, Xiaobo Xia, Ruiheng Zhang, Zehuan Yuan, Changhu Wang, Ping Luo, Min Xu

A more realistic object detection paradigm, Open-World Object Detection, has recently attracted increasing research interest in the community.

Incremental Learning · Language Modelling · +3

Co-variance: Tackling Noisy Labels with Sample Selection by Emphasizing High-variance Examples

no code implementations 29 Sep 2021 Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu

The sample selection approach is popular in learning with noisy labels, which tends to select potentially clean data out of noisy data for robust training.

Learning with noisy labels

Kernel Mean Estimation by Marginalized Corrupted Distributions

no code implementations 10 Jul 2021 Xiaobo Xia, Shuo Shan, Mingming Gong, Nannan Wang, Fei Gao, Haikun Wei, Tongliang Liu

Estimating the kernel mean in a reproducing kernel Hilbert space is a critical component in many kernel learning algorithms.

Sample Selection with Uncertainty of Losses for Learning with Noisy Labels

no code implementations NeurIPS 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

In this way, we also give large-loss but less-selected data a try; then, we can better distinguish between cases (a) and (b) by seeing whether the losses effectively decrease with the uncertainty after the try.

Learning with noisy labels
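The idea of selecting by the uncertainty of losses, rather than a single loss snapshot, can be sketched as follows. The scoring rule here (mean loss minus loss standard deviation across epochs) is an assumption for illustration only, not the paper's exact criterion:

```python
from statistics import mean, stdev

def select_with_uncertainty(loss_history, keep_fraction=0.5):
    # loss_history[t][i] is example i's loss at epoch t. Score each example
    # by mean loss minus loss std across epochs, so a large-loss example
    # whose loss fluctuates a lot (high uncertainty) still gets a try.
    n = len(loss_history[0])
    histories = [[epoch[i] for epoch in loss_history] for i in range(n)]
    scores = [mean(h) - stdev(h) for h in histories]
    n_keep = int(n * keep_fraction)
    return sorted(range(n), key=lambda i: scores[i])[:n_keep]

history = [[0.1, 2.0, 0.3, 1.0],
           [0.2, 2.1, 0.4, 0.2]]
print(select_with_uncertainty(history))  # → [3, 0]
```

Note how example 3, despite a higher mean loss than example 2, is selected because its loss varies strongly across epochs, capturing the "give large-loss but less-selected data a try" intuition.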

Instance Correction for Learning with Open-set Noisy Labels

no code implementations 1 Jun 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

Many approaches, e.g., loss correction and label correction, cannot handle such open-set noisy labels well, since they require training data and test data to share the same label space, which does not hold when learning with open-set noisy labels.

Extended T: Learning with Mixed Closed-set and Open-set Noisy Labels

no code implementations 2 Dec 2020 Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Jiankang Deng, Jiatong Li, Yinian Mao

The traditional transition matrix is limited to model closed-set label noise, where noisy training data has true class labels within the noisy label set.

Class2Simi: A New Perspective on Learning with Label Noise

no code implementations 28 Sep 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

It is worthwhile to perform the transformation: We prove that the noise rate for the noisy similarity labels is lower than that of the noisy class labels, because similarity labels themselves are robust to noise.

Part-dependent Label Noise: Towards Instance-dependent Label Noise

1 code implementation NeurIPS 2020 Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, DaCheng Tao, Masashi Sugiyama

Learning with instance-dependent label noise is challenging, because it is hard to model such real-world noise.

Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels

no code implementations 14 Jun 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

To give an affirmative answer, in this paper, we propose a framework called Class2Simi: it transforms data points with noisy class labels to data pairs with noisy similarity labels, where a similarity label denotes whether a pair shares the class label or not.

Contrastive Learning · Learning with noisy labels · +1
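The Class2Simi transformation described in the abstract is straightforward to sketch: every pair of examples gets a binary similarity label indicating whether the two (possibly noisy) class labels agree.

```python
from itertools import combinations

def class_to_simi(labels):
    # Map (possibly noisy) class labels to pairwise similarity labels:
    # 1 if the pair shares its class label, 0 otherwise.
    return {(i, j): int(labels[i] == labels[j])
            for i, j in combinations(range(len(labels)), 2)}

noisy_labels = [0, 1, 0, 2]
simi = class_to_simi(noisy_labels)
print(simi[(0, 2)], simi[(0, 1)])  # → 1 0
```

Intuitively, a single flipped class label only perturbs the pairs involving that example, and some of those pairs' similarity labels can even survive the flip unchanged, which is consistent with the paper's claim that the similarity-label noise rate is lower than the class-label noise rate.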

Multi-Class Classification from Noisy-Similarity-Labeled Data

no code implementations 16 Feb 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

We further estimate the transition matrix from only noisy data and build a novel learning system to learn a classifier which can assign noise-free class labels for instances.

Classification · General Classification · +1

Are Anchor Points Really Indispensable in Label-Noise Learning?

1 code implementation NeurIPS 2019 Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, Masashi Sugiyama

Existing theories have shown that the transition matrix can be learned by exploiting anchor points (i.e., data points that almost surely belong to a specific class).

Learning with noisy labels
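The classic anchor-point estimator that this paper questions can be sketched in a few lines: at an anchor point of clean class i, the noisy class-posterior directly gives row i of the transition matrix. The posterior values and anchor indices below are hypothetical, purely for illustration.

```python
def transition_from_anchors(noisy_posteriors, anchor_index):
    # At an anchor x of clean class i (P(clean = i | x) ≈ 1), the fitted
    # noisy class-posterior at x gives row i of the transition matrix,
    # T[i][j] = P(noisy = j | clean = i).
    return [noisy_posteriors[anchor_index[i]] for i in range(len(anchor_index))]

# Hypothetical fitted noisy posteriors for five examples, two classes.
posteriors = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5], [0.85, 0.15], [0.3, 0.7]]
anchors = [0, 1]  # example 0 anchors class 0, example 1 anchors class 1
print(transition_from_anchors(posteriors, anchors))  # → [[0.9, 0.1], [0.2, 0.8]]
```

The paper's point is precisely that this row-reading breaks down when no genuine anchor points exist in the data, motivating methods that learn the transition matrix without them.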
