Search Results for author: Gang Niu

Found 118 papers, 54 papers with code

Analysis and Improvement of Policy Gradient Estimation

no code implementations NeurIPS 2011 Tingting Zhao, Hirotaka Hachiya, Gang Niu, Masashi Sugiyama

We also theoretically show that PGPE with the optimal baseline is preferable to REINFORCE with the optimal baseline in terms of the variance of gradient estimates.

Policy Gradient Methods reinforcement-learning +1

Semi-Supervised Information-Maximization Clustering

no code implementations 30 Apr 2013 Daniele Calandriello, Gang Niu, Masashi Sugiyama

Semi-supervised clustering aims to introduce prior knowledge in the decision process of a clustering algorithm.

Clustering

Transductive Learning with Multi-class Volume Approximation

no code implementations 3 Feb 2014 Gang Niu, Bo Dai, Marthinus Christoffel du Plessis, Masashi Sugiyama

Given a hypothesis space, the large volume principle by Vladimir Vapnik prioritizes equivalence classes according to their volume in the hypothesis space.

Transductive Learning

Analysis of Learning from Positive and Unlabeled Data

no code implementations NeurIPS 2014 Marthinus C. Du Plessis, Gang Niu, Masashi Sugiyama

We next analyze the excess risk when the class prior is estimated from data, and show that the classification accuracy is not sensitive to class prior estimation if the unlabeled data is dominated by the positive data (this is naturally satisfied in inlier-based outlier detection because inliers are dominant in the unlabeled dataset).

General Classification Outlier Detection

Non-Gaussian Component Analysis with Log-Density Gradient Estimation

no code implementations 28 Jan 2016 Hiroaki Sasaki, Gang Niu, Masashi Sugiyama

Non-Gaussian component analysis (NGCA) is aimed at identifying a linear subspace such that the projected data follows a non-Gaussian distribution.

Whitening-Free Least-Squares Non-Gaussian Component Analysis

1 code implementation 3 Mar 2016 Hiroaki Shiino, Hiroaki Sasaki, Gang Niu, Masashi Sugiyama

Non-Gaussian component analysis (NGCA) is an unsupervised linear dimension reduction method that extracts low-dimensional non-Gaussian "signals" from high-dimensional data contaminated with Gaussian noise.

Dimensionality Reduction

Semi-Supervised Classification Based on Classification from Positive and Unlabeled Data

no code implementations ICML 2017 Tomoya Sakai, Marthinus Christoffel du Plessis, Gang Niu, Masashi Sugiyama

Most of the semi-supervised classification methods developed so far use unlabeled data for regularization purposes under particular distributional assumptions such as the cluster assumption.

Classification General Classification

Class-prior Estimation for Learning from Positive and Unlabeled Data

no code implementations 5 Nov 2016 Marthinus C. du Plessis, Gang Niu, Masashi Sugiyama

Under the assumption that an additional labeled dataset is available, the class prior can be estimated by fitting a mixture of class-wise data distributions to the unlabeled data distribution.

Does Distributionally Robust Supervised Learning Give Robust Classifiers?

no code implementations ICML 2018 Weihua Hu, Gang Niu, Issei Sato, Masashi Sugiyama

Since the DRSL is explicitly formulated for a distribution shift scenario, we naturally expect it to give a robust classifier that can aggressively handle shifted distributions.

BIG-bench Machine Learning General Classification

Positive-Unlabeled Learning with Non-Negative Risk Estimator

1 code implementation NeurIPS 2017 Ryuichi Kiryo, Gang Niu, Marthinus C. Du Plessis, Masashi Sugiyama

From only positive (P) and unlabeled (U) data, a binary classifier could be trained with PU learning, in which the state of the art is unbiased PU learning.
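For concreteness, here is a minimal PyTorch-style sketch of a non-negative PU risk of the kind described above; the function name, the sigmoid surrogate loss, and the argument names (`g_p`, `g_u`, `prior`) are illustrative assumptions, not the paper's reference implementation.

```python
import torch

def nnpu_risk(g_p, g_u, prior, loss=lambda z: torch.sigmoid(-z)):
    """Non-negative PU risk estimate (sketch).

    g_p:   model outputs on positive data
    g_u:   model outputs on unlabeled data
    prior: class prior pi_p = P(y = +1), assumed known or estimated elsewhere
    loss:  surrogate loss evaluated at the margin z = y * g(x)
    """
    positive_risk = prior * loss(g_p).mean()
    # risk of the negative class, estimated from U with the positive part subtracted
    negative_risk = loss(-g_u).mean() - prior * loss(-g_p).mean()
    # clipping this term at zero is the non-negative correction
    return positive_risk + torch.clamp(negative_risk, min=0.0)
```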

Semi-Supervised AUC Optimization based on Positive-Unlabeled Learning

no code implementations 4 May 2017 Tomoya Sakai, Gang Niu, Masashi Sugiyama

Maximizing the area under the receiver operating characteristic curve (AUC) is a standard approach to imbalanced classification.

imbalanced classification

Learning from Complementary Labels

1 code implementation NeurIPS 2017 Takashi Ishida, Gang Niu, Weihua Hu, Masashi Sugiyama

Collecting complementary labels would be less laborious than collecting ordinary labels, since users do not have to carefully choose the correct class from a long list of candidate classes.

Classification General Classification +1

Mode-Seeking Clustering and Density Ridge Estimation via Direct Estimation of Density-Derivative-Ratios

no code implementations 6 Jul 2017 Hiroaki Sasaki, Takafumi Kanamori, Aapo Hyvärinen, Gang Niu, Masashi Sugiyama

Based on the proposed estimator, novel methods both for mode-seeking clustering and density ridge estimation are developed, and the respective convergence rates to the mode and ridge of the underlying density are also established.

Clustering Density Estimation

Information-Theoretic Representation Learning for Positive-Unlabeled Classification

no code implementations 15 Oct 2017 Tomoya Sakai, Gang Niu, Masashi Sugiyama

Recent advances in weakly supervised classification allow us to train a classifier only from positive and unlabeled (PU) data.

Classification Dimensionality Reduction +3

Classification from Pairwise Similarity and Unlabeled Data

2 code implementations ICML 2018 Han Bao, Gang Niu, Masashi Sugiyama

Supervised learning needs a huge amount of labeled data, which can be a major bottleneck in situations where there are privacy concerns or the labeling cost is high.

Classification General Classification +1

Active Feature Acquisition with Supervised Matrix Completion

no code implementations 15 Feb 2018 Sheng-Jun Huang, Miao Xu, Ming-Kun Xie, Masashi Sugiyama, Gang Niu, Songcan Chen

Missing features are a serious problem in many applications; they may lead to low-quality training data and further significantly degrade the learning performance.

Matrix Completion

Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

5 code implementations NeurIPS 2018 Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama

Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they can totally memorize these noisy labels sooner or later during training.

Learning with noisy labels Memorization
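A rough sketch of one co-teaching update follows; the interface (`net1`, `net2`, their optimizers, and a `keep_ratio` giving the fraction of small-loss samples treated as clean) is assumed for illustration and is not the released implementation.

```python
import torch
import torch.nn.functional as F

def co_teaching_step(net1, net2, opt1, opt2, x, y, keep_ratio):
    """One co-teaching update: each network selects its small-loss samples,
    and those samples are used to update the *other* network."""
    loss1 = F.cross_entropy(net1(x), y, reduction="none")
    loss2 = F.cross_entropy(net2(x), y, reduction="none")
    k = max(1, int(keep_ratio * len(y)))
    idx1 = torch.argsort(loss1)[:k]  # small-loss samples according to net1
    idx2 = torch.argsort(loss2)[:k]  # small-loss samples according to net2

    opt1.zero_grad()
    F.cross_entropy(net1(x[idx2]), y[idx2]).backward()  # net2 teaches net1
    opt1.step()

    opt2.zero_grad()
    F.cross_entropy(net2(x[idx1]), y[idx1]).backward()  # net1 teaches net2
    opt2.step()
```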

Masking: A New Perspective of Noisy Supervision

2 code implementations NeurIPS 2018 Bo Han, Jiangchao Yao, Gang Niu, Mingyuan Zhou, Ivor Tsang, Ya Zhang, Masashi Sugiyama

It is important to learn various types of classifiers given training data with noisy labels.

Ranked #42 on Image Classification on Clothing1M (using extra training data)

Image Classification

Beyond Unfolding: Exact Recovery of Latent Convex Tensor Decomposition under Reshuffling

no code implementations 22 May 2018 Chao Li, Mohammad Emtiyaz Khan, Zhun Sun, Gang Niu, Bo Han, Shengli Xie, Qibin Zhao

Exact recovery of tensor decomposition (TD) methods is a desirable property in both unsupervised learning and scientific data analysis.

Image Steganography Tensor Decomposition

Matrix Co-completion for Multi-label Classification with Missing Features and Labels

no code implementations 23 May 2018 Miao Xu, Gang Niu, Bo Han, Ivor W. Tsang, Zhi-Hua Zhou, Masashi Sugiyama

We consider a challenging multi-label classification problem where both the feature matrix $X$ and the label matrix $Y$ have missing entries.

General Classification Matrix Completion +1

On the Minimal Supervision for Training Any Binary Classifier from Only Unlabeled Data

1 code implementation ICLR 2019 Nan Lu, Gang Niu, Aditya Krishna Menon, Masashi Sugiyama

In this paper, we study training arbitrary (from linear to deep) binary classifiers from only unlabeled (U) data by ERM.

Alternate Estimation of a Classifier and the Class-Prior from Positive and Unlabeled Data

no code implementations 15 Sep 2018 Masahiro Kato, Liyuan Xu, Gang Niu, Masashi Sugiyama

In this paper, we propose a novel unified approach to estimating the class-prior and training a classifier alternately.

Pumpout: A Meta Approach for Robustly Training Deep Neural Networks with Noisy Labels

no code implementations 27 Sep 2018 Bo Han, Gang Niu, Jiangchao Yao, Xingrui Yu, Miao Xu, Ivor Tsang, Masashi Sugiyama

To handle these issues, by using the memorization effect of deep neural networks, we may train deep neural networks on the whole dataset for only the first few iterations.

Memorization

Classification from Positive, Unlabeled and Biased Negative Data

1 code implementation ICLR 2019 Yu-Guan Hsieh, Gang Niu, Masashi Sugiyama

In binary classification, there are situations where negative (N) data are too diverse to be fully labeled and we often resort to positive-unlabeled (PU) learning in these scenarios.

Binary Classification Classification +1

Complementary-Label Learning for Arbitrary Losses and Models

1 code implementation Proceedings of the 36th International Conference on Machine Learning, 2019 Takashi Ishida, Gang Niu, Aditya Krishna Menon, Masashi Sugiyama

In contrast to the standard classification paradigm where the true class is given to each training pattern, complementary-label learning only uses training patterns each equipped with a complementary label, which only specifies one of the classes that the pattern does not belong to.

General Classification Image Classification
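Under the uniform complementary-label assumption, the unbiased risk estimator in this line of work takes the form "sum of the loss over all classes minus (K - 1) times the loss on the complementary label". A hedged sketch with cross-entropy as the base loss (tensor shapes and names are assumptions):

```python
import torch.nn.functional as F

def complementary_unbiased_risk(logits, comp_labels):
    """Unbiased risk for uniformly drawn complementary labels (sketch).

    logits:      [n, K] model outputs
    comp_labels: [n] complementary labels (classes the examples do NOT belong to)
    """
    k = logits.size(1)
    log_prob = F.log_softmax(logits, dim=1)
    loss_all = -log_prob.sum(dim=1)                                # sum_j l(f(x), j)
    loss_comp = -log_prob.gather(1, comp_labels.view(-1, 1)).squeeze(1)
    return (loss_all - (k - 1) * loss_comp).mean()
```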

Revisiting Sample Selection Approach to Positive-Unlabeled Learning: Turning Unlabeled Data into Positive rather than Negative

no code implementations 29 Jan 2019 Miao Xu, Bingcong Li, Gang Niu, Bo Han, Masashi Sugiyama

Might there be a new sample selection method that can outperform the latest importance-reweighting method in the deep learning age?

Memorization

A Pseudo-Label Method for Coarse-to-Fine Multi-Label Learning with Limited Supervision

no code implementations ICLR Workshop LLD 2019 Cheng-Yu Hsieh, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

To address the need, we propose a special weakly supervised MLL problem that not only focuses on the situation of limited fine-grained supervision but also leverages the hierarchical relationship between the coarse concepts and the fine-grained ones.

Meta-Learning Multi-Label Learning +1

Butterfly: One-step Approach towards Wildly Unsupervised Domain Adaptation

1 code implementation 19 May 2019 Feng Liu, Jie Lu, Bo Han, Gang Niu, Guangquan Zhang, Masashi Sugiyama

Hence, we consider a new, more realistic and more challenging problem setting, where classifiers have to be trained with noisy labeled data from SD and unlabeled data from TD -- we name it wildly UDA (WUDA).

Unsupervised Domain Adaptation Wildly Unsupervised Domain Adaptation

Fast and Robust Rank Aggregation against Model Misspecification

1 code implementation 29 May 2019 Yuangang Pan, WeiJie Chen, Gang Niu, Ivor W. Tsang, Masashi Sugiyama

Specifically, the properties of our CoarsenRank are summarized as follows: (1) CoarsenRank is designed for mild model misspecification, which assumes that ideal preferences (consistent with the model assumption) exist in a neighborhood of the actual preferences.

Bayesian Inference

Uncoupled Regression from Pairwise Comparison Data

1 code implementation NeurIPS 2019 Liyuan Xu, Junya Honda, Gang Niu, Masashi Sugiyama

We propose two practical methods for uncoupled regression from pairwise comparison data and show that the learned regression model converges to the optimal model with the optimal parametric convergence rate when the target variable is uniformly distributed.

Learning-To-Rank regression

Are Anchor Points Really Indispensable in Label-Noise Learning?

1 code implementation NeurIPS 2019 Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, Masashi Sugiyama

Existing theories have shown that the transition matrix can be learned by exploiting anchor points (i.e., data points that belong to a specific class almost surely).

Learning with noisy labels
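An estimated transition matrix is typically consumed via forward loss correction; the sketch below shows that standard plug-in step only, not the paper's anchor-point-free estimation procedure (names and the row-stochastic convention are assumptions).

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, T):
    """Forward correction with a transition matrix T, where
    T[i, j] = P(noisy label = j | true label = i)."""
    clean_posterior = F.softmax(logits, dim=1)   # estimated P(true label | x)
    noisy_posterior = clean_posterior @ T        # implied P(noisy label | x)
    return F.nll_loss(torch.log(noisy_posterior + 1e-12), noisy_labels)
```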

Direction Matters: On Influence-Preserving Graph Summarization and Max-cut Principle for Directed Graphs

no code implementations 22 Jul 2019 Wenkai Xu, Gang Niu, Aapo Hyvärinen, Masashi Sugiyama

On the other hand, compressing the vertices while preserving the directed edge information provides a way to learn the small-scale representation of a directed graph.

Clustering

Wildly Unsupervised Domain Adaptation and Its Powerful and Efficient Solution

no code implementations 25 Sep 2019 Feng Liu, Jie Lu, Bo Han, Gang Niu, Guangquan Zhang, Masashi Sugiyama

Hence, we consider a new, more realistic and more challenging problem setting, where classifiers have to be trained with noisy labeled data from SD and unlabeled data from TD---we name it wildly UDA (WUDA).

Unsupervised Domain Adaptation Wildly Unsupervised Domain Adaptation

Mitigating Overfitting in Supervised Classification from Two Unlabeled Datasets: A Consistent Risk Correction Approach

no code implementations 20 Oct 2019 Nan Lu, Tianyi Zhang, Gang Niu, Masashi Sugiyama

The recently proposed unlabeled-unlabeled (UU) classification method allows us to train a binary classifier only from two unlabeled datasets with different class priors.

Classification General Classification

Scalable Evaluation and Improvement of Document Set Expansion via Neural Positive-Unlabeled Learning

1 code implementation EACL 2021 Alon Jacovi, Gang Niu, Yoav Goldberg, Masashi Sugiyama

We consider the situation in which a user has collected a small set of documents on a cohesive topic, and they want to retrieve additional documents on this topic from a large collection.

Information Retrieval Retrieval

Where is the Bottleneck of Adversarial Learning with Unlabeled Data?

no code implementations 20 Nov 2019 Jingfeng Zhang, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama

Deep neural networks (DNNs) are incredibly brittle due to adversarial examples.

Learning with Multiple Complementary Labels

no code implementations ICML 2020 Lei Feng, Takuo Kaneko, Bo Han, Gang Niu, Bo An, Masashi Sugiyama

In this paper, we propose a novel problem setting to allow MCLs for each example and two ways for learning with MCLs.

Confidence Scores Make Instance-dependent Label-noise Learning Possible

no code implementations 11 Jan 2020 Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama

We find that, with the help of confidence scores, the transition distribution of each instance can be approximately estimated.

Learning with noisy labels

Rethinking Class-Prior Estimation for Positive-Unlabeled Learning

no code implementations ICLR 2022 Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, DaCheng Tao

Hitherto, the distributional-assumption-free CPE methods rely on a critical assumption that the support of the positive data distribution cannot be contained in the support of the negative data distribution.

Multi-Class Classification from Noisy-Similarity-Labeled Data

no code implementations 16 Feb 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

We further estimate the transition matrix from only noisy data and build a novel learning system to learn a classifier which can assign noise-free class labels for instances.

Classification General Classification +1

Progressive Identification of True Labels for Partial-Label Learning

1 code implementation ICML 2020 Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, Masashi Sugiyama

Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.

Partial Label Learning Stochastic Optimization +1
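A condensed sketch of the progressive-identification idea: the loss is weighted over candidate labels only, and the weights are re-estimated from the model's own predictions renormalized within each candidate set (the paper's exact weighting schedule is omitted; shapes and names are assumptions).

```python
import torch.nn.functional as F

def pll_weighted_loss(logits, candidate_mask, weights):
    """candidate_mask: [n, K] 0/1 float mask of candidate labels;
    weights: [n, K] current label weights, zero outside the candidate set."""
    log_prob = F.log_softmax(logits, dim=1)
    return -(weights * candidate_mask * log_prob).sum(dim=1).mean()

def update_weights(logits, candidate_mask):
    """Progressively re-estimate label weights from the model's predictions,
    renormalized within each candidate set."""
    prob = F.softmax(logits, dim=1) * candidate_mask
    return prob / prob.sum(dim=1, keepdim=True).clamp_min(1e-12)
```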

Do We Need Zero Training Loss After Achieving Zero Training Error?

1 code implementation ICML 2020 Takashi Ishida, Ikko Yamane, Tomoya Sakai, Gang Niu, Masashi Sugiyama

We experimentally show that flooding improves performance and, as a byproduct, induces a double descent curve of the test loss.

Memorization
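Flooding itself is a one-line change to the training objective: keep the loss hovering around a flood level b instead of driving it to zero. A minimal sketch (the flood level in the usage comment is an arbitrary illustrative value):

```python
def flooded(loss, b):
    """Flooding: once the training loss falls below the flood level b,
    the gradient direction flips and pushes it back up, so the loss
    oscillates around b instead of reaching zero."""
    return abs(loss - b) + b

# usage with any per-batch loss, e.g.: objective = flooded(ce_loss, b=0.02)
```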

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger

1 code implementation ICML 2020 Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan Kankanhalli

Adversarial training based on the minimax formulation is necessary for obtaining adversarial robustness of trained models.

Adversarial Robustness
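The "attacks which do not kill training" idea replaces full-strength PGD in the inner maximization with an early-stopped attack. A simplified sketch is below; it stops at the batch level once the perturbed batch already fools the model, whereas the paper controls stopping per example with a step schedule (all names are assumptions).

```python
import torch
import torch.nn.functional as F

def early_stopped_pgd(model, x, y, eps, step_size, num_steps):
    """PGD whose inner maximization stops early once the perturbed inputs
    are already misclassified (batch-level simplification)."""
    x_adv = x.clone().detach()
    for _ in range(num_steps):
        with torch.no_grad():
            if model(x_adv).argmax(dim=1).ne(y).all():
                break  # every example already fools the model
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        # project back into the epsilon ball and the valid pixel range
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0.0, 1.0).detach()
    return x_adv
```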

Rethinking Importance Weighting for Deep Learning under Distribution Shift

1 code implementation NeurIPS 2020 Tongtong Fang, Nan Lu, Gang Niu, Masashi Sugiyama

Under distribution shift (DS) where the training data distribution differs from the test one, a powerful technique is importance weighting (IW) which handles DS in two separate steps: weight estimation (WE) estimates the test-over-training density ratio and weighted classification (WC) trains the classifier from weighted training data.
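The pipeline above separates weight estimation (WE) from weighted classification (WC). A minimal sketch of the WC step, with the test-over-training density ratio assumed to be produced by a separate WE step (names are assumptions):

```python
import torch.nn.functional as F

def weighted_classification_loss(logits, labels, density_ratio):
    """Weighted classification: each example is weighted by an estimate of
    p_test(x) / p_train(x) supplied by the weight-estimation step."""
    per_example = F.cross_entropy(logits, labels, reduction="none")
    return (density_ratio * per_example).mean()
```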

Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning

1 code implementation NeurIPS 2020 Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Jiankang Deng, Gang Niu, Masashi Sugiyama

By this intermediate class, the original transition matrix can then be factorized into the product of two easy-to-estimate transition matrices.

Part-dependent Label Noise: Towards Instance-dependent Label Noise

1 code implementation NeurIPS 2020 Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, DaCheng Tao, Masashi Sugiyama

Learning with instance-dependent label noise is challenging, because it is hard to model such real-world noise.

Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels

no code implementations 14 Jun 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

To give an affirmative answer, in this paper, we propose a framework called Class2Simi: it transforms data points with noisy class labels to data pairs with noisy similarity labels, where a similarity label denotes whether a pair shares the class label or not.

Contrastive Learning Learning with noisy labels +1
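The transformation itself is simple: a pair of (possibly noisy) class labels becomes a similarity label indicating whether the two labels agree. A minimal sketch (how pairs are formed within a mini-batch is left out and is an assumption of this illustration):

```python
def class_to_simi(labels_a, labels_b):
    """Class2Simi-style transform: the similarity label is 1 when the two
    (noisy) class labels agree and 0 otherwise."""
    return (labels_a == labels_b).long()
```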

Unbiased Risk Estimators Can Mislead: A Case Study of Learning with Complementary Labels

no code implementations ICML 2020 Yu-Ting Chou, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

In weakly supervised learning, the unbiased risk estimator (URE) is a powerful tool for training classifiers when training and test data are drawn from different distributions.

Weakly-supervised Learning

Provably Consistent Partial-Label Learning

no code implementations NeurIPS 2020 Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama

Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.

Multi-class Classification Partial Label Learning

Class2Simi: A New Perspective on Learning with Label Noise

no code implementations 28 Sep 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

It is worthwhile to perform the transformation: We prove that the noise rate for the noisy similarity labels is lower than that of the noisy class labels, because similarity labels themselves are robust to noise.

Geometry-aware Instance-reweighted Adversarial Training

2 code implementations ICLR 2021 Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli

The belief was challenged by recent studies where we can maintain the robustness and improve the accuracy.

Pointwise Binary Classification with Pairwise Confidence Comparisons

no code implementations 5 Oct 2020 Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama

To alleviate the data requirement for training effective binary classifiers, many weakly supervised learning settings have been proposed.

Binary Classification Classification +2

Maximum Mean Discrepancy Test is Aware of Adversarial Attacks

2 code implementations 22 Oct 2020 Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama

However, it has been shown that the MMD test is unaware of adversarial attacks -- the MMD test failed to detect the discrepancy between natural and adversarial data.

Adversarial Attack Detection
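For reference, a generic (biased) estimate of squared MMD with a Gaussian kernel is sketched below; the paper's semantic-aware deep kernel and its optimization are not reproduced, and the bandwidth `sigma` is an assumed hyperparameter.

```python
import torch

def gaussian_kernel(a, b, sigma):
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y."""
    k_xx = gaussian_kernel(x, x, sigma).mean()
    k_yy = gaussian_kernel(y, y, sigma).mean()
    k_xy = gaussian_kernel(x, y, sigma).mean()
    return k_xx + k_yy - 2 * k_xy
```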

A Survey of Label-noise Representation Learning: Past, Present and Future

1 code implementation 9 Nov 2020 Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W. Tsang, James T. Kwok, Masashi Sugiyama

Classical machine learning implicitly assumes that labels of the training data are sampled from a clean distribution, which can be too restrictive for real-world scenarios.

BIG-bench Machine Learning Learning Theory +1

SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning

no code implementations 2 Dec 2020 Zhuowei Wang, Jing Jiang, Bo Han, Lei Feng, Bo An, Gang Niu, Guodong Long

We also instantiate our framework with different combinations, which set the new state of the art on benchmark-simulated and real-world datasets with noisy labels.

Learning with noisy labels

On the Role of Pre-training for Meta Few-Shot Learning

no code implementations 1 Jan 2021 Chia-You Chen, Hsuan-Tien Lin, Gang Niu, Masashi Sugiyama

One is to (pre-)train a classifier with examples from known classes, and then transfer the pre-trained classifier to unknown classes using the new examples.

Disentanglement Few-Shot Learning

Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model

1 code implementation 14 Jan 2021 Qizhou Wang, Bo Han, Tongliang Liu, Gang Niu, Jian Yang, Chen Gong

The drastic increase of data quantity often brings the severe decrease of data quality, such as incorrect label annotations, which poses a great challenge for robustly training Deep Neural Networks (DNNs).

Binary Classification from Multiple Unlabeled Datasets via Surrogate Set Classification

1 code implementation 1 Feb 2021 Nan Lu, Shida Lei, Gang Niu, Issei Sato, Masashi Sugiyama

SSC can be solved by a standard (multi-class) classification method, and we use the SSC solution to obtain the final binary classifier through a certain linear-fractional transformation.

Binary Classification Classification +2

Learning Diverse-Structured Networks for Adversarial Robustness

1 code implementation 3 Feb 2021 Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama

In adversarial training (AT), the main focus has been on the objective and the optimizer, while the model has been less studied, so the models being used are still the classic ones from standard training (ST).

Adversarial Robustness

Provably End-to-end Label-Noise Learning without Anchor Points

1 code implementation 4 Feb 2021 Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama

In label-noise learning, the transition matrix plays a key role in building statistically consistent classifiers.

Learning with noisy labels

Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization

1 code implementation 4 Feb 2021 Yivan Zhang, Gang Niu, Masashi Sugiyama

To estimate the transition matrix from noisy data, existing methods often need to estimate the noisy class-posterior, which could be unreliable due to the overconfidence of neural networks.

Weakly Supervised Classification

Understanding the Interaction of Adversarial Training with Noisy Labels

no code implementations 6 Feb 2021 Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama

A recent adversarial training (AT) study showed that the number of projected gradient descent (PGD) steps to successfully attack a point (i.e., find an adversarial example in its proximity) is an effective measure of the robustness of this point.

Meta Discovery: Learning to Discover Novel Classes given Very Limited Data

1 code implementation ICLR 2022 Haoang Chi, Feng Liu, Bo Han, Wenjing Yang, Long Lan, Tongliang Liu, Gang Niu, Mingyuan Zhou, Masashi Sugiyama

In this paper, we demystify assumptions behind NCD and find that high-level semantic features should be shared among the seen and unseen classes.

Clustering Meta-Learning +1

CIFS: Improving Adversarial Robustness of CNNs via Channel-wise Importance-based Feature Selection

2 code implementations 10 Feb 2021 Hanshu Yan, Jingfeng Zhang, Gang Niu, Jiashi Feng, Vincent Y. F. Tan, Masashi Sugiyama

By comparing non-robust (normally trained) and robustified (adversarially trained) models, we observe that adversarial training (AT) robustifies CNNs by aligning the channel-wise activations of adversarial data with those of their natural counterparts.

Adversarial Robustness feature selection

Learning from Similarity-Confidence Data

no code implementations 13 Feb 2021 Yuzhou Cao, Lei Feng, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama

Weakly supervised learning has drawn considerable attention recently to reduce the expensive time and labor consumption of labeling massive data.

Weakly-supervised Learning

Guided Interpolation for Adversarial Training

no code implementations 15 Feb 2021 Chen Chen, Jingfeng Zhang, Xilie Xu, Tianlei Hu, Gang Niu, Gang Chen, Masashi Sugiyama

To enhance adversarial robustness, adversarial training learns deep neural networks on the adversarial variants generated by their natural data.

Adversarial Robustness

Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network

no code implementations 27 May 2021 Shuo Yang, Erkun Yang, Bo Han, Yang Liu, Min Xu, Gang Niu, Tongliang Liu

Motivated by the fact that classifiers mostly output Bayes optimal labels for prediction, in this paper we study how to directly model the transition from Bayes optimal labels to noisy labels (i.e., the Bayes-label transition matrix (BLTM)) and learn a classifier to predict Bayes optimal labels.

NoiLIn: Improving Adversarial Training and Correcting Stereotype of Noisy Labels

1 code implementation 31 May 2021 Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Gang Niu, Lizhen Cui, Masashi Sugiyama

First, we thoroughly investigate noisy-label (NL) injection into AT's inner maximization and outer minimization, respectively, and obtain observations on when NL injection benefits AT.

Adversarial Robustness

Instance Correction for Learning with Open-set Noisy Labels

no code implementations 1 Jun 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

Many approaches, e.g., loss correction and label correction, cannot handle such open-set noisy labels well, since they require the training data and test data to share the same label space, which does not hold for learning with open-set noisy labels.

Sample Selection with Uncertainty of Losses for Learning with Noisy Labels

no code implementations NeurIPS 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

In this way, we also give large-loss but less selected data a try; then, we can better distinguish between the cases (a) and (b) by seeing if the losses effectively decrease with the uncertainty after the try.

Learning with noisy labels

Reliable Adversarial Distillation with Unreliable Teachers

2 code implementations ICLR 2022 Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang

However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that teachers are also good at every adversarial data queried by students.

Adversarial Robustness

On the Robustness of Average Losses for Partial-Label Learning

no code implementations 11 Jun 2021 Jiaqi Lv, Biao Liu, Lei Feng, Ning Xu, Miao Xu, Bo An, Gang Niu, Xin Geng, Masashi Sugiyama

Partial-label learning (PLL) utilizes instances with PLs, where a PL includes several candidate labels but only one is the true label (TL).

Partial Label Learning Weakly Supervised Classification

Probabilistic Margins for Instance Reweighting in Adversarial Training

1 code implementation NeurIPS 2021 Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama

Reweighting adversarial data during training has been recently shown to improve adversarial robustness, where data closer to the current decision boundaries are regarded as more critical and given larger weights.

Adversarial Robustness

Multi-Class Classification from Single-Class Data with Confidences

no code implementations 16 Jun 2021 Yuzhou Cao, Lei Feng, Senlin Shu, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama

We show that without any assumptions on the loss functions, models, and optimizers, we can successfully learn a multi-class classifier from only data of a single class with a rigorous consistency guarantee when confidences (i.e., the class-posterior probabilities for all the classes) are available.

Classification Multi-class Classification

Local Reweighting for Adversarial Training

no code implementations 30 Jun 2021 Ruize Gao, Feng Liu, Kaiwen Zhou, Gang Niu, Bo Han, James Cheng

However, when tested on attacks different from the given attack simulated in training, the robustness may drop significantly (e.g., even worse than no reweighting).

Understanding and Improving Early Stopping for Learning with Noisy Labels

1 code implementation NeurIPS 2021 Yingbin Bai, Erkun Yang, Bo Han, Yanhua Yang, Jiatong Li, Yinian Mao, Gang Niu, Tongliang Liu

Instead of early stopping, which trains the whole DNN all at once, we initially train the earlier DNN layers by optimizing the DNN for a relatively large number of epochs.

Learning with noisy labels Memorization

Instance-dependent Label-noise Learning under a Structural Causal Model

2 code implementations NeurIPS 2021 Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, Kun Zhang

In particular, we show that properly modeling the instances will contribute to the identifiability of the label noise transition matrix and thus lead to a better classifier.

Does Adversarial Robustness Really Imply Backdoor Vulnerability?

no code implementations 29 Sep 2021 Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Shu-Tao Xia, Gang Niu, Masashi Sugiyama

Based on thorough experiments, we find that such trade-off ignores the interactions between the perturbation budget of adversarial training and the magnitude of the backdoor trigger.

Adversarial Robustness

Understanding Generalized Label Smoothing when Learning with Noisy Labels

no code implementations 29 Sep 2021 Jiaheng Wei, Hangyu Liu, Tongliang Liu, Gang Niu, Yang Liu

It was shown that LS serves as a regularizer for training data with hard labels and therefore improves the generalization of the model.

Learning with noisy labels
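Generalized label smoothing keeps the usual smoothed target (1 - eps) * one_hot + eps / K but allows the smoothing rate eps to be negative (negative label smoothing). A minimal sketch (function and argument names are assumptions):

```python
import torch.nn.functional as F

def generalized_label_smoothing_loss(logits, labels, eps):
    """Cross entropy against a target smoothed with rate eps, where eps
    may be negative under the generalized formulation."""
    num_classes = logits.size(1)
    one_hot = F.one_hot(labels, num_classes).float()
    target = (1.0 - eps) * one_hot + eps / num_classes
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```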

Active Refinement for Multi-Label Learning: A Pseudo-Label Approach

no code implementations 29 Sep 2021 Cheng-Yu Hsieh, Wei-I Lin, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

The goal of multi-label learning (MLL) is to associate a given instance with its relevant labels from a set of concepts.

Active Learning Multi-Label Learning +1

Contrastive Label Disambiguation for Partial Label Learning

1 code implementation ICLR 2022 Haobo Wang, Ruixuan Xiao, Sharon Li, Lei Feng, Gang Niu, Gang Chen, Junbo Zhao

Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity.

Contrastive Learning Partial Label Learning +2

Exploiting Class Activation Value for Partial-Label Learning

3 code implementations ICLR 2022 Fei Zhang, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Tao Qin, Masashi Sugiyama

As the first contribution, we empirically show that the class activation map (CAM), a simple technique for discriminating the learning patterns of each class in images, is surprisingly better than the model itself at selecting the true label from the candidate labels.

Multi-class Classification Partial Label Learning

Can Label-Noise Transition Matrix Help to Improve Sample Selection and Label Correction?

no code implementations 29 Sep 2021 Yu Yao, Xuefeng Li, Tongliang Liu, Alan Blair, Mingming Gong, Bo Han, Gang Niu, Masashi Sugiyama

Existing methods for learning with noisy labels can be generally divided into two categories: (1) sample selection and label correction based on the memorization effect of neural networks; (2) loss correction with the transition matrix.

Learning with noisy labels Memorization

Unsupervised Federated Learning is Possible

no code implementations ICLR 2022 Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama

We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients, a modified model is trained by supervised FL, and the wanted model is recovered from the modified model.

Federated Learning

Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations

2 code implementations ICLR 2022 Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, Yang Liu

These observations require us to rethink the treatment of noisy labels, and we hope the availability of these two datasets would facilitate the development and evaluation of future learning with noisy label solutions.

Benchmarking Learning with noisy labels +1

PiCO+: Contrastive Label Disambiguation for Robust Partial Label Learning

1 code implementation 22 Jan 2022 Haobo Wang, Ruixuan Xiao, Yixuan Li, Lei Feng, Gang Niu, Gang Chen, Junbo Zhao

Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity.

Contrastive Learning Partial Label Learning +2

On the Effectiveness of Adversarial Training against Backdoor Attacks

no code implementations 22 Feb 2022 Yinghua Gao, Dongxian Wu, Jingfeng Zhang, Guanhao Gan, Shu-Tao Xia, Gang Niu, Masashi Sugiyama

To explore whether adversarial training could defend against backdoor attacks or not, we conduct extensive experiments across different threat models and perturbation budgets, and find the threat model in adversarial training matters.

Federated Learning from Only Unlabeled Data with Class-Conditional-Sharing Clients

1 code implementation 7 Apr 2022 Nan Lu, Zhao Wang, Xiaoxiao Li, Gang Niu, Qi Dou, Masashi Sugiyama

We propose federation of unsupervised learning (FedUL), where the unlabeled data are transformed into surrogate labeled data for each of the clients, a modified model is trained by supervised FL, and the wanted model is recovered from the modified model.

Federated Learning

Instance-Dependent Label-Noise Learning with Manifold-Regularized Transition Matrix Estimation

no code implementations CVPR 2022 De Cheng, Tongliang Liu, Yixiong Ning, Nannan Wang, Bo Han, Gang Niu, Xinbo Gao, Masashi Sugiyama

In label-noise learning, estimating the transition matrix has attracted more and more attention as the matrix plays an important role in building statistically consistent classifiers.

Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack

1 code implementation 15 Jun 2022 Ruize Gao, Jiongxiao Wang, Kaiwen Zhou, Feng Liu, Binghui Xie, Gang Niu, Bo Han, James Cheng

The AutoAttack (AA) has been the most reliable method to evaluate adversarial robustness when considerable computational resources are available.

Adversarial Robustness Computational Efficiency

FedMT: Federated Learning with Mixed-type Labels

no code implementations 5 Oct 2022 Qiong Zhang, Jing Peng, Xin Zhang, Aline Talhouk, Gang Niu, Xiaoxiao Li

In federated learning (FL), classifiers (e.g., deep networks) are trained on datasets from multiple data centers without exchanging data across them, which improves the sample efficiency.

Federated Learning Vocal Bursts Type Prediction

Generalized Consistent Multi-Class Classification with Rejection to be Compatible with Arbitrary Losses

2 code implementations Conference 2022 Yuzhou Cao, Tianchi Cai, Lei Feng, Lihong Gu, Jinjie Gu, Bo An, Gang Niu, Masashi Sugiyama

Classification with rejection (CwR) refrains from making a prediction to avoid critical misclassification when encountering test samples that are difficult to classify.

Classification Multi-class Classification

Mitigating Memorization of Noisy Labels by Clipping the Model Prediction

no code implementations 8 Dec 2022 Hongxin Wei, Huiping Zhuang, Renchunzi Xie, Lei Feng, Gang Niu, Bo An, Yixuan Li

In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks.

Memorization

Fairness Improves Learning from Noisily Labeled Long-Tailed Data

no code implementations 22 Mar 2023 Jiaheng Wei, Zhaowei Zhu, Gang Niu, Tongliang Liu, Sijia Liu, Masashi Sugiyama, Yang Liu

Both long-tailed and noisily labeled data frequently appear in real-world applications and impose significant challenges for learning.

Fairness

Towards Effective Visual Representations for Partial-Label Learning

1 code implementation CVPR 2023 Shiyu Xia, Jiaqi Lv, Ning Xu, Gang Niu, Xin Geng

Under partial-label learning (PLL) where, for each training instance, only a set of ambiguous candidate labels containing the unknown true label is accessible, contrastive learning has recently boosted the performance of PLL on vision tasks, attributed to representations learned by contrasting the same/different classes of entities.

Contrastive Learning Image Classification +3

Enhancing Label Sharing Efficiency in Complementary-Label Learning with Label Augmentation

no code implementations 15 May 2023 Wei-I Lin, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama

Our analysis reveals that the efficiency of implicit label sharing is closely related to the performance of existing CLL models.

Weakly-supervised Learning

Making Binary Classification from Multiple Unlabeled Datasets Almost Free of Supervision

no code implementations 12 Jun 2023 Yuhao Wu, Xiaobo Xia, Jun Yu, Bo Han, Gang Niu, Masashi Sugiyama, Tongliang Liu

Training a classifier by exploiting a huge amount of supervised data is expensive or even prohibitive in situations where the labeling cost is high.

Binary Classification Pseudo Label

A Universal Unbiased Method for Classification from Aggregate Observations

no code implementations 20 Jun 2023 Zixi Wei, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Xiaofeng Zhu, Heng Tao Shen

This motivates the study on classification from aggregate observations (CFAO), where the supervision is provided to groups of instances, instead of individual instances.

Classification Multiple Instance Learning

Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation

no code implementations 12 Jul 2023 Ruijiang Dong, Feng Liu, Haoang Chi, Tongliang Liu, Mingming Gong, Gang Niu, Masashi Sugiyama, Bo Han

In this paper, we propose a diversity-enhancing generative network (DEG-Net) for the FHA problem, which can generate diverse unlabeled data with the help of a kernel independence measure: the Hilbert-Schmidt independence criterion (HSIC).

Multi-Label Knowledge Distillation

1 code implementation ICCV 2023 Penghui Yang, Ming-Kun Xie, Chen-Chen Zong, Lei Feng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang

Existing knowledge distillation methods typically work by imparting the knowledge of output logits or intermediate feature maps from the teacher network to the student network, which is very successful in multi-class single-label learning.

Binary Classification Knowledge Distillation +1

Atom-Motif Contrastive Transformer for Molecular Property Prediction

no code implementations 11 Oct 2023 Wentao Yu, Shuo Chen, Chen Gong, Gang Niu, Masashi Sugiyama

As motifs in a molecule are significant patterns that are of great importance for determining molecular properties (e.g., toxicity and solubility), overlooking motif interactions inevitably hinders the effectiveness of MPP.

Molecular Property Prediction Property Prediction

The Selected-completely-at-random Complementary Label is a Practical Weak Supervision for Multi-class Classification

no code implementations 27 Nov 2023 Wei Wang, Takashi Ishida, Yu-Jie Zhang, Gang Niu, Masashi Sugiyama

Complementary-label learning is a weakly supervised learning problem in which each training example is associated with one or multiple complementary labels indicating the classes to which it does not belong.

Binary Classification Multi-class Classification +1

Direct Distillation between Different Domains

no code implementations 12 Jan 2024 Jialiang Tang, Shuo Chen, Gang Niu, Hongyuan Zhu, Joey Tianyi Zhou, Chen Gong, Masashi Sugiyama

Then, we build a fusion-activation mechanism to transfer the valuable domain-invariant knowledge to the student network, while simultaneously encouraging the adapter within the teacher network to learn the domain-specific knowledge of the target data.

Domain Adaptation Knowledge Distillation

Generating Chain-of-Thoughts with a Direct Pairwise-Comparison Approach to Searching for the Most Promising Intermediate Thought

no code implementations 10 Feb 2024 Zhen-Yu Zhang, Siwei Han, Huaxiu Yao, Gang Niu, Masashi Sugiyama

In this paper, motivated by Vapnik's principle, we propose a novel comparison-based CoT generation algorithm that directly identifies the most promising thoughts with the noisy feedback from the LLM.

Language Modelling Large Language Model

Counterfactual Reasoning for Multi-Label Image Classification via Patching-Based Training

no code implementations 9 Apr 2024 Ming-Kun Xie, Jia-Hao Xiao, Pei Peng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang

In this paper, we provide a causal inference framework to show that the correlative features caused by the target object and its co-occurring objects can be regarded as a mediator, which has both positive and negative impacts on model predictions.

Causal Inference counterfactual +3
