Search Results for author: Bo Han

Found 185 papers, 86 papers with code

Masking: A New Perspective of Noisy Supervision

2 code implementations NeurIPS 2018 Bo Han, Jiangchao Yao, Gang Niu, Mingyuan Zhou, Ivor Tsang, Ya Zhang, Masashi Sugiyama

It is important to learn various types of classifiers given training data with noisy labels.

Ranked #42 on Image Classification on Clothing1M (using extra training data)

Image Classification

Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

5 code implementations NeurIPS 2018 Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, Masashi Sugiyama

Deep learning with noisy labels is practically challenging, as the capacity of deep models is so high that they will sooner or later completely memorize these noisy labels during training.

Learning with noisy labels Memorization
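
The co-teaching idea summarized above can be sketched in a few lines: two networks each select their small-loss (likely clean) samples in every mini-batch and hand them to the peer for its update. This is a minimal illustration under our own naming and forget-rate convention, not the paper's released code:

```python
import numpy as np

def small_loss_select(losses, forget_rate):
    """Indices of the (1 - forget_rate) fraction of samples with the
    smallest loss, treated as likely clean."""
    keep = int(len(losses) * (1.0 - forget_rate))
    return np.argsort(losses)[:keep]

def co_teaching_step(losses_a, losses_b, forget_rate):
    """Each network selects small-loss samples for its peer, which then
    updates only on those samples."""
    teach_b = small_loss_select(losses_a, forget_rate)  # A selects for B
    teach_a = small_loss_select(losses_b, forget_rate)  # B selects for A
    return teach_a, teach_b
```

With a forget rate of 0.2 on a batch of 10, each network trains on the 8 samples its peer found easiest; disagreement between the two networks is what keeps errors from accumulating in one model.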

Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs

3 code implementations 11 Feb 2022 Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, James Cheng

Despite recent success in using the invariance principle for out-of-distribution (OOD) generalization on Euclidean data (e.g., images), studies on graph data are still limited.

Drug Discovery Graph Learning +1

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger

1 code implementation ICML 2020 Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Lizhen Cui, Masashi Sugiyama, Mohan Kankanhalli

Adversarial training based on the minimax formulation is necessary for obtaining adversarial robustness of trained models.

Adversarial Robustness

Contrastive Learning with Boosted Memorization

1 code implementation 25 May 2022 Zhihan Zhou, Jiangchao Yao, Yanfeng Wang, Bo Han, Ya Zhang

Different from previous works, we explore this direction from an alternative perspective, i.e., the data perspective, and propose a novel Boosted Contrastive Learning (BCL) method.

Contrastive Learning Memorization +2

Are Anchor Points Really Indispensable in Label-Noise Learning?

1 code implementation NeurIPS 2019 Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, Masashi Sugiyama

Existing theories have shown that the transition matrix can be learned by exploiting anchor points (i.e., data points that belong to a specific class almost surely).

Learning with noisy labels
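
The anchor-point construction mentioned above admits a one-line estimator: if x is an anchor for class i (so P(Y=i|x) is close to 1), then row i of the transition matrix is simply the noisy class posterior evaluated at x. A minimal sketch, where the argmax anchor-picking heuristic is our illustrative assumption:

```python
import numpy as np

def pick_anchors(noisy_posterior):
    """Heuristic: for each class, take the example with the highest
    estimated noisy posterior as that class's anchor point."""
    return np.argmax(noisy_posterior, axis=0)

def transition_from_anchors(noisy_posterior, anchor_idx):
    """Row i of T is P(noisy label | x) evaluated at class i's anchor,
    since the anchor belongs to clean class i almost surely."""
    return noisy_posterior[anchor_idx]
```

Because each row of T is itself a posterior, the estimate is automatically row-stochastic; the paper's point is precisely that such anchor points may not exist or be identifiable in practice.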

DeepInception: Hypnotize Large Language Model to Be Jailbreaker

1 code implementation 6 Nov 2023 Xuan Li, Zhanke Zhou, Jianing Zhu, Jiangchao Yao, Tongliang Liu, Bo Han

Despite remarkable success in various applications, large language models (LLMs) are vulnerable to adversarial jailbreaks that make the safety guardrails void.

Language Modelling Large Language Model

A Survey of Label-noise Representation Learning: Past, Present and Future

1 code implementation 9 Nov 2020 Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W. Tsang, James T. Kwok, Masashi Sugiyama

Classical machine learning implicitly assumes that labels of the training data are sampled from a clean distribution, which can be too restrictive for real-world scenarios.

BIG-bench Machine Learning Learning Theory +1

Part-dependent Label Noise: Towards Instance-dependent Label Noise

1 code implementation NeurIPS 2020 Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, Dacheng Tao, Masashi Sugiyama

Learning with instance-dependent label noise is challenging, because it is hard to model such real-world noise.

Geometry-aware Instance-reweighted Adversarial Training

2 code implementations ICLR 2021 Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli

The belief was challenged by recent studies where we can maintain the robustness and improve the accuracy.

Virtual Homogeneity Learning: Defending against Data Heterogeneity in Federated Learning

1 code implementation 6 Jun 2022 Zhenheng Tang, Yonggang Zhang, Shaohuai Shi, Xin He, Bo Han, Xiaowen Chu

In federated learning (FL), model performance typically suffers from client drift induced by data heterogeneity, and mainstream works focus on correcting client drift.

Federated Learning

Detecting Machine-Generated Texts by Multi-Population Aware Optimization for Maximum Mean Discrepancy

1 code implementation 25 Feb 2024 Shuhai Zhang, Yiliao Song, Jiahao Yang, Yuanqing Li, Bo Han, Mingkui Tan

Unfortunately, it is challenging to distinguish MGTs and human-written texts because the distributional discrepancy between them is often very subtle due to the remarkable performance of LLMs.

Hallucination Sentence

Understanding and Improving Graph Injection Attack by Promoting Unnoticeability

1 code implementation ICLR 2022 Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, James Cheng

Recently, Graph Injection Attack (GIA) has emerged as a practical attack scenario on Graph Neural Networks (GNNs), where the adversary merely injects a few malicious nodes instead of modifying existing nodes or edges as in Graph Modification Attack (GMA).

Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score

1 code implementation 25 May 2023 Shuhai Zhang, Feng Liu, Jiahao Yang, Yifan Yang, Changsheng Li, Bo Han, Mingkui Tan

Last, we propose an EPS-based adversarial detection (EPS-AD) method, in which we develop EPS-based maximum mean discrepancy (MMD) as a metric to measure the discrepancy between the test sample and natural samples.

On Strengthening and Defending Graph Reconstruction Attack with Markov Chain Approximation

1 code implementation 15 Jun 2023 Zhanke Zhou, Chenyu Zhou, Xuan Li, Jiangchao Yao, Quanming Yao, Bo Han

Although powerful graph neural networks (GNNs) have boosted numerous real-world applications, the potential privacy risk is still underexplored.

Graph Reconstruction Reconstruction Attack

Understanding and Improving Early Stopping for Learning with Noisy Labels

1 code implementation NeurIPS 2021 Yingbin Bai, Erkun Yang, Bo Han, Yanhua Yang, Jiatong Li, Yinian Mao, Gang Niu, Tongliang Liu

Instead of the early stopping, which trains a whole DNN all at once, we initially train former DNN layers by optimizing the DNN with a relatively large number of epochs.

Learning with noisy labels Memorization

Combating Exacerbated Heterogeneity for Robust Models in Federated Learning

1 code implementation 1 Mar 2023 Jianing Zhu, Jiangchao Yao, Tongliang Liu, Quanming Yao, Jianliang Xu, Bo Han

Privacy and security concerns in real-world applications have led to the development of adversarially robust federated models.

Federated Learning

Reliable Adversarial Distillation with Unreliable Teachers

2 code implementations ICLR 2022 Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang

However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that teachers are also good at every adversarial data queried by students.

Adversarial Robustness

Out-of-distribution Detection with Implicit Outlier Transformation

1 code implementation 9 Mar 2023 Qizhou Wang, Junjie Ye, Feng Liu, Quanyu Dai, Marcus Kalander, Tongliang Liu, Jianye Hao, Bo Han

It leads to a min-max learning scheme -- searching to synthesize OOD data that leads to worst judgments and learning from such OOD data for uniform performance in OOD detection.

Out-of-Distribution Detection

Combating Representation Learning Disparity with Geometric Harmonization

1 code implementation NeurIPS 2023 Zhihan Zhou, Jiangchao Yao, Feng Hong, Ya Zhang, Bo Han, Yanfeng Wang

Self-supervised learning (SSL) as an effective paradigm of representation learning has achieved tremendous success on various curated datasets in diverse scenarios.

Representation Learning Self-Supervised Learning

Maximum Mean Discrepancy Test is Aware of Adversarial Attacks

2 code implementations 22 Oct 2020 Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama

However, it has been shown that the MMD test is unaware of adversarial attacks -- the MMD test failed to detect the discrepancy between natural and adversarial data.

Adversarial Attack Detection
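
For reference, the squared-MMD statistic that the two-sample-test papers above build on can be computed in a few lines. This is the standard biased estimator with a Gaussian kernel; the bandwidth below is a placeholder, not a tuned value:

```python
import numpy as np

def gaussian_gram(X, Y, sigma=1.0):
    """Gram matrix of the Gaussian (RBF) kernel between row vectors."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_biased(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y;
    near zero when the two samples share a distribution."""
    return (gaussian_gram(X, X, sigma).mean()
            + gaussian_gram(Y, Y, sigma).mean()
            - 2.0 * gaussian_gram(X, Y, sigma).mean())
```

The paper's observation is that this plain statistic fails to separate natural from adversarial data; making it "aware" requires, among other things, choosing the kernel adaptively rather than fixing sigma as above.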

Bilateral Dependency Optimization: Defending Against Model-inversion Attacks

2 code implementations 11 Jun 2022 Xiong Peng, Feng Liu, Jingfeng Zhang, Long Lan, Junjie Ye, Tongliang Liu, Bo Han

To defend against MI attacks, previous work utilizes a unilateral dependency optimization strategy, i.e., minimizing the dependency between inputs (i.e., features) and outputs (i.e., labels) while training the classifier.

Understanding and Improving Feature Learning for Out-of-Distribution Generalization

1 code implementation NeurIPS 2023 Yongqiang Chen, Wei Huang, Kaiwen Zhou, Yatao Bian, Bo Han, James Cheng

Moreover, when fed the ERM learned features to the OOD objectives, the invariant feature learning quality significantly affects the final OOD performance, as OOD objectives rarely learn new features.

Out-of-Distribution Generalization

FasTer: Fast Tensor Completion with Nonconvex Regularization

1 code implementation 23 Jul 2018 Quanming Yao, James T. Kwok, Bo Han

Due to the easy optimization, the convex overlapping nuclear norm has been popularly used for tensor completion.

Variational Imitation Learning with Diverse-quality Demonstrations

1 code implementation ICML 2020 Voot Tangkaratt, Bo Han, Mohammad Emtiyaz Khan, Masashi Sugiyama

Learning from demonstrations can be challenging when the quality of demonstrations is diverse, and even more so when the quality is unknown and there is no additional information to estimate the quality.

Continuous Control Imitation Learning +2

Meta Discovery: Learning to Discover Novel Classes given Very Limited Data

1 code implementation ICLR 2022 Haoang Chi, Feng Liu, Bo Han, Wenjing Yang, Long Lan, Tongliang Liu, Gang Niu, Mingyuan Zhou, Masashi Sugiyama

In this paper, we demystify assumptions behind NCD and find that high-level semantic features should be shared among the seen and unseen classes.

Clustering Meta-Learning +1

AdaProp: Learning Adaptive Propagation for Graph Neural Network based Knowledge Graph Reasoning

2 code implementations 30 May 2022 Yongqi Zhang, Zhanke Zhou, Quanming Yao, Xiaowen Chu, Bo Han

An important design component of GNN-based KG reasoning methods is called the propagation path, which contains a set of involved entities in each propagation step.

Knowledge Graphs

Watermarking for Out-of-distribution Detection

1 code implementation 27 Oct 2022 Qizhou Wang, Feng Liu, Yonggang Zhang, Jing Zhang, Chen Gong, Tongliang Liu, Bo Han

Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.

Out-of-Distribution Detection

NAS-LID: Efficient Neural Architecture Search with Local Intrinsic Dimension

1 code implementation 23 Nov 2022 Xin He, Jiangchao Yao, Yuxin Wang, Zhenheng Tang, Ka Chu Cheung, Simon See, Bo Han, Xiaowen Chu

One-shot neural architecture search (NAS) substantially improves the search efficiency by training one supernet to estimate the performance of every possible child architecture (i.e., subnet).

Neural Architecture Search

Instance-dependent Label-noise Learning under a Structural Causal Model

2 code implementations NeurIPS 2021 Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, Kun Zhang

In particular, we show that properly modeling the instances will contribute to the identifiability of the label noise transition matrix and thus lead to a better classifier.

Robust Weight Perturbation for Adversarial Training

1 code implementation 30 May 2022 Chaojian Yu, Bo Han, Mingming Gong, Li Shen, Shiming Ge, Bo Du, Tongliang Liu

Based on these observations, we propose a robust perturbation strategy to constrain the extent of weight perturbation.

Classification

Moderately Distributional Exploration for Domain Generalization

1 code implementation 27 Apr 2023 Rui Dai, Yonggang Zhang, Zhen Fang, Bo Han, Xinmei Tian

We show that MODE can endow models with provable generalization performance on unknown target domains.

Domain Generalization

Exploiting Class Activation Value for Partial-Label Learning

3 code implementations ICLR 2022 Fei Zhang, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Tao Qin, Masashi Sugiyama

As the first contribution, we empirically show that the class activation map (CAM), a simple technique for discriminating the learning patterns of each class in images, is surprisingly better at making accurate predictions than the model itself on selecting the true label from candidate labels.

Multi-class Classification Partial Label Learning

Understanding Robust Overfitting of Adversarial Training and Beyond

1 code implementation 17 Jun 2022 Chaojian Yu, Bo Han, Li Shen, Jun Yu, Chen Gong, Mingming Gong, Tongliang Liu

Here, we explore the causes of robust overfitting by comparing the data distributions of non-overfit (weak adversary) and overfitted (strong adversary) adversarial training, and observe that the distribution of the adversarial data generated by a weak adversary mainly contains small-loss data.

Adversarial Robustness Data Ablation

Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning

1 code implementation NeurIPS 2020 Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Jiankang Deng, Gang Niu, Masashi Sugiyama

By this intermediate class, the original transition matrix can then be factorized into the product of two easy-to-estimate transition matrices.
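
The factorization described above is simply a product of row-stochastic matrices, so the composed clean-to-noisy transition matrix remains row-stochastic. A toy numeric check, with invented numbers for illustration only:

```python
import numpy as np

# hypothetical 3-class noise: clean -> intermediate, intermediate -> noisy
T_clean_to_mid = np.array([[0.9, 0.1, 0.0],
                           [0.0, 0.8, 0.2],
                           [0.1, 0.0, 0.9]])
T_mid_to_noisy = np.array([[0.95, 0.05, 0.00],
                           [0.05, 0.90, 0.05],
                           [0.00, 0.10, 0.90]])

# composed clean -> noisy transition matrix; each factor is easier to
# estimate than the composed matrix itself
T = T_clean_to_mid @ T_mid_to_noisy
```

Each row of T sums to 1, as each row of the two factors does; the estimation gain comes from the intermediate class making both factors easier to estimate than T directly.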

Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model

1 code implementation 14 Jan 2021 Qizhou Wang, Bo Han, Tongliang Liu, Gang Niu, Jian Yang, Chen Gong

The drastic increase of data quantity often brings the severe decrease of data quality, such as incorrect label annotations, which poses a great challenge for robustly training Deep Neural Networks (DNNs).

Provably End-to-end Label-Noise Learning without Anchor Points

1 code implementation 4 Feb 2021 Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama

In label-noise learning, the transition matrix plays a key role in building statistically consistent classifiers.

Learning with noisy labels

Probabilistic Margins for Instance Reweighting in Adversarial Training

1 code implementation NeurIPS 2021 Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama

Reweighting adversarial data during training has been recently shown to improve adversarial robustness, where data closer to the current decision boundaries are regarded as more critical and given larger weights.

Adversarial Robustness

Robust Training of Federated Models with Extremely Label Deficiency

2 code implementations 22 Feb 2024 Yonggang Zhang, Zhiqin Yang, Xinmei Tian, Nannan Wang, Tongliang Liu, Bo Han

Federated semi-supervised learning (FSSL) has emerged as a powerful paradigm for collaboratively training machine learning models using distributed data with label deficiency.

NoiLIn: Improving Adversarial Training and Correcting Stereotype of Noisy Labels

1 code implementation 31 May 2021 Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Gang Niu, Lizhen Cui, Masashi Sugiyama

First, we thoroughly investigate noisy-label (NL) injection into AT's inner maximization and outer minimization, respectively, and obtain observations on when NL injection benefits AT.

Adversarial Robustness

Robust Generalization against Photon-Limited Corruptions via Worst-Case Sharpness Minimization

2 code implementations CVPR 2023 Zhuo Huang, Miaoxi Zhu, Xiaobo Xia, Li Shen, Jun Yu, Chen Gong, Bo Han, Bo Du, Tongliang Liu

Experimentally, we simulate photon-limited corruptions using CIFAR10/100 and ImageNet30 datasets and show that SharpDRO exhibits a strong generalization ability against severe corruptions and exceeds well-known baseline methods with large performance gains.

Hard Sample Matters a Lot in Zero-Shot Quantization

1 code implementation CVPR 2023 Huantong Li, Xiangmiao Wu, Fanbing Lv, Daihai Liao, Thomas H. Li, Yonggang Zhang, Bo Han, Mingkui Tan

Nonetheless, we find that the synthetic samples constructed in existing ZSQ methods can be easily fitted by models.

Quantization

Adjustment and Alignment for Unbiased Open Set Domain Adaptation

1 code implementation CVPR 2023 Wuyang Li, Jie Liu, Bo Han, Yixuan Yuan

In a nutshell, ANNA consists of Front-Door Adjustment (FDA) to correct the biased learning in the source domain and Decoupled Causal Alignment (DCA) to transfer the model unbiasedly.

Domain Adaptation Model Optimization

Enhancing One-Shot Federated Learning Through Data and Ensemble Co-Boosting

1 code implementation 23 Feb 2024 Rong Dai, Yonggang Zhang, Ang Li, Tongliang Liu, Xun Yang, Bo Han

These hard samples are then employed to promote the quality of the ensemble model by adjusting the ensembling weights for each client model.

Federated Learning

TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation

1 code implementation NeurIPS 2021 Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, William K. Cheung, James T. Kwok

To this end, we propose a target orientated hypothesis adaptation network (TOHAN) to solve the FHA problem, where we generate highly-compatible unlabeled data (i. e., an intermediate domain) to help train a target-domain classifier.

Domain Adaptation

EAGAN: Efficient Two-stage Evolutionary Architecture Search for GANs

1 code implementation 30 Nov 2021 Guohao Ying, Xin He, Bin Gao, Bo Han, Xiaowen Chu

Some recent works try to search both the generator (G) and the discriminator (D), but they suffer from the instability of GAN training.

Image Generation Neural Architecture Search +2

Label-Noise Learning with Intrinsically Long-Tailed Data

1 code implementation ICCV 2023 Yang Lu, Yiliang Zhang, Bo Han, Yiu-ming Cheung, Hanzi Wang

In this case, it is hard to distinguish clean samples from noisy samples on the intrinsic tail classes with the unknown intrinsic class distribution.

REMAST: Real-time Emotion-based Music Arrangement with Soft Transition

1 code implementation 14 May 2023 ZiHao Wang, Le Ma, Chen Zhang, Bo Han, Yunfei Xu, Yikai Wang, Xinyi Chen, HaoRong Hong, Wenbo Liu, Xinda Wu, Kejun Zhang

Music as an emotional intervention medium has important applications in scenarios such as music therapy, games, and movies.

Regularly Truncated M-estimators for Learning with Noisy Labels

1 code implementation 2 Sep 2023 Xiaobo Xia, Pengqian Lu, Chen Gong, Bo Han, Jun Yu, Tongliang Liu

However, such a procedure is arguably debatable in two respects: (a) it does not consider the bad influence of noisy labels in the selected small-loss examples; (b) it does not make good use of the discarded large-loss examples, which may be clean or carry meaningful information for generalization.

Learning with noisy labels

Combating Bilateral Edge Noise for Robust Link Prediction

1 code implementation NeurIPS 2023 Zhanke Zhou, Jiangchao Yao, Jiaxu Liu, Xiawei Guo, Quanming Yao, Li He, Liang Wang, Bo Zheng, Bo Han

To address this dilemma, we propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse.

Denoising Link Prediction +1

Modeling Adversarial Noise for Adversarial Training

1 code implementation 21 Sep 2021 Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu

Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defense against adversarial attacks.

Adversarial Defense

Improving Adversarial Robustness via Mutual Information Estimation

1 code implementation 25 Jul 2022 Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Xiaoyu Wang, Yibing Zhan, Tongliang Liu

To alleviate this negative effect, in this paper, we investigate the dependence between outputs of the target model and input adversarial samples from the perspective of information theory, and propose an adversarial defense method.

Adversarial Defense Adversarial Robustness +1

Towards Lightweight Black-Box Attacks against Deep Neural Networks

1 code implementation 29 Sep 2022 Chenghao Sun, Yonggang Zhang, Wan Chaoqun, Qizhou Wang, Ya Li, Tongliang Liu, Bo Han, Xinmei Tian

As it is hard to mitigate the approximation error with few available samples, we propose Error TransFormer (ETF) for lightweight attacks.

Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability

1 code implementation 6 Jun 2023 Jianing Zhu, Hengzhuang Li, Jiangchao Yao, Tongliang Liu, Jianliang Xu, Bo Han

Based on such insights, we propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.

Out-of-Distribution Detection

Partition Speeds Up Learning Implicit Neural Representations Based on Exponential-Increase Hypothesis

1 code implementation ICCV 2023 Ke Liu, Feng Liu, Haishuai Wang, Ning Ma, Jiajun Bu, Bo Han

Based on this fact, we introduce a simple partition mechanism to boost the performance of two INR methods for image reconstruction: one for learning INRs, and the other for learning-to-learn INRs.

Image Reconstruction Semantic Segmentation

Learning Diverse-Structured Networks for Adversarial Robustness

1 code implementation 3 Feb 2021 Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama

In adversarial training (AT), the main focus has been the objective and optimizer while the model has been less studied, so that the models being used are still those classic ones in standard training (ST).

Adversarial Robustness

Learning to Augment Distributions for Out-of-Distribution Detection

1 code implementation NeurIPS 2023 Qizhou Wang, Zhen Fang, Yonggang Zhang, Feng Liu, Yixuan Li, Bo Han

Accordingly, we propose Distributional-Augmented OOD Learning (DAL), alleviating the OOD distribution discrepancy by crafting an OOD distribution set that contains all distributions in a Wasserstein ball centered on the auxiliary OOD distribution.

Learning Theory Out-of-Distribution Detection

Fast and Reliable Evaluation of Adversarial Robustness with Minimum-Margin Attack

1 code implementation 15 Jun 2022 Ruize Gao, Jiongxiao Wang, Kaiwen Zhou, Feng Liu, Binghui Xie, Gang Niu, Bo Han, James Cheng

The AutoAttack (AA) has been the most reliable method to evaluate adversarial robustness when considerable computational resources are available.

Adversarial Robustness Computational Efficiency

MCM: Multi-condition Motion Synthesis Framework for Multi-scenario

1 code implementation 6 Sep 2023 Zeyu Ling, Bo Han, Yongkang Wong, Mohan Kankanhalli, Weidong Geng

We also introduce a Transformer-based diffusion model MWNet (DDPM-like) as our main branch that can capture the spatial complexity and inter-joint correlations in motion sequences through a channel-dimension self-attention module.

Motion Synthesis

Detecting Out-of-distribution Data through In-distribution Class Prior

1 code implementation ICML 2023 Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, Bo Han

In this paper, we show that this assumption makes the above methods incapable when the ID model is trained with class-imbalanced data.

Fortunately, by analyzing the causal relations between ID/OOD classes and features, we identify several common scenarios where the OOD-to-ID probabilities should be the ID-class-prior distribution, and propose two strategies to modify existing inference-time detection methods: 1) replace the uniform distribution with the ID-class-prior distribution if they explicitly use the uniform distribution; 2) otherwise, reweight their scores according to the similarity between the ID-class-prior distribution and the softmax outputs of the pre-trained model.

Out-of-Distribution Detection Out of Distribution (OOD) Detection
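
Strategy 2) above, scoring by the similarity between the softmax output and the ID class prior, can be sketched as below. The inner-product similarity is our simplified illustration, not the paper's exact estimator:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def prior_similarity_score(logits, class_prior):
    """Higher score -> more likely in-distribution. A sample whose
    softmax output resembles the (imbalanced) ID class prior scores
    higher than one concentrated on a rare class."""
    return softmax(logits) @ class_prior
```

Under a uniform prior this score is constant, which is exactly why detectors that implicitly assume a uniform prior break down on class-imbalanced ID data.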

Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources

1 code implementation NeurIPS 2023 Haotian Zheng, Qizhou Wang, Zhen Fang, Xiaobo Xia, Feng Liu, Tongliang Liu, Bo Han

To this end, we suggest that generated data (with mistaken OOD generation) can be used to devise an auxiliary OOD detection task to facilitate real OOD detection.

Out-of-Distribution Detection Out of Distribution (OOD) Detection +1

Federated Learning with Extremely Noisy Clients via Negative Distillation

1 code implementation 20 Dec 2023 Yang Lu, Lin Chen, Yonggang Zhang, Yiliang Zhang, Bo Han, Yiu-ming Cheung, Hanzi Wang

The model trained on noisy labels serves as a `bad teacher' in knowledge distillation, aiming to decrease the risk of providing incorrect information.

Federated Learning Knowledge Distillation

MCM: Multi-condition Motion Synthesis Framework

1 code implementation 19 Apr 2024 Zeyu Ling, Bo Han, Yongkang Wong, Han Lin, Mohan Kankanhalli, Weidong Geng

Conditional human motion synthesis (HMS) aims to generate human motion sequences that conform to specific conditions.

Motion Synthesis

Butterfly: One-step Approach towards Wildly Unsupervised Domain Adaptation

1 code implementation 19 May 2019 Feng Liu, Jie Lu, Bo Han, Gang Niu, Guangquan Zhang, Masashi Sugiyama

Hence, we consider a new, more realistic and more challenging problem setting, where classifiers have to be trained with noisy labeled data from SD and unlabeled data from TD -- we name it wildly UDA (WUDA).

Unsupervised Domain Adaptation Wildly Unsupervised Domain Adaptation

Latent Class-Conditional Noise Model

1 code implementation 19 Feb 2023 Jiangchao Yao, Bo Han, Zhihan Zhou, Ya Zhang, Ivor W. Tsang

We solve this problem by introducing a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.

Learning with noisy labels

Less is More: One-shot Subgraph Reasoning on Large-scale Knowledge Graphs

1 code implementation 15 Mar 2024 Zhanke Zhou, Yongqi Zhang, Jiangchao Yao, Quanming Yao, Bo Han

To deduce new facts on a knowledge graph (KG), a link predictor learns from the graph structure and collects local evidence to find the answer to a given query.

Knowledge Graphs Link Prediction

BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise Learning

1 code implementation 28 May 2023 Jingfeng Zhang, Bo Song, Haohan Wang, Bo Han, Tongliang Liu, Lei Liu, Masashi Sugiyama

To address the challenge posed by BadLabel, we further propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels again distinguishable.

Noise-robust Graph Learning by Estimating and Leveraging Pairwise Interactions

1 code implementation 14 Jun 2021 Xuefeng Du, Tian Bian, Yu Rong, Bo Han, Tongliang Liu, Tingyang Xu, Wenbing Huang, Yixuan Li, Junzhou Huang

This paper bridges the gap by proposing a pairwise framework for noisy node classification on graphs, which relies on the PI as a primary learning proxy in addition to the pointwise learning from the noisy node class labels.

Contrastive Learning Graph Learning +2

Federated Noisy Client Learning

1 code implementation 24 Jun 2021 Kahou Tam, Li Li, Bo Han, Chengzhong Xu, Huazhu Fu

Federated learning (FL) collaboratively trains a shared global model depending on multiple local clients, while keeping the training data decentralized in order to preserve data privacy.

Federated Learning

Harnessing Out-Of-Distribution Examples via Augmenting Content and Style

1 code implementation 7 Jul 2022 Zhuo Huang, Xiaobo Xia, Li Shen, Bo Han, Mingming Gong, Chen Gong, Tongliang Liu

Machine learning models are vulnerable to Out-Of-Distribution (OOD) examples, and such a problem has drawn much attention.

Data Augmentation Disentanglement +3

Towards out-of-distribution generalizable predictions of chemical kinetics properties

1 code implementation 4 Oct 2023 ZiHao Wang, Yongqiang Chen, Yang Duan, Weijiang Li, Bo Han, James Cheng, Hanghang Tong

Under this framework, we create comprehensive datasets to benchmark (1) the state-of-the-art ML approaches for reaction prediction in the OOD setting and (2) the state-of-the-art graph OOD methods in kinetics property prediction problems.

Property Prediction

Class-Dependent Label-Noise Learning with Cycle-Consistency Regularization Feature Space

1 code implementation NeurIPS 2022 De Cheng, Yixiong Ning, Nannan Wang, Xinbo Gao, Heng Yang, Yuxuan Du, Bo Han, Tongliang Liu

We show that the cycle-consistency regularization helps to minimize the volume of the transition matrix T indirectly without exploiting the estimated noisy class posterior, which could further encourage the estimated transition matrix T to converge to its optimal solution.

Exploring Model Dynamics for Accumulative Poisoning Discovery

1 code implementation 6 Jun 2023 Jianing Zhu, Xiawei Guo, Jiangchao Yao, Chao Du, Li He, Shuo Yuan, Tongliang Liu, Liang Wang, Bo Han

In this paper, we dive into the perspective of model dynamics and propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.

Memorization

Understanding Fairness Surrogate Functions in Algorithmic Fairness

1 code implementation 17 Oct 2023 Wei Yao, Zhanke Zhou, Zhicong Li, Bo Han, Yong Liu

To mitigate such bias while achieving comparable accuracy, a promising approach is to introduce surrogate functions of the concerned fairness definition and solve a constrained optimization problem.

Fairness

Matrix Co-completion for Multi-label Classification with Missing Features and Labels

no code implementations 23 May 2018 Miao Xu, Gang Niu, Bo Han, Ivor W. Tsang, Zhi-Hua Zhou, Masashi Sugiyama

We consider a challenging multi-label classification problem where both the feature matrix $X$ and the label matrix $Y$ have missing entries.

General Classification Matrix Completion +1

Beyond Unfolding: Exact Recovery of Latent Convex Tensor Decomposition under Reshuffling

no code implementations 22 May 2018 Chao Li, Mohammad Emtiyaz Khan, Zhun Sun, Gang Niu, Bo Han, Shengli Xie, Qibin Zhao

Exact recovery of tensor decomposition (TD) methods is a desirable property in both unsupervised learning and scientific data analysis.

Image Steganography Tensor Decomposition

Post-edit Analysis of Collective Biography Generation

no code implementations 20 Feb 2017 Bo Han, Will Radford, Anaïs Cadilhac, Art Harol, Andrew Chisholm, Ben Hachey

Text generation is increasingly common but often requires manual post-editing where high precision is critical to end users.

Text Generation

:telephone::person::sailboat::whale::okhand:; or "Call me Ishmael" - How do you translate emoji?

no code implementations 7 Nov 2016 Will Radford, Andrew Chisholm, Ben Hachey, Bo Han

We report on an exploratory analysis of Emoji Dick, a project that leverages crowdsourcing to translate Melville's Moby Dick into emoji.

Part-Of-Speech Tagging Translation +1

On the Convergence of A Family of Robust Losses for Stochastic Gradient Descent

no code implementations 5 May 2016 Bo Han, Ivor W. Tsang, Ling Chen

The convergence of Stochastic Gradient Descent (SGD) using convex loss functions has been widely studied.

HSR: L1/2 Regularized Sparse Representation for Fast Face Recognition using Hierarchical Feature Selection

no code implementations 23 Sep 2014 Bo Han, Bo He, Tingting Sun, Mengmeng Ma, Amaury Lendasse

By employing hierarchical feature selection, we can compress the scale and dimension of the global dictionary, which directly reduces the computational cost of the sparse representation that our approach is strongly rooted in.

Face Recognition feature selection +1

RMSE-ELM: Recursive Model based Selective Ensemble of Extreme Learning Machines for Robustness Improvement

no code implementations 9 Aug 2014 Bo Han, Bo He, Mengmeng Ma, Tingting Sun, Tianhong Yan, Amaury Lendasse

It is a promising framework for solving the robustness issue of ELM for high-dimensional blended data in the future.

LARSEN-ELM: Selective Ensemble of Extreme Learning Machines using LARS for Blended Data

no code implementations9 Aug 2014 Bo Han, Bo He, Rui Nian, Mengmeng Ma, Shujing Zhang, Minghui Li, Amaury Lendasse

Extreme learning machine (ELM), a neural network algorithm, has shown good performance, such as fast speed and a simple structure; however, weak robustness is an unavoidable defect of the original ELM on blended data.

Privacy-preserving Stochastic Gradual Learning

no code implementations30 Sep 2018 Bo Han, Ivor W. Tsang, Xiaokui Xiao, Ling Chen, Sai-fu Fung, Celina P. Yu

PRESTIGE bridges private updates of the primal variable (by private sampling) with the gradual curriculum learning (CL).

Privacy Preserving Stochastic Optimization

Photometric Redshift Estimation for Quasars by Integration of KNN and SVM

no code implementations8 Jan 2016 Bo Han, Hongpeng Ding, Yanxia Zhang, Yongheng Zhao

The massive photometric data collected from multiple large-scale sky surveys offer significant opportunities for measuring distances of celestial objects by photometric redshifts.

Instrumentation and Methods for Astrophysics
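The integration of KNN and SVM for redshift regression can be sketched as follows; the 50/50 blend of the two regressors, the scikit-learn estimators, and the synthetic "photometry" are illustrative assumptions, not the paper's exact scheme.

```python
# Hypothetical sketch: blend a KNN regressor and a support-vector
# regressor for photometric redshift estimation. The simple average
# and the toy photometric features are assumptions for illustration.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy stand-in for photometric magnitudes (features) and redshifts (target).
X = rng.uniform(0.0, 1.0, size=(200, 4))
z = X @ np.array([0.5, 0.3, 0.1, 0.1]) + rng.normal(0.0, 0.01, size=200)

knn = KNeighborsRegressor(n_neighbors=5).fit(X, z)
svr = SVR(kernel="rbf", C=10.0).fit(X, z)

X_test = rng.uniform(0.0, 1.0, size=(20, 4))
z_pred = 0.5 * (knn.predict(X_test) + svr.predict(X_test))  # simple blend
```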

DATELINE: Deep Plackett-Luce Model with Uncertainty Measurements

no code implementations14 Dec 2018 Bo Han

Then, we present a weighted Plackett-Luce model to solve the second issue, where the weight is a dynamic uncertainty vector measuring the worker quality.

Revisiting Sample Selection Approach to Positive-Unlabeled Learning: Turning Unlabeled Data into Positive rather than Negative

no code implementations29 Jan 2019 Miao Xu, Bingcong Li, Gang Niu, Bo Han, Masashi Sugiyama

May there be a new sample selection method that can outperform the latest importance reweighting method in the deep learning age?

Memorization

Towards Robust ResNet: A Small Step but A Giant Leap

no code implementations28 Feb 2019 Jingfeng Zhang, Bo Han, Laura Wynter, Kian Hsiang Low, Mohan Kankanhalli

Our analytical studies reveal that the step factor h in the Euler method is able to control the robustness of ResNet in both its training and generalization.

VILD: Variational Imitation Learning with Diverse-quality Demonstrations

no code implementations15 Sep 2019 Voot Tangkaratt, Bo Han, Mohammad Emtiyaz Khan, Masashi Sugiyama

However, the quality of demonstrations in reality can be diverse, since it is easier and cheaper to collect demonstrations from a mix of experts and amateurs.

Continuous Control Imitation Learning

SUM: Suboptimal Unitary Multi-task Learning Framework for Spatiotemporal Data Prediction

no code implementations11 Oct 2019 Qichen Li, Jiaxin Pei, Jianding Zhang, Bo Han

However, such a method has relatively weak performance when the number of tasks is small, and it cannot be integrated into non-linear models.

Meta-Learning Multi-Task Learning

Where is the Bottleneck of Adversarial Learning with Unlabeled Data?

no code implementations20 Nov 2019 Jingfeng Zhang, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama

Deep neural networks (DNNs) are incredibly brittle due to adversarial examples.

Learning with Multiple Complementary Labels

no code implementations ICML 2020 Lei Feng, Takuo Kaneko, Bo Han, Gang Niu, Bo An, Masashi Sugiyama

In this paper, we propose a novel problem setting to allow MCLs for each example and two ways for learning with MCLs.

Confidence Scores Make Instance-dependent Label-noise Learning Possible

no code implementations11 Jan 2020 Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama

We find that, with the help of confidence scores, the transition distribution of each instance can be approximately estimated.

Learning with noisy labels

Rethinking Class-Prior Estimation for Positive-Unlabeled Learning

no code implementations ICLR 2022 Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, Dacheng Tao

Hitherto, the distributional-assumption-free CPE methods rely on a critical assumption that the support of the positive data distribution cannot be contained in the support of the negative data distribution.


Multi-Class Classification from Noisy-Similarity-Labeled Data

no code implementations16 Feb 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

We further estimate the transition matrix from only noisy data and build a novel learning system to learn a classifier which can assign noise-free class labels for instances.

Classification General Classification +1

Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels

no code implementations14 Jun 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

To give an affirmative answer, in this paper, we propose a framework called Class2Simi: it transforms data points with noisy class labels to data pairs with noisy similarity labels, where a similarity label denotes whether a pair shares the class label or not.

Contrastive Learning Learning with noisy labels +1
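The core Class2Simi transformation can be sketched in a few lines; pairing every index pair is our choice for illustration (the paper works with pairs inside mini-batches):

```python
# Sketch of Class2Simi: points with (noisy) class labels become pairs
# with similarity labels (1 iff the pair shares a class label).
from itertools import combinations

def class_to_simi(labels):
    """Return (i, j, sim) for every index pair i < j."""
    return [(i, j, int(labels[i] == labels[j]))
            for i, j in combinations(range(len(labels)), 2)]

noisy_labels = [0, 0, 1, 2]
pairs = class_to_simi(noisy_labels)
# e.g. (0, 1, 1): both have label 0; (0, 2, 0): labels 0 and 1 differ.
```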

Provably Consistent Partial-Label Learning

no code implementations NeurIPS 2020 Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama

Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.

Multi-class Classification Partial Label Learning

ADD-Defense: Towards Defending Widespread Adversarial Examples via Perturbation-Invariant Representation

no code implementations1 Jan 2021 Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Xinbo Gao

Motivated by this observation, we propose a defense framework ADD-Defense, which extracts the invariant information called \textit{perturbation-invariant representation} (PIR) to defend against widespread adversarial examples.

Pointwise Binary Classification with Pairwise Confidence Comparisons

no code implementations5 Oct 2020 Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama

To alleviate the data requirement for training effective binary classifiers, many weakly supervised learning settings have been proposed.

Binary Classification Classification +2

Confusable Learning for Large-class Few-Shot Classification

no code implementations6 Nov 2020 Bingcong Li, Bo Han, Zhuowei Wang, Jing Jiang, Guodong Long

Specifically, our method maintains a dynamically updating confusion matrix, which analyzes confusable classes in the dataset.

Classification Few-Shot Image Classification +2

SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning

no code implementations2 Dec 2020 Zhuowei Wang, Jing Jiang, Bo Han, Lei Feng, Bo An, Gang Niu, Guodong Long

We also instantiate our framework with different combinations, which set the new state of the art on benchmark-simulated and real-world datasets with noisy labels.

Learning with noisy labels

Extended T: Learning with Mixed Closed-set and Open-set Noisy Labels

no code implementations2 Dec 2020 Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Jiankang Deng, Jiatong Li, Yinian Mao

The traditional transition matrix is limited to model closed-set label noise, where noisy training data has true class labels within the noisy label set.

Understanding the Interaction of Adversarial Training with Noisy Labels

no code implementations6 Feb 2021 Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama

A recent adversarial training (AT) study showed that the number of projected gradient descent (PGD) steps to successfully attack a point (i.e., find an adversarial example in its proximity) is an effective measure of the robustness of this point.

Learning with Group Noise

no code implementations17 Mar 2021 Qizhou Wang, Jiangchao Yao, Chen Gong, Tongliang Liu, Mingming Gong, Hongxia Yang, Bo Han

Most of the previous approaches in this area focus on the pairwise relation (causal or correlational relationship) with noise, such as learning with noisy labels.

Learning with noisy labels Relation

Device-Cloud Collaborative Learning for Recommendation

no code implementations14 Apr 2021 Jiangchao Yao, Feng Wang, Kunyang Jia, Bo Han, Jingren Zhou, Hongxia Yang

With the rapid development of storage and computing power on mobile devices, it has become critical and popular to deploy models on devices to avoid onerous communication latencies and to capture real-time features.

Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network

no code implementations27 May 2021 Shuo Yang, Erkun Yang, Bo Han, Yang Liu, Min Xu, Gang Niu, Tongliang Liu

Motivated by that classifiers mostly output Bayes optimal labels for prediction, in this paper, we study to directly model the transition from Bayes optimal labels to noisy labels (i.e., Bayes-label transition matrix (BLTM)) and learn a classifier to predict Bayes optimal labels.

Instance Correction for Learning with Open-set Noisy Labels

no code implementations1 Jun 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

Lots of approaches, e.g., loss correction and label correction, cannot handle such open-set noisy labels well, since they need training data and test data to share the same label space, which does not hold for learning with open-set noisy labels.

Sample Selection with Uncertainty of Losses for Learning with Noisy Labels

no code implementations NeurIPS 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

In this way, we also give large-loss but less selected data a try; then, we can better distinguish between the cases (a) and (b) by seeing if the losses effectively decrease with the uncertainty after the try.

Learning with noisy labels

Towards Defending against Adversarial Examples via Attack-Invariant Features

no code implementations9 Jun 2021 Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Chunlei Peng, Xinbo Gao

However, given the continuously evolving attacks, models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.

Adversarial Robustness

Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training

no code implementations10 Jun 2021 Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Jun Yu, Xiaoyu Wang, Tongliang Liu

However, pre-processing methods may suffer from the robustness degradation effect, in which the defense reduces rather than improves the adversarial robustness of a target model in a white-box setting.

Adversarial Defense Adversarial Robustness

KRADA: Known-region-aware Domain Alignment for Open-set Domain Adaptation in Semantic Segmentation

1 code implementation11 Jun 2021 Chenhong Zhou, Feng Liu, Chen Gong, Rongfei Zeng, Tongliang Liu, William K. Cheung, Bo Han

However, in an open world, the unlabeled test images probably contain unknown categories and have different distributions from the labeled images.

Domain Adaptation Segmentation +1

Local Reweighting for Adversarial Training

no code implementations30 Jun 2021 Ruize Gao, Feng Liu, Kaiwen Zhou, Gang Niu, Bo Han, James Cheng

However, when tested on attacks different from the given attack simulated in training, the robustness may drop significantly (e.g., even worse than no reweighting).

MC$^2$-SF: Slow-Fast Learning for Mobile-Cloud Collaborative Recommendation

no code implementations25 Sep 2021 Zeyuan Chen, Jiangchao Yao, Feng Wang, Kunyang Jia, Bo Han, Wei Zhang, Hongxia Yang

With the hardware development of mobile devices, it is possible to build recommendation models on the mobile side to utilize fine-grained features and real-time feedback.

Can Label-Noise Transition Matrix Help to Improve Sample Selection and Label Correction?

no code implementations29 Sep 2021 Yu Yao, Xuefeng Li, Tongliang Liu, Alan Blair, Mingming Gong, Bo Han, Gang Niu, Masashi Sugiyama

Existing methods for learning with noisy labels can be generally divided into two categories: (1) sample selection and label correction based on the memorization effect of neural networks; (2) loss correction with the transition matrix.

Learning with noisy labels Memorization
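The second category, loss correction with a transition matrix, can be illustrated by a minimal forward-correction sketch; the symmetric noise matrix and the toy posterior below are assumptions for demonstration:

```python
# Sketch of forward loss correction with a transition matrix T, where
# T[i, j] = P(noisy label j | clean label i). The clean-class posterior
# is pushed through T before the cross-entropy against the noisy label,
# so training on noisy labels stays statistically consistent.
import numpy as np

def forward_corrected_ce(p_clean, noisy_label, T):
    p_noisy = p_clean @ T            # mix clean posteriors by the noise process
    return -np.log(p_noisy[noisy_label])

rho = 0.2                            # symmetric flip rate over 3 classes
T = np.full((3, 3), rho / 2)
np.fill_diagonal(T, 1 - rho)

p = np.array([0.8, 0.1, 0.1])        # model believes class 0
loss = forward_corrected_ce(p, 0, T)
```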

Modeling Adversarial Noise for Adversarial Defense

no code implementations29 Sep 2021 Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu

Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defense against adversarial attacks.

Adversarial Defense

$\alpha$-Weighted Federated Adversarial Training

no code implementations29 Sep 2021 Jianing Zhu, Jiangchao Yao, Tongliang Liu, Kunyang Jia, Jingren Zhou, Bo Han, Hongxia Yang

Federated Adversarial Training (FAT) helps us address data privacy and governance issues while maintaining model robustness to adversarial attack.

Adversarial Attack Federated Learning

PI-GNN: Towards Robust Semi-Supervised Node Classification against Noisy Labels

no code implementations29 Sep 2021 Xuefeng Du, Tian Bian, Yu Rong, Bo Han, Tongliang Liu, Tingyang Xu, Wenbing Huang, Junzhou Huang

Semi-supervised node classification on graphs is a fundamental problem in graph mining that uses a small set of labeled nodes and many unlabeled nodes for training, so that its performance is quite sensitive to the quality of the node labels.

Graph Mining Node Classification

Co-variance: Tackling Noisy Labels with Sample Selection by Emphasizing High-variance Examples

no code implementations29 Sep 2021 Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu

The sample selection approach is popular in learning with noisy labels, which tends to select potentially clean data out of noisy data for robust training.

Learning with noisy labels
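A minimal sketch of selecting high-variance examples, assuming variance is computed over per-epoch predicted probabilities of the given label (the paper's exact statistic and threshold may differ):

```python
# Sketch of variance-based sample selection: track each example's
# predicted probability for its given label across epochs and keep
# the highest-variance ones. The statistic and ratio are assumptions.
import numpy as np

def select_high_variance(prob_history, keep_ratio=0.5):
    """prob_history: (epochs, n_samples) array of predicted probabilities.
    Returns indices of the keep_ratio fraction with the largest variance."""
    variances = prob_history.var(axis=0)
    k = int(len(variances) * keep_ratio)
    return np.argsort(variances)[::-1][:k]

# Sample 0 is stably confident, sample 1 oscillates, sample 2 drifts.
history = np.array([[0.9, 0.2, 0.5],
                    [0.9, 0.8, 0.6],
                    [0.9, 0.1, 0.7]])
chosen = select_high_variance(history, keep_ratio=1 / 3)  # -> [1]
```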

Universal Semi-Supervised Learning

no code implementations NeurIPS 2021 Zhuo Huang, Chao Xue, Bo Han, Jian Yang, Chen Gong

Universal Semi-Supervised Learning (UniSSL) aims to solve the open-set problem where both the class distribution (i.e., class set) and feature distribution (i.e., feature domain) are different between labeled dataset and unlabeled dataset.

Domain Adaptation

Pumpout: A Meta Approach for Robustly Training Deep Neural Networks with Noisy Labels

no code implementations27 Sep 2018 Bo Han, Gang Niu, Jiangchao Yao, Xingrui Yu, Miao Xu, Ivor Tsang, Masashi Sugiyama

To handle these issues, by using the memorization effects of deep neural networks, we may train deep neural networks on the whole dataset for only the first few iterations.

Memorization

Wildly Unsupervised Domain Adaptation and Its Powerful and Efficient Solution

no code implementations25 Sep 2019 Feng Liu, Jie Lu, Bo Han, Gang Niu, Guangquan Zhang, Masashi Sugiyama

Hence, we consider a new, more realistic, and more challenging problem setting, where classifiers have to be trained with noisy labeled data from SD and unlabeled data from TD; we name it wildly UDA (WUDA).

Unsupervised Domain Adaptation Wildly Unsupervised Domain Adaptation

Class2Simi: A New Perspective on Learning with Label Noise

no code implementations28 Sep 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

It is worthwhile to perform the transformation: We prove that the noise rate for the noisy similarity labels is lower than that of the noisy class labels, because similarity labels themselves are robust to noise.
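The claim that similarity labels are more robust to noise than class labels can be checked numerically; the symmetric noise model, class count, and flip rate below are illustrative assumptions:

```python
# Numeric check (toy, seeded): flip class labels with rate rho, then
# measure how often the induced pairwise similarity label changes.
import random

random.seed(0)
num_classes, rho, n = 10, 0.3, 2000

def flip(y):
    """Symmetric label noise: keep y with prob 1 - rho, else flip."""
    if random.random() < rho:
        return random.choice([c for c in range(num_classes) if c != y])
    return y

clean = [random.randrange(num_classes) for _ in range(n)]
noisy = [flip(y) for y in clean]

# Compare similarity labels on disjoint consecutive pairs.
pairs = list(zip(range(0, n, 2), range(1, n, 2)))
changed = sum((clean[i] == clean[j]) != (noisy[i] == noisy[j])
              for i, j in pairs)
simi_noise_rate = changed / len(pairs)   # empirically well below rho
```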

DeepMix: Mobility-aware, Lightweight, and Hybrid 3D Object Detection for Headsets

no code implementations15 Jan 2022 Yongjie Guan, Xueyu Hou, Nan Wu, Bo Han, Tao Han

In this paper, we propose DeepMix, a mobility-aware, lightweight, and hybrid 3D object detection framework for improving the user experience of AR/MR on mobile headsets.

3D Object Detection Mixed Reality +2

Low-rank Tensor Learning with Nonconvex Overlapped Nuclear Norm Regularization

no code implementations6 May 2022 Quanming Yao, Yaqing Wang, Bo Han, James Kwok

While the optimization problem is nonconvex and nonsmooth, we show that its critical points still have good statistical performance on the tensor completion problem.

Pluralistic Image Completion with Probabilistic Mixture-of-Experts

no code implementations18 May 2022 Xiaobo Xia, Wenhao Yang, Jie Ren, Yewen Li, Yibing Zhan, Bo Han, Tongliang Liu

Second, the constraints for diversity are designed to be task-agnostic, which prevents the constraints from working well.

FedNoiL: A Simple Two-Level Sampling Method for Federated Learning with Noisy Labels

no code implementations20 May 2022 Zhuowei Wang, Tianyi Zhou, Guodong Long, Bo Han, Jing Jiang

Federated learning (FL) aims at training a global model on the server side while the training data are collected and located at the local devices.

Federated Learning Learning with noisy labels

Counterfactual Fairness with Partially Known Causal Graph

no code implementations27 May 2022 Aoqi Zuo, Susan Wei, Tongliang Liu, Bo Han, Kun Zhang, Mingming Gong

Interestingly, we find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided: the sensitive attributes do not have ancestors in the causal graph.

BIG-bench Machine Learning Causal Inference +2

Instance-Dependent Label-Noise Learning with Manifold-Regularized Transition Matrix Estimation

no code implementations CVPR 2022 De Cheng, Tongliang Liu, Yixiong Ning, Nannan Wang, Bo Han, Gang Niu, Xinbo Gao, Masashi Sugiyama

In label-noise learning, estimating the transition matrix has attracted more and more attention as the matrix plays an important role in building statistically consistent classifiers.

Efficient Private SCO for Heavy-Tailed Data via Clipping

no code implementations27 Jun 2022 Chenhan Jin, Kaiwen Zhou, Bo Han, Ming-Chang Yang, James Cheng

In this paper, we resolve this issue and derive the first high-probability bounds for the private stochastic method with clipping.

Device-Cloud Collaborative Recommendation via Meta Controller

no code implementations7 Jul 2022 Jiangchao Yao, Feng Wang, Xichen Ding, Shaohu Chen, Bo Han, Jingren Zhou, Hongxia Yang

To overcome this issue, we propose a meta controller to dynamically manage the collaboration between the on-device recommender and the cloud-based recommender, and introduce a novel efficient sample construction from the causal perspective to solve the dataset absence issue of meta controller.

counterfactual Device-Cloud Collaboration

Strength-Adaptive Adversarial Training

no code implementations4 Oct 2022 Chaojian Yu, Dawei Zhou, Li Shen, Jun Yu, Bo Han, Mingming Gong, Nannan Wang, Tongliang Liu

Firstly, applying a pre-specified perturbation budget on networks of various model capacities will yield divergent degrees of robustness disparity between natural and robust accuracies, which deviates from the desideratum of a robust network.

Adversarial Robustness Scheduling

Is Out-of-Distribution Detection Learnable?

no code implementations26 Oct 2022 Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, Feng Liu

Based on this observation, we next give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios.

Learning Theory Out-of-Distribution Detection +2

Zero3D: Semantic-Driven Multi-Category 3D Shape Generation

no code implementations31 Jan 2023 Bo Han, Yitong Fu, Yixuan Shen

Semantic-driven 3D shape generation aims to generate 3D objects conditioned on text.

3D Shape Generation

Towards Zero-trust Security for the Metaverse

no code implementations17 Feb 2023 Ruizhi Cheng, Songqing Chen, Bo Han

By focusing on immersive interaction among users, the burgeoning Metaverse can be viewed as a natural extension of existing social media.

Federated Learning

Exploit CAM by itself: Complementary Learning System for Weakly Supervised Semantic Segmentation

no code implementations4 Mar 2023 Jiren Mai, Fei Zhang, Junjie Ye, Marcus Kalander, Xian Zhang, Wankou Yang, Tongliang Liu, Bo Han

Motivated by this simple but effective learning pattern, we propose a General-Specific Learning Mechanism (GSLM) to explicitly drive a coarse-grained CAM to a fine-grained pseudo mask.

General Knowledge Hippocampus +2

Federated Semi-Supervised Learning with Annotation Heterogeneity

no code implementations4 Mar 2023 Xinyi Shang, Gang Huang, Yang Lu, Jian Lou, Bo Han, Yiu-ming Cheung, Hanzi Wang

Federated Semi-Supervised Learning (FSSL) aims to learn a global model from different clients in an environment with both labeled and unlabeled data.

Towards Efficient Task-Driven Model Reprogramming with Foundation Models

no code implementations5 Apr 2023 Shoukai Xu, Jiangchao Yao, Ran Luo, Shuhai Zhang, Zihao Lian, Mingkui Tan, Bo Han, YaoWei Wang

Moreover, the data used for pretraining foundation models are usually invisible and very different from the target data of downstream tasks.

Knowledge Distillation Transfer Learning

SketchFFusion: Sketch-guided image editing with diffusion model

no code implementations6 Apr 2023 Weihang Mao, Bo Han, ZiHao Wang

Sketch-guided image editing aims to achieve local fine-tuning of the image based on the sketch information provided by the user, while maintaining the original status of the unedited areas.

Making Binary Classification from Multiple Unlabeled Datasets Almost Free of Supervision

no code implementations12 Jun 2023 Yuhao Wu, Xiaobo Xia, Jun Yu, Bo Han, Gang Niu, Masashi Sugiyama, Tongliang Liu

Training a classifier by exploiting a huge amount of supervised data is expensive or even prohibitive in situations where the labeling cost is high.

Binary Classification Pseudo Label

A Universal Unbiased Method for Classification from Aggregate Observations

no code implementations20 Jun 2023 Zixi Wei, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Xiaofeng Zhu, Heng Tao Shen

This motivates the study on classification from aggregate observations (CFAO), where the supervision is provided to groups of instances, instead of individual instances.

Classification Multiple Instance Learning

Unleashing the Potential of Regularization Strategies in Learning with Noisy Labels

no code implementations11 Jul 2023 Hui Kang, Sheng Liu, Huaxi Huang, Jun Yu, Bo Han, Dadong Wang, Tongliang Liu

In recent years, research on learning with noisy labels has focused on devising novel algorithms that can achieve robustness to noisy training labels while generalizing to clean data.

Learning with noisy labels

Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation

no code implementations12 Jul 2023 Ruijiang Dong, Feng Liu, Haoang Chi, Tongliang Liu, Mingming Gong, Gang Niu, Masashi Sugiyama, Bo Han

In this paper, we propose a diversity-enhancing generative network (DEG-Net) for the FHA problem, which can generate diverse unlabeled data with the help of a kernel independence measure: the Hilbert-Schmidt independence criterion (HSIC).

Exploiting Counter-Examples for Active Learning with Partial labels

no code implementations14 Jul 2023 Fei Zhang, Yunjie Ye, Lei Feng, Zhongwen Rao, Jieming Zhu, Marcus Kalander, Chen Gong, Jianye Hao, Bo Han

In this setting, an oracle annotates the query samples with partial labels, relieving the oracle of the demanding accurate-labeling process.

Active Learning

Holistic Label Correction for Noisy Multi-Label Classification

no code implementations ICCV 2023 Xiaobo Xia, Jiankang Deng, Wei Bao, Yuxuan Du, Bo Han, Shiguang Shan, Tongliang Liu

The issues are that we do not understand why label dependence is helpful in the problem, and how to learn and utilize label dependence using only training data with noisy multiple labels.

Classification Memorization +1

Combating Noisy Labels with Sample Selection by Mining High-Discrepancy Examples

no code implementations ICCV 2023 Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu

As selected data have high discrepancies in probabilities, the divergence of two networks can be maintained by training on such data.

Learning with noisy labels

On the Onset of Robust Overfitting in Adversarial Training

no code implementations1 Oct 2023 Chaojian Yu, Xiaolong Shi, Jun Yu, Bo Han, Tongliang Liu

Adversarial Training (AT) is a widely-used algorithm for building robust neural networks, but it suffers from the issue of robust overfitting, the fundamental mechanism of which remains unclear.

Adversarial Robustness Data Augmentation

On the Over-Memorization During Natural, Robust and Catastrophic Overfitting

1 code implementation13 Oct 2023 Runqi Lin, Chaojian Yu, Bo Han, Tongliang Liu

In this work, we adopt a unified perspective by solely focusing on natural patterns to explore different types of overfitting.

Memorization

Neural Atoms: Propagating Long-range Interaction in Molecular Graphs through Efficient Communication Channel

no code implementations2 Nov 2023 Xuan Li, Zhanke Zhou, Jiangchao Yao, Yu Rong, Lu Zhang, Bo Han

To tackle this issue, we propose a method to abstract the collective information of atomic groups into a few $\textit{Neural Atoms}$ by implicitly projecting the atoms of a molecule.

Drug Discovery

Positional Information Matters for Invariant In-Context Learning: A Case Study of Simple Function Classes

no code implementations30 Nov 2023 Yongqiang Chen, Binghui Xie, Kaiwen Zhou, Bo Han, Yatao Bian, James Cheng

Surprisingly, DeepSet outperforms transformers across a variety of distribution shifts, implying that preserving permutation invariance symmetry to input demonstrations is crucial for OOD ICL.

In-Context Learning

Mixture Data for Training Cannot Ensure Out-of-distribution Generalization

no code implementations25 Dec 2023 Songming Zhang, Yuxiao Luo, Qizhou Wang, Haoang Chi, Xiaofeng Chen, Bo Han, Jinyan Li

Deep neural networks often face generalization problems to handle out-of-distribution (OOD) data, and there remains a notable theoretical gap between the contributing factors and their respective impacts.

Data Augmentation Out-of-Distribution Generalization

Enhancing Evolving Domain Generalization through Dynamic Latent Representations

no code implementations16 Jan 2024 Binghui Xie, Yongqiang Chen, Jiaqi Wang, Kaiwen Zhou, Bo Han, Wei Meng, James Cheng

However, in non-stationary tasks where new domains evolve in an underlying continuous structure, such as time, merely extracting the invariant features is insufficient for generalization to the evolving new domains.

Evolving Domain Generalization

Enhancing Neural Subset Selection: Integrating Background Information into Set Representations

no code implementations5 Feb 2024 Binghui Xie, Yatao Bian, Kaiwen Zhou, Yongqiang Chen, Peilin Zhao, Bo Han, Wei Meng, James Cheng

Neural subset selection tasks, such as compound selection in AI-aided drug discovery, have become increasingly pivotal across diverse applications.

Drug Discovery

Discovery of the Hidden World with Large Language Models

no code implementations6 Feb 2024 Chenxi Liu, Yongqiang Chen, Tongliang Liu, Mingming Gong, James Cheng, Bo Han, Kun Zhang

The rise of large language models (LLMs) that are trained to learn rich knowledge from the massive observations of the world, provides a new opportunity to assist with discovering high-level hidden variables from the raw observational data.

Causal Discovery

FedImpro: Measuring and Improving Client Update in Federated Learning

no code implementations10 Feb 2024 Zhenheng Tang, Yonggang Zhang, Shaohuai Shi, Xinmei Tian, Tongliang Liu, Bo Han, Xiaowen Chu

First, we analyze the generalization contribution of local training and conclude that this generalization contribution is bounded by the conditional Wasserstein distance between the data distribution of different clients.

Federated Learning

Mitigating Label Noise on Graph via Topological Sample Selection

no code implementations4 Mar 2024 Yuhao Wu, Jiangchao Yao, Xiaobo Xia, Jun Yu, Ruxin Wang, Bo Han, Tongliang Liu

Despite the success of the carefully-annotated benchmarks, the effectiveness of existing graph neural networks (GNNs) can be considerably impaired in practice when the real-world graph data is noisily labeled.

Learning with noisy labels

NoiseDiffusion: Correcting Noise for Image Interpolation with Diffusion Models beyond Spherical Linear Interpolation

1 code implementation13 Mar 2024 Pengfei Zheng, Yonggang Zhang, Zhen Fang, Tongliang Liu, Defu Lian, Bo Han

Hence, NoiseDiffusion performs interpolation within the noisy image space and injects raw images into these noisy counterparts to address the challenge of information loss.

Denoising

Do CLIPs Always Generalize Better than ImageNet Models?

no code implementations18 Mar 2024 Qizhou Wang, Yong Lin, Yongqiang Chen, Ludwig Schmidt, Bo Han, Tong Zhang

The performance drops from the common to counter groups quantify the reliance of models on spurious features (i.e., backgrounds) to predict the animals.

Tackling Noisy Labels with Network Parameter Additive Decomposition

no code implementations20 Mar 2024 Jingyi Wang, Xiaobo Xia, Long Lan, Xinghao Wu, Jun Yu, Wenjing Yang, Bo Han, Tongliang Liu

Given data with noisy labels, over-parameterized deep networks suffer from overfitting to the mislabeled data, resulting in poor generalization.

Memorization

Few-Shot Adversarial Prompt Learning on Vision-Language Models

no code implementations21 Mar 2024 Yiwei Zhou, Xiaobo Xia, Zhiwei Lin, Bo Han, Tongliang Liu

The vulnerability of deep neural networks to imperceptible adversarial perturbations has attracted widespread attention.

Adversarial Robustness Adversarial Text

Negative Label Guided OOD Detection with Pretrained Vision-Language Models

1 code implementation29 Mar 2024 Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, Bo Han

In this paper, we propose a novel post hoc OOD detection method, called NegLabel, which takes a vast number of negative labels from extensive corpus databases.

Out of Distribution (OOD) Detection

On the Learnability of Out-of-distribution Detection

no code implementations7 Apr 2024 Zhen Fang, Yixuan Li, Feng Liu, Bo Han, Jie Lu

Based on this observation, we next give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios.

Learning Theory Out-of-Distribution Detection +2
