no code implementations • ICML 2020 • Quanming Yao, Hansi Yang, Bo Han, Gang Niu, James Kwok
Sample selection approaches are popular in robust learning from noisy labels.
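A minimal sketch of the small-loss selection idea behind such approaches, in PyTorch. The function name and the fixed `keep_ratio` are illustrative assumptions; practical methods schedule the kept fraction against an estimated noise rate.

```python
import torch
import torch.nn.functional as F

def select_small_loss(logits, noisy_labels, keep_ratio=0.8):
    """Keep the fraction of samples with the smallest per-sample loss.

    Sketch of the small-loss criterion used by sample-selection methods;
    in practice keep_ratio is scheduled against an estimated noise rate.
    """
    losses = F.cross_entropy(logits, noisy_labels, reduction="none")
    num_keep = max(1, int(keep_ratio * losses.numel()))
    return torch.argsort(losses)[:num_keep]  # indices of likely-clean samples
```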
1 code implementation • ICML 2020 • Voot Tangkaratt, Bo Han, Mohammad Emtiyaz Khan, Masashi Sugiyama
Learning from demonstrations can be challenging when the quality of demonstrations is diverse, and even more so when the quality is unknown and there is no additional information to estimate the quality.
no code implementations • 13 Sep 2024 • Hangyu Li, Yihan Xu, Jiangchao Yao, Nannan Wang, Xinbo Gao, Bo Han
Then, we transform the facial expression representation into a neutral representation by simulating the difference between the text embeddings of the textual facial expression and the textual neutral expression.
Facial Expression Recognition (FER)
no code implementations • 20 Jul 2024 • Bo Han, Heqing Zou, Haoyang Li, Guangcong Wang, Chng Eng Siong
The cascaded conditional diffusion model decomposes the complex talking editing task into two flexible generation tasks, which provides a generalizable talking-face representation, seamless audio-visual transitions, and identity-preserved faces on a small dataset.
no code implementations • 4 Jul 2024 • Yang Wei, Shuo Chen, Shanshan Ye, Bo Han, Chen Gong
To address the challenge, we propose a novel unified learning framework called "Feature and Label Recovery" (FLR) to combat the hybrid noise from the perspective of data recovery, where we concurrently reconstruct both the feature matrix and the label matrix of input data.
no code implementations • 13 Jun 2024 • Qizhou Wang, Bo Han, Puning Yang, Jianing Zhu, Tongliang Liu, Masashi Sugiyama
The compelling goal of eradicating undesirable data behaviors, while preserving usual model functioning, underscores the significance of machine unlearning within the domain of large language models (LLMs).
no code implementations • 12 Jun 2024 • Jianing Zhu, Bo Han, Jiangchao Yao, Jianliang Xu, Gang Niu, Masashi Sugiyama
Previous studies showed that class-wise unlearning is successful in forgetting the knowledge of a target class, through gradient ascent on the forgetting data or fine-tuning with the remaining data.
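As a rough illustration of that recipe, the sketch below combines gradient ascent on a forgetting batch with ordinary fine-tuning on a retained batch; the helper name and the `alpha` trade-off are hypothetical, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

def unlearning_step(model, optimizer, forget_batch, retain_batch, alpha=1.0):
    """One class-wise unlearning update: ascend the loss on forgetting data
    while descending it on remaining data (alpha balances the two terms)."""
    (fx, fy), (rx, ry) = forget_batch, retain_batch
    optimizer.zero_grad()
    retain_loss = F.cross_entropy(model(rx), ry)
    forget_loss = F.cross_entropy(model(fx), fy)
    (retain_loss - alpha * forget_loss).backward()  # ascent on the forget term
    optimizer.step()
```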
1 code implementation • 12 Jun 2024 • Yongqiang Chen, Yatao Bian, Bo Han, James Cheng
Extracting the desired interpretable subgraph requires an accurate approximation of SubMT, yet we find that the existing XGNNs can have a huge gap in fitting SubMT.
1 code implementation • 2 Jun 2024 • Chentao Cao, Zhun Zhong, Zhanke Zhou, Yang Liu, Tongliang Liu, Bo Han
In this paper, we propose to tackle this constraint by leveraging the expert knowledge and reasoning capability of large language models (LLMs) to Envision potential Outlier Exposure, termed EOE, without access to any actual OOD data.
no code implementations • 30 May 2024 • Yuhao Wu, Jiangchao Yao, Bo Han, Lina Yao, Tongliang Liu
While Positive-Unlabeled (PU) learning is vital in many real-world scenarios, its application to graph data remains under-explored.
1 code implementation • 29 May 2024 • Hongduan Tian, Feng Liu, Tongliang Liu, Bo Du, Yiu-ming Cheung, Bo Han
In cross-domain few-shot classification, the nearest centroid classifier (NCC) aims to learn representations that construct a metric space where few-shot classification can be performed by measuring the similarities between samples and the prototype of each class.
1 code implementation • NeurIPS 2023 • Ziqing Fan, Ruipeng Zhang, Jiangchao Yao, Bo Han, Ya Zhang, Yanfeng Wang
Partially class-disjoint data (PCDD), a common yet under-explored data formation where each client contributes a part of classes (instead of all classes) of samples, severely challenges the performance of federated algorithms.
1 code implementation • CVPR 2024 • Zihua Zhao, Mengxi Chen, Tianjie Dai, Jiangchao Yao, Bo Han, Ya Zhang, Yanfeng Wang
Prior approaches to leverage such data mainly consider the application of uni-modal noisy label learning without amending the impact on both cross-modal and intra-modal geometrical structures in multimodal learning.
1 code implementation • 25 May 2024 • Runqi Lin, Chaojian Yu, Bo Han, Hang Su, Tongliang Liu
Catastrophic overfitting (CO) presents a significant challenge in single-step adversarial training (AT), manifesting as highly distorted deep neural networks (DNNs) that are vulnerable to multi-step adversarial attacks.
no code implementations • 16 May 2024 • Kunda Yan, Sen Cui, Abudukelimu Wuerkaixi, Jingfeng Zhang, Bo Han, Gang Niu, Masashi Sugiyama, Changshui Zhang
Our framework aims to approximate an optimal cooperation network for each client by optimizing a weighted sum of model similarity and feature complementarity.
no code implementations • 23 Apr 2024 • Yikun Zhang, Geyan Ye, Chaohao Yuan, Bo Han, Long-Kai Huang, Jianhua Yao, Wei Liu, Yu Rong
We design a Hierarchical Adaptive Alignment model to concurrently learn the fine-grained fragment correspondence between two modalities and align these representations of fragments in three levels.
1 code implementation • 19 Apr 2024 • Zeyu Ling, Bo Han, Yongkang Wong, Han Lin, Mohan Kankanhalli, Weidong Geng
Conditional human motion synthesis (HMS) aims to generate human motion sequences that conform to specific conditions.
Ranked #3 on Motion Synthesis on HumanML3D
no code implementations • 7 Apr 2024 • Zhen Fang, Yixuan Li, Feng Liu, Bo Han, Jie Lu
Based on this observation, we next give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios.
1 code implementation • 29 Mar 2024 • Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, Bo Han
In this paper, we propose a novel post hoc OOD detection method, called NegLabel, which takes a vast number of negative labels from extensive corpus databases.
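A schematic of how negative labels can be scored at inference, assuming L2-normalized CLIP-style text and image embeddings; the exact mining, grouping, and scoring in NegLabel may differ, so treat this as a sketch of the idea.

```python
import torch

@torch.no_grad()
def negative_label_score(img_feat, id_txt_feats, neg_txt_feats, temp=0.01):
    """OOD score as the softmax mass assigned to ID labels versus the
    mined negative labels; all features assumed L2-normalized."""
    sims = torch.cat([img_feat @ id_txt_feats.T,        # (B, K) ID similarities
                      img_feat @ neg_txt_feats.T], -1)  # (B, M) negative sims
    probs = (sims / temp).softmax(dim=-1)
    return probs[:, : id_txt_feats.shape[0]].sum(dim=-1)  # high => likely ID
```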
no code implementations • 21 Mar 2024 • Yiwei Zhou, Xiaobo Xia, Zhiwei Lin, Bo Han, Tongliang Liu
The vulnerability of deep neural networks to imperceptible adversarial perturbations has attracted widespread attention.
1 code implementation • 20 Mar 2024 • Jingyi Wang, Xiaobo Xia, Long Lan, Xinghao Wu, Jun Yu, Wenjing Yang, Bo Han, Tongliang Liu
Given data with noisy labels, over-parameterized deep networks gradually overfit the mislabeled data, resulting in poor generalization.
no code implementations • 18 Mar 2024 • Qizhou Wang, Yong Lin, Yongqiang Chen, Ludwig Schmidt, Bo Han, Tong Zhang
The performance drops from the common to counter groups quantify the reliance of models on spurious features (i.e., backgrounds) to predict the animals.
1 code implementation • 15 Mar 2024 • Zhanke Zhou, Yongqi Zhang, Jiangchao Yao, Quanming Yao, Bo Han
To deduce new facts on a knowledge graph (KG), a link predictor learns from the graph structure and collects local evidence to find the answer to a given query.
1 code implementation • 13 Mar 2024 • Pengfei Zheng, Yonggang Zhang, Zhen Fang, Tongliang Liu, Defu Lian, Bo Han
Hence, NoiseDiffusion performs interpolation within the noisy image space and injects raw images into these noisy counterparts to address the challenge of information loss.
2 code implementations • 4 Mar 2024 • Yuhao Wu, Jiangchao Yao, Xiaobo Xia, Jun Yu, Ruxin Wang, Bo Han, Tongliang Liu
Despite the success of the carefully-annotated benchmarks, the effectiveness of existing graph neural networks (GNNs) can be considerably impaired in practice when the real-world graph data is noisily labeled.
1 code implementation • 25 Feb 2024 • Shuhai Zhang, Yiliao Song, Jiahao Yang, Yuanqing Li, Bo Han, Mingkui Tan
Unfortunately, it is challenging to distinguish MGTs and human-written texts because the distributional discrepancy between them is often very subtle due to the remarkable performance of LLMs.
1 code implementation • 23 Feb 2024 • Rong Dai, Yonggang Zhang, Ang Li, Tongliang Liu, Xun Yang, Bo Han
These hard samples are then employed to promote the quality of the ensemble model by adjusting the ensembling weights for each client model.
2 code implementations • 22 Feb 2024 • Yonggang Zhang, Zhiqin Yang, Xinmei Tian, Nannan Wang, Tongliang Liu, Bo Han
Federated semi-supervised learning (FSSL) has emerged as a powerful paradigm for collaboratively training machine learning models using distributed data with label deficiency.
no code implementations • 10 Feb 2024 • Zhenheng Tang, Yonggang Zhang, Shaohuai Shi, Xinmei Tian, Tongliang Liu, Bo Han, Xiaowen Chu
First, we analyze the generalization contribution of local training and conclude that this generalization contribution is bounded by the conditional Wasserstein distance between the data distribution of different clients.
no code implementations • 6 Feb 2024 • Chenxi Liu, Yongqiang Chen, Tongliang Liu, Mingming Gong, James Cheng, Bo Han, Kun Zhang
The rise of large language models (LLMs), which are trained to learn rich knowledge from massive observations of the world, provides a new opportunity to assist in discovering high-level hidden variables from raw observational data.
no code implementations • 5 Feb 2024 • Binghui Xie, Yatao Bian, Kaiwen Zhou, Yongqiang Chen, Peilin Zhao, Bo Han, Wei Meng, James Cheng
Learning neural subset selection tasks, such as compound selection in AI-aided drug discovery, has become increasingly pivotal across diverse applications.
no code implementations • 16 Jan 2024 • Binghui Xie, Yongqiang Chen, Jiaqi Wang, Kaiwen Zhou, Bo Han, Wei Meng, James Cheng
However, in non-stationary tasks where new domains evolve in an underlying continuous structure, such as time, merely extracting the invariant features is insufficient for generalization to the evolving new domains.
no code implementations • 25 Dec 2023 • Songming Zhang, Yuxiao Luo, Qizhou Wang, Haoang Chi, Xiaofeng Chen, Bo Han, Jinyan Li
Deep neural networks often face generalization problems to handle out-of-distribution (OOD) data, and there remains a notable theoretical gap between the contributing factors and their respective impacts.
1 code implementation • 20 Dec 2023 • Yang Lu, Lin Chen, Yonggang Zhang, Yiliang Zhang, Bo Han, Yiu-ming Cheung, Hanzi Wang
The model trained on noisy labels serves as a "bad teacher" in knowledge distillation, aiming to decrease the risk of providing incorrect information.
no code implementations • 30 Nov 2023 • Yongqiang Chen, Binghui Xie, Kaiwen Zhou, Bo Han, Yatao Bian, James Cheng
Surprisingly, DeepSet outperforms transformers across a variety of distribution shifts, implying that preserving permutation invariance symmetry to input demonstrations is crucial for OOD ICL.
1 code implementation • NeurIPS 2023 • Haotian Zheng, Qizhou Wang, Zhen Fang, Xiaobo Xia, Feng Liu, Tongliang Liu, Bo Han
To this end, we suggest that generated data (with mistaken OOD generation) can be used to devise an auxiliary OOD detection task to facilitate real OOD detection.
Out-of-Distribution Detection
1 code implementation • 6 Nov 2023 • Xuan Li, Zhanke Zhou, Jianing Zhu, Jiangchao Yao, Tongliang Liu, Bo Han
Despite remarkable success in various applications, large language models (LLMs) are vulnerable to adversarial jailbreaks that make the safety guardrails void.
1 code implementation • NeurIPS 2023 • Qizhou Wang, Zhen Fang, Yonggang Zhang, Feng Liu, Yixuan Li, Bo Han
Accordingly, we propose Distributional-Augmented OOD Learning (DAL), alleviating the OOD distribution discrepancy by crafting an OOD distribution set that contains all distributions in a Wasserstein ball centered on the auxiliary OOD distribution.
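Written out schematically, the objective described above amounts to a distributionally robust term over a Wasserstein ball around the auxiliary OOD distribution; the radius r and weight \lambda below are assumed notation, not taken from the paper.

```latex
\min_{f}\;\; \mathbb{E}_{(x,y)\sim \mathcal{D}_{\mathrm{ID}}}\big[\ell(f(x),y)\big]
\;+\; \lambda \max_{P:\, W(P,\,P_{\mathrm{aux}})\le r}\; \mathbb{E}_{x\sim P}\big[\ell_{\mathrm{OOD}}(f(x))\big]
```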
1 code implementation • 2 Nov 2023 • Xuan Li, Zhanke Zhou, Jiangchao Yao, Yu Rong, Lu Zhang, Bo Han
To tackle this issue, we propose a method that abstracts the collective information of atomic groups into a few Neural Atoms by implicitly projecting the atoms of a molecule.
1 code implementation • NeurIPS 2023 • Zhanke Zhou, Jiangchao Yao, Jiaxu Liu, Xiawei Guo, Quanming Yao, Li He, Liang Wang, Bo Zheng, Bo Han
To address this dilemma, we propose an information-theory-guided principle, Robust Graph Information Bottleneck (RGIB), to extract reliable supervision signals and avoid representation collapse.
1 code implementation • NeurIPS 2023 • Yongqiang Chen, Yatao Bian, Kaiwen Zhou, Binghui Xie, Bo Han, James Cheng
Invariant graph representation learning aims to learn the invariance among data from different environments for out-of-distribution generalization on graphs.
1 code implementation • NeurIPS 2023 • Zhihan Zhou, Jiangchao Yao, Feng Hong, Ya zhang, Bo Han, Yanfeng Wang
Self-supervised learning (SSL) as an effective paradigm of representation learning has achieved tremendous success on various curated datasets in diverse scenarios.
no code implementations • 25 Oct 2023 • Zhuo Huang, Muyang Li, Li Shen, Jun Yu, Chen Gong, Bo Han, Tongliang Liu
By fully exploring both variant and invariant parameters, our EVIL can effectively identify a robust subnetwork to improve OOD generalization.
1 code implementation • NeurIPS 2023 • Zhuo Huang, Li Shen, Jun Yu, Bo Han, Tongliang Liu
Therefore, the label guidance on labeled data is hard to propagate to unlabeled data.
1 code implementation • ICCV 2023 • Ke Liu, Feng Liu, Haishuai Wang, Ning Ma, Jiajun Bu, Bo Han
Based on this fact, we introduce a simple partition mechanism to boost the performance of two INR methods for image reconstruction: one for learning INRs, and the other for learning-to-learn INRs.
1 code implementation • 17 Oct 2023 • Wei Yao, Zhanke Zhou, Zhicong Li, Bo Han, Yong Liu
To mitigate such bias while achieving comparable accuracy, a promising approach is to introduce surrogate functions of the concerned fairness definition and solve a constrained optimization problem.
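In schematic form, the constrained problem referred to here is as follows, where \hat{g} denotes a differentiable surrogate of the chosen fairness violation and \epsilon a tolerance; both symbols are assumed notation rather than the paper's.

```latex
\min_{\theta}\;\; \mathbb{E}_{(x,y)}\big[\ell(f_{\theta}(x), y)\big]
\quad \text{s.t.} \quad \hat{g}(f_{\theta}) \le \epsilon
```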
1 code implementation • 13 Oct 2023 • Runqi Lin, Chaojian Yu, Bo Han, Tongliang Liu
In this work, we adopt a unified perspective by solely focusing on natural patterns to explore different types of overfitting.
2 code implementations • NeurIPS 2023 • Zhiqin Yang, Yonggang Zhang, Yu Zheng, Xinmei Tian, Hao Peng, Tongliang Liu, Bo Han
Comprehensive experiments demonstrate the efficacy of FedFed in promoting model performance.
1 code implementation • 4 Oct 2023 • ZiHao Wang, Yongqiang Chen, Yang Duan, Weijiang Li, Bo Han, James Cheng, Hanghang Tong
Under this framework, we create comprehensive datasets to benchmark (1) the state-of-the-art ML approaches for reaction prediction in the OOD setting and (2) the state-of-the-art graph OOD methods in kinetics property prediction problems.
no code implementations • 1 Oct 2023 • Chaojian Yu, Xiaolong Shi, Jun Yu, Bo Han, Tongliang Liu
Given that the only difference between adversarial and natural training lies in the inclusion of adversarial perturbations, we further hypothesize that adversarial perturbations degrade the generalization of features in natural data and verify this hypothesis through extensive experiments.
1 code implementation • 6 Sep 2023 • Zeyu Ling, Bo Han, Yongkang Wong, Mohan Kankanhalli, Weidong Geng
We also introduce a Transformer-based diffusion model MWNet (DDPM-like) as our main branch that can capture the spatial complexity and inter-joint correlations in motion sequences through a channel-dimension self-attention module.
1 code implementation • 2 Sep 2023 • Xiaobo Xia, Pengqian Lu, Chen Gong, Bo Han, Jun Yu, Tongliang Liu
However, such a procedure is debatable on two fronts: (a) it does not consider the harmful influence of noisy labels within the selected small-loss examples; (b) it does not make good use of the discarded large-loss examples, which may be clean or carry meaningful information for generalization.
no code implementations • 14 Jul 2023 • Fei Zhang, Yunjie Ye, Lei Feng, Zhongwen Rao, Jieming Zhu, Marcus Kalander, Chen Gong, Jianye Hao, Bo Han
In this setting, an oracle annotates the query samples with partial labels, relaxing the oracle from the demanding accurate labeling process.
no code implementations • 12 Jul 2023 • Ruijiang Dong, Feng Liu, Haoang Chi, Tongliang Liu, Mingming Gong, Gang Niu, Masashi Sugiyama, Bo Han
In this paper, we propose a diversity-enhancing generative network (DEG-Net) for the FHA problem, which can generate diverse unlabeled data with the help of a kernel independence measure: the Hilbert-Schmidt independence criterion (HSIC).
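For concreteness, below is a standard biased empirical estimator of the HSIC mentioned above, with Gaussian kernels; the bandwidth `sigma` and this particular estimator are assumptions, as DEG-Net may use a different kernel or normalization.

```python
import torch

def hsic_biased(x, y, sigma=1.0):
    """Biased empirical HSIC: trace(Kx H Ky H) / (n-1)^2 with Gaussian
    kernels, where H is the centering matrix."""
    def gram(z):
        return torch.exp(-torch.cdist(z, z) ** 2 / (2 * sigma ** 2))
    n = x.shape[0]
    h = torch.eye(n, device=x.device) - torch.ones(n, n, device=x.device) / n
    return torch.trace(gram(x) @ h @ gram(y) @ h) / (n - 1) ** 2
```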
no code implementations • 11 Jul 2023 • Hui Kang, Sheng Liu, Huaxi Huang, Jun Yu, Bo Han, Dadong Wang, Tongliang Liu
In recent years, research on learning with noisy labels has focused on devising novel algorithms that can achieve robustness to noisy training labels while generalizing to clean data.
no code implementations • 20 Jun 2023 • Zixi Wei, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Xiaofeng Zhu, Heng Tao Shen
This motivates the study on classification from aggregate observations (CFAO), where the supervision is provided to groups of instances, instead of individual instances.
1 code implementation • 15 Jun 2023 • Zhanke Zhou, Chenyu Zhou, Xuan Li, Jiangchao Yao, Quanming Yao, Bo Han
Although powerful graph neural networks (GNNs) have boosted numerous real-world applications, the potential privacy risk is still underexplored.
no code implementations • 12 Jun 2023 • Yuhao Wu, Xiaobo Xia, Jun Yu, Bo Han, Gang Niu, Masashi Sugiyama, Tongliang Liu
Training a classifier on a huge amount of supervised data is expensive or even prohibitive in situations where the labeling cost is high.
1 code implementation • 6 Jun 2023 • Jianing Zhu, Xiawei Guo, Jiangchao Yao, Chao Du, Li He, Shuo Yuan, Tongliang Liu, Liang Wang, Bo Han
In this paper, we dive into the perspective of model dynamics and propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
1 code implementation • 6 Jun 2023 • Jianing Zhu, Hengzhuang Li, Jiangchao Yao, Tongliang Liu, Jianliang Xu, Bo Han
Based on such insights, we propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
1 code implementation • 28 May 2023 • Jingfeng Zhang, Bo Song, Haohan Wang, Bo Han, Tongliang Liu, Lei Liu, Masashi Sugiyama
To address the challenge posed by BadLabel, we further propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels again distinguishable.
1 code implementation • 25 May 2023 • Shuhai Zhang, Feng Liu, Jiahao Yang, Yifan Yang, Changsheng Li, Bo Han, Mingkui Tan
Last, we propose an EPS-based adversarial detection (EPS-AD) method, in which we develop EPS-based maximum mean discrepancy (MMD) as a metric to measure the discrepancy between the test sample and natural samples.
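The MMD underlying such a detector can be estimated as below (unbiased form, Gaussian kernel); the kernel choice and bandwidth are illustrative assumptions rather than the paper's exact configuration.

```python
import torch

def mmd_squared_unbiased(x, y, sigma=1.0):
    """Unbiased estimate of squared MMD between samples x and y."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    m, n = x.shape[0], y.shape[0]
    kxx = (k(x, x).sum() - m) / (m * (m - 1))  # diagonal of k(x, x) is all ones
    kyy = (k(y, y).sum() - n) / (n * (n - 1))
    return kxx + kyy - 2 * k(x, y).mean()
```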
1 code implementation • 14 May 2023 • ZiHao Wang, Le Ma, Chen Zhang, Bo Han, Yunfei Xu, Yikai Wang, Xinyi Chen, HaoRong Hong, Wenbo Liu, Xinda Wu, Kejun Zhang
Music as an emotional intervention medium has important applications in scenarios such as music therapy, games, and movies.
1 code implementation • 27 Apr 2023 • Rui Dai, Yonggang Zhang, Zhen Fang, Bo Han, Xinmei Tian
We show that MODE can endow models with provable generalization performance on unknown target domains.
1 code implementation • ICML 2023 • Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, Bo Han
In this paper, we show that this assumption makes the above methods incapable when the ID model is trained with class-imbalanced data. Fortunately, by analyzing the causal relations between ID/OOD classes and features, we identify several common scenarios where the OOD-to-ID probabilities should be the ID-class-prior distribution, and we propose two strategies to modify existing inference-time detection methods: (1) replace the uniform distribution with the ID-class-prior distribution if they explicitly use the uniform distribution; (2) otherwise, reweight their scores according to the similarity between the ID-class-prior distribution and the softmax outputs of the pre-trained model.
Out-of-Distribution Detection
1 code implementation • NeurIPS 2023 • Yongqiang Chen, Wei Huang, Kaiwen Zhou, Yatao Bian, Bo Han, James Cheng
Moreover, when fed the ERM learned features to the OOD objectives, the invariant feature learning quality significantly affects the final OOD performance, as OOD objectives rarely learn new features.
no code implementations • 6 Apr 2023 • Weihang Mao, Bo Han, ZiHao Wang
Sketch-guided image editing aims to achieve local fine-tuning of the image based on the sketch information provided by the user, while maintaining the original status of the unedited areas.
no code implementations • 5 Apr 2023 • Shoukai Xu, Jiangchao Yao, Ran Luo, Shuhai Zhang, Zihao Lian, Mingkui Tan, Bo Han, YaoWei Wang
Moreover, the data used for pretraining foundation models are usually invisible and very different from the target data of downstream tasks.
1 code implementation • CVPR 2023 • Huantong Li, Xiangmiao Wu, Fanbing Lv, Daihai Liao, Thomas H. Li, Yonggang Zhang, Bo Han, Mingkui Tan
Nonetheless, we find that the synthetic samples constructed in existing ZSQ methods can be easily fitted by models.
4 code implementations • CVPR 2023 • Zhuo Huang, Miaoxi Zhu, Xiaobo Xia, Li Shen, Jun Yu, Chen Gong, Bo Han, Bo Du, Tongliang Liu
Experimentally, we simulate photon-limited corruptions using CIFAR10/100 and ImageNet30 datasets and show that SharpDRO exhibits a strong generalization ability against severe corruptions and exceeds well-known baseline methods with large performance gains.
1 code implementation • 9 Mar 2023 • Qizhou Wang, Junjie Ye, Feng Liu, Quanyu Dai, Marcus Kalander, Tongliang Liu, Jianye Hao, Bo Han
It leads to a min-max learning scheme -- searching to synthesize OOD data that leads to worst judgments and learning from such OOD data for uniform performance in OOD detection.
no code implementations • 4 Mar 2023 • Xinyi Shang, Gang Huang, Yang Lu, Jian Lou, Bo Han, Yiu-ming Cheung, Hanzi Wang
Federated Semi-Supervised Learning (FSSL) aims to learn a global model from different clients in an environment with both labeled and unlabeled data.
no code implementations • 4 Mar 2023 • Jiren Mai, Fei Zhang, Junjie Ye, Marcus Kalander, Xian Zhang, Wankou Yang, Tongliang Liu, Bo Han
Motivated by this simple but effective learning pattern, we propose a General-Specific Learning Mechanism (GSLM) to explicitly drive a coarse-grained CAM to a fine-grained pseudo mask.
1 code implementation • 1 Mar 2023 • Jianing Zhu, Jiangchao Yao, Tongliang Liu, Quanming Yao, Jianliang Xu, Bo Han
Privacy and security concerns in real-world applications have led to the development of adversarially robust federated models.
1 code implementation • 19 Feb 2023 • Jiangchao Yao, Bo Han, Zhihan Zhou, Ya Zhang, Ivor W. Tsang
We solve this problem by introducing a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
no code implementations • 17 Feb 2023 • Ruizhi Cheng, Songqing Chen, Bo Han
By focusing on immersive interaction among users, the burgeoning Metaverse can be viewed as a natural extension of existing social media.
no code implementations • journal 2023 • Zhuo Huang, Xiaobo Xia, Li Shen, Jun Yu, Chen Gong, Bo Han, Tongliang Liu
Robust generalization aims to deal with the most challenging data distributions, which are rarely present in the training set and contain severe noise corruptions.
no code implementations • 31 Jan 2023 • Bo Han, Yitong Fu, Yixuan Shen
Semantic-driven 3D shape generation aims to generate 3D objects conditioned on text.
no code implementations • ICCV 2023 • Xiaobo Xia, Jiankang Deng, Wei Bao, Yuxuan Du, Bo Han, Shiguang Shan, Tongliang Liu
The issues are that we do not understand why label dependence is helpful in this problem, or how to learn and utilize label dependence using only training data with noisy multiple labels.
1 code implementation • CVPR 2023 • Wuyang Li, Jie Liu, Bo Han, Yixuan Yuan
In a nutshell, ANNA consists of Front-Door Adjustment (FDA) to correct the biased learning in the source domain and Decoupled Causal Alignment (DCA) to transfer the model unbiasedly.
no code implementations • ICCV 2023 • Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu
As selected data have high discrepancies in probabilities, the divergence of two networks can be maintained by training on such data.
1 code implementation • 23 Nov 2022 • Xin He, Jiangchao Yao, Yuxin Wang, Zhenheng Tang, Ka Chu Cheung, Simon See, Bo Han, Xiaowen Chu
One-shot neural architecture search (NAS) substantially improves the search efficiency by training one supernet to estimate the performance of every possible child architecture (i.e., subnet).
Ranked #26 on Neural Architecture Search on NAS-Bench-201, CIFAR-10
1 code implementation • NeurIPS 2022 • De Cheng, Yixiong Ning, Nannan Wang, Xinbo Gao, Heng Yang, Yuxuan Du, Bo Han, Tongliang Liu
We show that the cycle-consistency regularization helps to minimize the volume of the transition matrix T indirectly without exploiting the estimated noisy class posterior, which could further encourage the estimated transition matrix T to converge to its optimal solution.
1 code implementation • 1 Nov 2022 • Jianan Zhou, Jianing Zhu, Jingfeng Zhang, Tongliang Liu, Gang Niu, Bo Han, Masashi Sugiyama
Adversarial training (AT) with imperfect supervision is significant but receives limited attention.
1 code implementation • 27 Oct 2022 • Qizhou Wang, Feng Liu, Yonggang Zhang, Jing Zhang, Chen Gong, Tongliang Liu, Bo Han
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.
Ranked #20 on Out-of-Distribution Detection on ImageNet-1k vs Places
no code implementations • 26 Oct 2022 • Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, Feng Liu
Based on this observation, we next give several necessary and sufficient conditions to characterize the learnability of OOD detection in some practical scenarios.
no code implementations • 4 Oct 2022 • Chaojian Yu, Dawei Zhou, Li Shen, Jun Yu, Bo Han, Mingming Gong, Nannan Wang, Tongliang Liu
First, applying a pre-specified perturbation budget to networks of various model capacities yields divergent degrees of robustness disparity between natural and robust accuracies, which deviates from the robust network's desideratum.
1 code implementation • 29 Sep 2022 • Chenghao Sun, Yonggang Zhang, Chaoqun Wan, Qizhou Wang, Ya Li, Tongliang Liu, Bo Han, Xinmei Tian
As it is hard to mitigate the approximation error with few available samples, we propose Error TransFormer (ETF) for lightweight attacks.
1 code implementation • ICCV 2023 • Yang Lu, Yiliang Zhang, Bo Han, Yiu-ming Cheung, Hanzi Wang
In this case, it is hard to distinguish clean samples from noisy samples on the intrinsic tail classes with the unknown intrinsic class distribution.
1 code implementation • 25 Jul 2022 • Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Xiaoyu Wang, Yibing Zhan, Tongliang Liu
To alleviate this negative effect, in this paper, we investigate the dependence between outputs of the target model and input adversarial samples from the perspective of information theory, and propose an adversarial defense method.
no code implementations • 7 Jul 2022 • Jiangchao Yao, Feng Wang, Xichen Ding, Shaohu Chen, Bo Han, Jingren Zhou, Hongxia Yang
To overcome this issue, we propose a meta controller to dynamically manage the collaboration between the on-device recommender and the cloud-based recommender, and introduce a novel efficient sample construction from the causal perspective to solve the dataset absence issue of meta controller.
1 code implementation • 7 Jul 2022 • Zhuo Huang, Xiaobo Xia, Li Shen, Bo Han, Mingming Gong, Chen Gong, Tongliang Liu
Machine learning models are vulnerable to Out-Of-Distribution (OOD) examples, and such a problem has drawn much attention.
1 code implementation • 27 Jun 2022 • Chenhan Jin, Kaiwen Zhou, Bo Han, James Cheng, Tieyong Zeng
We consider stochastic convex optimization for heavy-tailed data with the guarantee of being differentially private (DP).
1 code implementation • 17 Jun 2022 • Chaojian Yu, Bo Han, Li Shen, Jun Yu, Chen Gong, Mingming Gong, Tongliang Liu
Here, we explore the causes of robust overfitting by comparing the data distribution of non-overfit (weak adversary) and overfitted (strong adversary) adversarial training, and observe that the distribution of the adversarial data generated by the weak adversary mainly contains small-loss data.
1 code implementation • 15 Jun 2022 • Ruize Gao, Jiongxiao Wang, Kaiwen Zhou, Feng Liu, Binghui Xie, Gang Niu, Bo Han, James Cheng
The AutoAttack (AA) has been the most reliable method to evaluate adversarial robustness when considerable computational resources are available.
3 code implementations • 15 Jun 2022 • Yongqiang Chen, Kaiwen Zhou, Yatao Bian, Binghui Xie, Bingzhe Wu, Yonggang Zhang, Kaili Ma, Han Yang, Peilin Zhao, Bo Han, James Cheng
Recently, there has been a growing surge of interest in enabling machine learning systems to generalize well to Out-of-Distribution (OOD) data.
2 code implementations • 11 Jun 2022 • Xiong Peng, Feng Liu, Jingfeng Zhang, Long Lan, Junjie Ye, Tongliang Liu, Bo Han
To defend against MI attacks, previous work utilizes a unilateral dependency optimization strategy, i.e., minimizing the dependency between inputs (i.e., features) and outputs (i.e., labels) while training the classifier.
no code implementations • CVPR 2022 • De Cheng, Tongliang Liu, Yixiong Ning, Nannan Wang, Bo Han, Gang Niu, Xinbo Gao, Masashi Sugiyama
In label-noise learning, estimating the transition matrix has attracted more and more attention as the matrix plays an important role in building statistically consistent classifiers.
1 code implementation • 6 Jun 2022 • Zhenheng Tang, Yonggang Zhang, Shaohuai Shi, Xin He, Bo Han, Xiaowen Chu
In federated learning (FL), model performance typically suffers from client drift induced by data heterogeneity, and mainstream works focus on correcting client drift.
no code implementations • 4 Jun 2022 • Yingbin Bai, Erkun Yang, Zhaoqing Wang, Yuxuan Du, Bo Han, Cheng Deng, Dadong Wang, Tongliang Liu
As training goes on, the model begins to overfit the noisy pairs.
1 code implementation • 30 May 2022 • Chaojian Yu, Bo Han, Mingming Gong, Li Shen, Shiming Ge, Bo Du, Tongliang Liu
Based on these observations, we propose a robust perturbation strategy to constrain the extent of weight perturbation.
2 code implementations • 30 May 2022 • Yongqi Zhang, Zhanke Zhou, Quanming Yao, Xiaowen Chu, Bo Han
An important design component of GNN-based KG reasoning methods is called the propagation path, which contains a set of involved entities in each propagation step.
no code implementations • 27 May 2022 • Aoqi Zuo, Susan Wei, Tongliang Liu, Bo Han, Kun Zhang, Mingming Gong
Interestingly, we find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided: the sensitive attributes do not have ancestors in the causal graph.
1 code implementation • 25 May 2022 • Zhihan Zhou, Jiangchao Yao, Yanfeng Wang, Bo Han, Ya Zhang
Different from previous works, we explore this direction from an alternative perspective, i.e., the data perspective, and propose a novel Boosted Contrastive Learning (BCL) method.
no code implementations • 20 May 2022 • Zhuowei Wang, Tianyi Zhou, Guodong Long, Bo Han, Jing Jiang
Federated learning (FL) aims at training a global model on the server side while the training data are collected and located at the local devices.
no code implementations • 18 May 2022 • Xiaobo Xia, Wenhao Yang, Jie Ren, Yewen Li, Yibing Zhan, Bo Han, Tongliang Liu
Second, the constraints for diversity are designed to be task-agnostic, which prevents the constraints from working well.
no code implementations • 6 May 2022 • Quanming Yao, Yaqing Wang, Bo Han, James Kwok
While the optimization problem is nonconvex and nonsmooth, we show that its critical points still have good statistical performance on the tensor completion problem.
1 code implementation • ICLR 2022 • Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, James Cheng
Recently, Graph Injection Attack (GIA) has emerged as a practical attack scenario on Graph Neural Networks (GNNs), where the adversary merely injects a few malicious nodes instead of modifying existing nodes or edges as in Graph Modification Attack (GMA).
3 code implementations • 11 Feb 2022 • Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, James Cheng
Despite recent success in using the invariance principle for out-of-distribution (OOD) generalization on Euclidean data (e.g., images), studies on graph data are still limited.
no code implementations • 30 Jan 2022 • Yexiong Lin, Yu Yao, Yuxuan Du, Jun Yu, Bo Han, Mingming Gong, Tongliang Liu
Algorithms which minimize the averaged loss have been widely designed for dealing with noisy labels.
no code implementations • 15 Jan 2022 • Yongjie Guan, Xueyu Hou, Nan Wu, Bo Han, Tao Han
In this paper, we propose DeepMix, a mobility-aware, lightweight, and hybrid 3D object detection framework for improving the user experience of AR/MR on mobile headsets.
no code implementations • NeurIPS 2021 • Zhuo Huang, Chao Xue, Bo Han, Jian Yang, Chen Gong
Universal Semi-Supervised Learning (UniSSL) aims to solve the open-set problem where both the class distribution (i.e., class set) and feature distribution (i.e., feature domain) differ between the labeled and unlabeled datasets.
1 code implementation • 30 Nov 2021 • Guohao Ying, Xin He, Bin Gao, Bo Han, Xiaowen Chu
Some recent works try to search both generator (G) and discriminator (D), but they suffer from the instability of GAN training.
Ranked #14 on Image Generation on STL-10
no code implementations • 29 Sep 2021 • Jianing Zhu, Jiangchao Yao, Tongliang Liu, Kunyang Jia, Jingren Zhou, Bo Han, Hongxia Yang
Federated Adversarial Training (FAT) helps us address data privacy and governance issues while maintaining model robustness to adversarial attacks.
no code implementations • 29 Sep 2021 • Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu
The sample selection approach is popular in learning with noisy labels, which tends to select potentially clean data out of noisy data for robust training.
3 code implementations • ICLR 2022 • Fei Zhang, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Tao Qin, Masashi Sugiyama
As the first contribution, we empirically show that the class activation map (CAM), a simple technique for discriminating the learning patterns of each class in images, is surprisingly better than the model itself at selecting the true label from candidate labels.
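For reference, vanilla CAM (Zhou et al., 2016) scores a class by weighting the final convolutional feature maps with that class's fully-connected weights; in the partial-label setting above, one would compare such scores across the candidate labels. The helper below is a generic sketch, not the paper's code.

```python
import torch

def class_activation_map(feature_maps, fc_weight, class_idx):
    """feature_maps: (C, H, W) from the last conv layer;
    fc_weight: (num_classes, C) from the linear classifier."""
    cam = torch.einsum("c,chw->hw", fc_weight[class_idx], feature_maps)
    return torch.relu(cam)  # spatial evidence for the chosen class
```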
no code implementations • 29 Sep 2021 • Yu Yao, Xuefeng Li, Tongliang Liu, Alan Blair, Mingming Gong, Bo Han, Gang Niu, Masashi Sugiyama
Existing methods for learning with noisy labels can be generally divided into two categories: (1) sample selection and label correction based on the memorization effect of neural networks; (2) loss correction with the transition matrix.
no code implementations • 29 Sep 2021 • Xuefeng Du, Tian Bian, Yu Rong, Bo Han, Tongliang Liu, Tingyang Xu, Wenbing Huang, Junzhou Huang
Semi-supervised node classification on graphs is a fundamental problem in graph mining that uses a small set of labeled nodes and many unlabeled nodes for training, so that its performance is quite sensitive to the quality of the node labels.
no code implementations • 29 Sep 2021 • Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu
Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defense against adversarial attacks.
no code implementations • 27 Sep 2021 • Yujie Pan, Jiangchao Yao, Bo Han, Kunyang Jia, Ya Zhang, Hongxia Yang
Click-through rate (CTR) prediction becomes indispensable in ubiquitous web recommendation applications.
no code implementations • 25 Sep 2021 • Zeyuan Chen, Jiangchao Yao, Feng Wang, Kunyang Jia, Bo Han, Wei Zhang, Hongxia Yang
With the hardware development of mobile devices, it is possible to build recommendation models on the mobile side to utilize fine-grained features and real-time feedback.
1 code implementation • 21 Sep 2021 • Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu
Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defense against adversarial attacks.
2 code implementations • NeurIPS 2021 • Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, Kun Zhang
In particular, we show that properly modeling the instances will contribute to the identifiability of the label noise transition matrix and thus lead to a better classifier.
1 code implementation • NeurIPS 2021 • Yingbin Bai, Erkun Yang, Bo Han, Yanhua Yang, Jiatong Li, Yinian Mao, Gang Niu, Tongliang Liu
Instead of early stopping, which trains a whole DNN all at once, we initially train the former DNN layers by optimizing the DNN for a relatively large number of epochs.
Ranked #8 on Learning with noisy labels on CIFAR-10N-Aggregate
no code implementations • 30 Jun 2021 • Ruize Gao, Feng Liu, Kaiwen Zhou, Gang Niu, Bo Han, James Cheng
However, when tested on attacks different from the given attack simulated in training, the robustness may drop significantly (e.g., even worse than no reweighting).
1 code implementation • 24 Jun 2021 • Kahou Tam, Li Li, Bo Han, Chengzhong Xu, Huazhu Fu
Federated learning (FL) collaboratively trains a shared global model depending on multiple local clients, while keeping the training data decentralized in order to preserve data privacy.
1 code implementation • NeurIPS 2021 • Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama
Reweighting adversarial data during training has been recently shown to improve adversarial robustness, where data closer to the current decision boundaries are regarded as more critical and given larger weights.
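A simple way to realize such geometry-aware reweighting is to weight each adversarial example by its (negated) prediction margin, so points near the decision boundary count more; the exponential form and temperature `beta` here are assumptions, not the paper's exact weighting.

```python
import torch

def boundary_weights(logits, labels, beta=2.0):
    """Larger weights for adversarial data closer to the decision boundary."""
    probs = logits.softmax(dim=-1)
    true = probs.gather(1, labels.view(-1, 1)).squeeze(1)
    other = probs.scatter(1, labels.view(-1, 1), 0.0).max(dim=1).values
    margin = true - other            # small or negative near the boundary
    w = torch.exp(-beta * margin)
    return w / w.sum()
```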
1 code implementation • 14 Jun 2021 • Xuefeng Du, Tian Bian, Yu Rong, Bo Han, Tongliang Liu, Tingyang Xu, Wenbing Huang, Yixuan Li, Junzhou Huang
This paper bridges the gap by proposing a pairwise framework for noisy node classification on graphs, which relies on the PI as a primary learning proxy in addition to the pointwise learning from the noisy node class labels.
1 code implementation • ICLR 2022 • Yonggang Zhang, Mingming Gong, Tongliang Liu, Gang Niu, Xinmei Tian, Bo Han, Bernhard Schölkopf, Kun Zhang
The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning.
1 code implementation • NeurIPS 2021 • Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, William K. Cheung, James T. Kwok
To this end, we propose a target orientated hypothesis adaptation network (TOHAN) to solve the FHA problem, where we generate highly-compatible unlabeled data (i.e., an intermediate domain) to help train a target-domain classifier.
1 code implementation • 11 Jun 2021 • Chenhong Zhou, Feng Liu, Chen Gong, Rongfei Zeng, Tongliang Liu, William K. Cheung, Bo Han
However, in an open world, the unlabeled test images probably contain unknown categories and have different distributions from the labeled images.
no code implementations • 10 Jun 2021 • Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Jun Yu, Xiaoyu Wang, Tongliang Liu
However, pre-processing methods may suffer from the robustness degradation effect, in which the defense reduces rather than improves the adversarial robustness of a target model in a white-box setting.
2 code implementations • ICLR 2022 • Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang
However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that teachers are also good at every adversarial data queried by students.
no code implementations • 9 Jun 2021 • Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Chunlei Peng, Xinbo Gao
However, given the continuously evolving attacks, models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.
no code implementations • 1 Jun 2021 • Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama
Many approaches, e.g., loss correction and label correction, cannot handle such open-set noisy labels well, since they need training data and test data to share the same label space, which does not hold for learning with open-set noisy labels.
no code implementations • NeurIPS 2021 • Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama
In this way, we also give large-loss but less-selected data a try; then, we can better distinguish between cases (a) and (b) by seeing whether the losses effectively decrease with the uncertainty after the try.
Ranked #26 on Image Classification on mini WebVision 1.0
1 code implementation • 31 May 2021 • Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Gang Niu, Lizhen Cui, Masashi Sugiyama
First, we thoroughly investigate the injection of noisy labels (NLs) into AT's inner maximization and outer minimization, respectively, and obtain observations on when NL injection benefits AT.
no code implementations • 27 May 2021 • Shuo Yang, Erkun Yang, Bo Han, Yang Liu, Min Xu, Gang Niu, Tongliang Liu
Motivated by the fact that classifiers mostly output Bayes optimal labels for prediction, in this paper we study how to directly model the transition from Bayes optimal labels to noisy labels (i.e., the Bayes-label transition matrix (BLTM)) and learn a classifier to predict Bayes optimal labels.
no code implementations • 14 Apr 2021 • Jiangchao Yao, Feng Wang, Kunyang Jia, Bo Han, Jingren Zhou, Hongxia Yang
With the rapid development of storage and computing power on mobile devices, it becomes critical and popular to deploy models on devices to save onerous communication latencies and to capture real-time features.
no code implementations • 17 Mar 2021 • Qizhou Wang, Jiangchao Yao, Chen Gong, Tongliang Liu, Mingming Gong, Hongxia Yang, Bo Han
Most of the previous approaches in this area focus on the pairwise relation (causal or correlational relationship) with noise, such as learning with noisy labels.
1 code implementation • ICLR 2022 • Haoang Chi, Feng Liu, Bo Han, Wenjing Yang, Long Lan, Tongliang Liu, Gang Niu, Mingyuan Zhou, Masashi Sugiyama
In this paper, we demystify assumptions behind NCD and find that high-level semantic features should be shared among the seen and unseen classes.
no code implementations • 6 Feb 2021 • Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama
A recent adversarial training (AT) study showed that the number of projected gradient descent (PGD) steps to successfully attack a point (i.e., find an adversarial example in its proximity) is an effective measure of the robustness of this point.
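The measure can be sketched as counting PGD iterations until a single example is first misclassified; the step size, budget, and batch-of-one assumption below are illustrative rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def pgd_steps_to_flip(model, x, y, eps=8 / 255, alpha=2 / 255, max_steps=20):
    """Return the number of PGD steps needed to attack one example
    (x of shape (1, ...)); larger counts indicate a more robust point."""
    x_adv = x.clone().detach()
    for step in range(1, max_steps + 1):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        with torch.no_grad():
            # signed-gradient step, projected back into the eps-ball and [0, 1]
            x_adv = (x + (x_adv + alpha * grad.sign() - x).clamp(-eps, eps)).clamp(0, 1)
            if model(x_adv).argmax(dim=-1).item() != y.item():
                return step
    return max_steps  # never flipped within the budget
```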
1 code implementation • 4 Feb 2021 • Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama
In label-noise learning, the transition matrix plays a key role in building statistically consistent classifiers.
Ranked #14 on Learning with noisy labels on CIFAR-100N
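Once such a transition matrix T is estimated, the standard forward-correction recipe plugs it in as below, with T[i, j] = P(noisy label j | clean label i). This is the generic construction (in the style of Patrini et al.), not the paper's particular estimator.

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, T):
    """Cross-entropy against noisy labels after pushing the clean class
    posterior through the (row-stochastic) transition matrix T."""
    clean_posterior = logits.softmax(dim=-1)          # estimated P(clean | x)
    noisy_posterior = clean_posterior @ T             # implied  P(noisy | x)
    return F.nll_loss(noisy_posterior.clamp_min(1e-12).log(), noisy_labels)
```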
1 code implementation • 3 Feb 2021 • Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama
In adversarial training (AT), the main focus has been the objective and optimizer while the model has been less studied, so that the models being used are still those classic ones in standard training (ST).
2 code implementations • 14 Jan 2021 • Qizhou Wang, Bo Han, Tongliang Liu, Gang Niu, Jian Yang, Chen Gong
The drastic increase of data quantity often brings the severe decrease of data quality, such as incorrect label annotations, which poses a great challenge for robustly training Deep Neural Networks (DNNs).
no code implementations • ICLR 2021 • Xiaobo Xia, Tongliang Liu, Bo Han, Chen Gong, Nannan Wang, ZongYuan Ge, Yi Chang
The early stopping method can therefore be exploited for learning with noisy labels.
Ranked #32 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)
no code implementations • 1 Jan 2021 • Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Xinbo Gao
Motivated by this observation, we propose a defense framework ADD-Defense, which extracts the invariant information, called the perturbation-invariant representation (PIR), to defend against widespread adversarial examples.
no code implementations • 2 Dec 2020 • Zhuowei Wang, Jing Jiang, Bo Han, Lei Feng, Bo An, Gang Niu, Guodong Long
We also instantiate our framework with different combinations, which set the new state of the art on benchmark-simulated and real-world datasets with noisy labels.
no code implementations • 2 Dec 2020 • Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Jiankang Deng, Jiatong Li, Yinian Mao
The traditional transition matrix is limited to model closed-set label noise, where noisy training data has true class labels within the noisy label set.
1 code implementation • 9 Nov 2020 • Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W. Tsang, James T. Kwok, Masashi Sugiyama
Classical machine learning implicitly assumes that labels of the training data are sampled from a clean distribution, which can be too restrictive for real-world scenarios.
no code implementations • 6 Nov 2020 • Bingcong Li, Bo Han, Zhuowei Wang, Jing Jiang, Guodong Long
Specifically, our method maintains a dynamically updating confusion matrix, which analyzes confusable classes in the dataset.
2 code implementations • 22 Oct 2020 • Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama
However, it has been shown that the MMD test is unaware of adversarial attacks -- the MMD test failed to detect the discrepancy between natural and adversarial data.
no code implementations • 5 Oct 2020 • Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama
To alleviate the data requirement for training effective binary classifiers in binary classification, many weakly supervised learning settings have been proposed.
2 code implementations • ICLR 2021 • Jingfeng Zhang, Jianing Zhu, Gang Niu, Bo Han, Masashi Sugiyama, Mohan Kankanhalli
The belief was challenged by recent studies where we can maintain the robustness and improve the accuracy.
no code implementations • 28 Sep 2020 • Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu
It is worthwhile to perform the transformation: We prove that the noise rate for the noisy similarity labels is lower than that of the noisy class labels, because similarity labels themselves are robust to noise.
no code implementations • NeurIPS 2020 • Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama
Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.