Search Results for author: Tongliang Liu

Found 217 papers, 97 papers with code

Killing Two Birds with One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC

4 code implementations 28 Mar 2022 Xiang An, Jiankang Deng, Jia Guo, Ziyong Feng, Xuhan Zhu, Jing Yang, Tongliang Liu

In each iteration, positive class centers and a random subset of negative class centers are selected to compute the margin-based softmax loss.
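For intuition, here is a minimal sketch of that sampling step; the function name, the CosFace-style additive margin, and the sampling ratio are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of Partial FC-style sampling: keep every positive class
# center, add a random subset of negative centers, then apply a
# margin-based softmax over the sampled subset only.
import torch
import torch.nn.functional as F

def partial_fc_loss(embeddings, labels, centers, sample_ratio=0.1,
                    margin=0.35, scale=64.0):
    num_classes = centers.shape[0]
    positives = labels.unique()
    num_neg = int(sample_ratio * num_classes)
    candidates = torch.cat([positives, torch.randperm(num_classes)[:num_neg]])
    sampled = candidates.unique()                       # sorted class ids
    index = torch.full((num_classes,), -1, dtype=torch.long)
    index[sampled] = torch.arange(sampled.numel())      # class id -> column
    logits = F.normalize(embeddings) @ F.normalize(centers[sampled]).t()
    target = index[labels]
    logits[torch.arange(len(labels)), target] -= margin  # additive cosine margin
    return F.cross_entropy(scale * logits, target)
```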

Face Recognition Face Verification

Killing Two Birds With One Stone: Efficient and Robust Training of Face Recognition CNNs by Partial FC

1 code implementation CVPR 2022 Xiang An, Jiankang Deng, Jia Guo, Ziyong Feng, Xuhan Zhu, Jing Yang, Tongliang Liu

In each iteration, positive class centers and a random subset of negative class centers are selected to compute the margin-based softmax loss.

Face Recognition

Domain Generalization via Conditional Invariant Representation

1 code implementation 23 Jul 2018 Ya Li, Mingming Gong, Xinmei Tian, Tongliang Liu, DaCheng Tao

With the conditional invariant representation, the invariance of the joint distribution $\mathbb{P}(h(X), Y)$ can be guaranteed if the class prior $\mathbb{P}(Y)$ does not change across training and test domains.
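Spelled out, the factorization behind that guarantee (my reading of the excerpt, with $d$ indexing domains): $\mathbb{P}^{d}(h(X), Y) = \mathbb{P}^{d}(h(X)\mid Y)\,\mathbb{P}^{d}(Y) = \mathbb{P}(h(X)\mid Y)\,\mathbb{P}(Y) = \mathbb{P}(h(X), Y)$, where the second equality uses the learned conditional invariance of $\mathbb{P}(h(X)\mid Y)$ together with the assumed shared class prior $\mathbb{P}(Y)$.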

Domain Generalization

Unicom: Universal and Compact Representation Learning for Image Retrieval

2 code implementations 12 Apr 2023 Xiang An, Jiankang Deng, Kaicheng Yang, Jiawei Li, Ziyong Feng, Jia Guo, Jing Yang, Tongliang Liu

To further enhance the low-dimensional feature representation, we randomly select partial feature dimensions when calculating the similarities between embeddings and class-wise prototypes.
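A toy PyTorch sketch of that random partial-dimension similarity (the function name and keep ratio are assumptions for illustration):

```python
# Compare embeddings to class-wise prototypes on a random subset of
# feature dimensions, as the excerpt describes.
import torch
import torch.nn.functional as F

def partial_dim_similarity(embeddings, prototypes, keep_ratio=0.5):
    dim = embeddings.shape[1]
    kept = torch.randperm(dim)[: int(keep_ratio * dim)]
    e = F.normalize(embeddings[:, kept], dim=1)
    p = F.normalize(prototypes[:, kept], dim=1)
    return e @ p.t()   # (batch, num_prototypes) cosine similarities
```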

Ranked #1 on Image Retrieval on SOP (using extra training data)

Image Retrieval Metric Learning +4

CRIS: CLIP-Driven Referring Image Segmentation

1 code implementation CVPR 2022 Zhaoqing Wang, Yu Lu, Qiang Li, Xunqiang Tao, Yandong Guo, Mingming Gong, Tongliang Liu

In addition, we present text-to-pixel contrastive learning to explicitly enforce the text feature to be similar to the related pixel-level features and dissimilar to irrelevant ones.
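One plausible reading of that text-to-pixel contrastive term, as a hedged PyTorch sketch (the binary cross-entropy formulation and temperature are my assumptions, not the official CRIS code):

```python
import torch
import torch.nn.functional as F

def text_to_pixel_contrast(text_feat, pixel_feats, mask, temperature=0.07):
    # text_feat: (d,); pixel_feats: (d, H, W); mask: (H, W) binary ground truth
    d, h, w = pixel_feats.shape
    pixels = F.normalize(pixel_feats.reshape(d, h * w), dim=0)
    sims = (F.normalize(text_feat, dim=0) @ pixels) / temperature   # (H*W,)
    # pull the text feature toward pixels inside the mask, push it from the rest
    return F.binary_cross_entropy_with_logits(sims, mask.reshape(-1).float())
```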

Contrastive Learning Generalized Referring Expression Segmentation +3

DeepSolo: Let Transformer Decoder with Explicit Points Solo for Text Spotting

2 code implementations CVPR 2023 Maoyuan Ye, Jing Zhang, Shanshan Zhao, Juhua Liu, Tongliang Liu, Bo Du, DaCheng Tao

In this paper, we present DeepSolo, a simple DETR-like baseline that lets a single Decoder with Explicit Points Solo for text detection and recognition simultaneously.

Ranked #1 on Text Spotting on Total-Text (using extra training data)

Scene Text Detection Text Detection +2

DeepSolo++: Let Transformer Decoder with Explicit Points Solo for Multilingual Text Spotting

2 code implementations 31 May 2023 Maoyuan Ye, Jing Zhang, Shanshan Zhao, Juhua Liu, Tongliang Liu, Bo Du, DaCheng Tao

In this paper, we present DeepSolo++, a simple DETR-like baseline that lets a single decoder with explicit points solo for text detection, recognition, and script identification simultaneously.

Scene Text Detection Text Detection +1

Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs

3 code implementations 11 Feb 2022 Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, James Cheng

Despite recent success in using the invariance principle for out-of-distribution (OOD) generalization on Euclidean data (e.g., images), studies on graph data are still limited.

Drug Discovery Graph Learning +1

Learning with Noisy Labels Revisited: A Study Using Real-World Human Annotations

2 code implementations ICLR 2022 Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, Yang Liu

These observations require us to rethink the treatment of noisy labels, and we hope the availability of these two datasets would facilitate the development and evaluation of future learning with noisy label solutions.

Benchmarking Learning with noisy labels +1

Eliminating Catastrophic Overfitting Via Abnormal Adversarial Examples Regularization

2 code implementations NeurIPS 2023 Runqi Lin, Chaojian Yu, Tongliang Liu

Specifically, we design a novel method, termed Abnormal Adversarial Examples Regularization (AAER), which explicitly regularizes the variation of AAEs to hinder the classifier from becoming distorted.

Adversarial Robustness

Are Anchor Points Really Indispensable in Label-Noise Learning?

1 code implementation NeurIPS 2019 Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, Masashi Sugiyama

Existing theories have shown that the transition matrix can be learned by exploiting \textit{anchor points} (i.e., data points that belong to a specific class almost surely).
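For context, the classic anchor-point estimator that sentence alludes to can be sketched in a few lines of NumPy (a hedged illustration; variable names are mine):

```python
import numpy as np

def estimate_transition_matrix(noisy_posteriors):
    # noisy_posteriors: (n, c) model estimates of P(noisy label | x).
    # An anchor of class i satisfies P(Y = i | x) ~= 1, so the i-th row of T
    # is the noisy class-posterior evaluated at that anchor.
    n, c = noisy_posteriors.shape
    T = np.zeros((c, c))
    for i in range(c):
        anchor = np.argmax(noisy_posteriors[:, i])
        T[i] = noisy_posteriors[anchor]
    return T / T.sum(axis=1, keepdims=True)
```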

Learning with noisy labels

Graph Pooling for Graph Neural Networks: Progress, Challenges, and Opportunities

1 code implementation 15 Apr 2022 Chuang Liu, Yibing Zhan, Jia Wu, Chang Li, Bo Du, Wenbin Hu, Tongliang Liu, DaCheng Tao

Graph neural networks have emerged as a leading architecture for many graph-level tasks, such as graph classification and graph generation.

Graph Classification Graph Generation

DeepInception: Hypnotize Large Language Model to Be Jailbreaker

1 code implementation 6 Nov 2023 Xuan Li, Zhanke Zhou, Jianing Zhu, Jiangchao Yao, Tongliang Liu, Bo Han

Despite remarkable success in various applications, large language models (LLMs) are vulnerable to adversarial jailbreaks that make the safety guardrails void.

Language Modelling Large Language Model

Selective-Supervised Contrastive Learning with Noisy Labels

1 code implementation CVPR 2022 Shikun Li, Xiaobo Xia, Shiming Ge, Tongliang Liu

In the selection process, by measuring the agreement between learned representations and given labels, we first identify confident examples that are exploited to build confident pairs.

Contrastive Learning Learning with noisy labels +1

A Survey of Label-noise Representation Learning: Past, Present and Future

1 code implementation 9 Nov 2020 Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W. Tsang, James T. Kwok, Masashi Sugiyama

Classical machine learning implicitly assumes that labels of the training data are sampled from a clean distribution, which can be too restrictive for real-world scenarios.

BIG-bench Machine Learning Learning Theory +1

ALIP: Adaptive Language-Image Pre-training with Synthetic Caption

1 code implementation ICCV 2023 Kaicheng Yang, Jiankang Deng, Xiang An, Jiawei Li, Ziyong Feng, Jia Guo, Jing Yang, Tongliang Liu

However, the presence of intrinsic noise and unmatched image-text pairs in web data can potentially affect the performance of representation learning.

Representation Learning Retrieval +1

Part-dependent Label Noise: Towards Instance-dependent Label Noise

1 code implementation NeurIPS 2020 Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, DaCheng Tao, Masashi Sugiyama

Learning with the \textit{instance-dependent} label noise is challenging, because it is hard to model such real-world noise.

Domain Generalization via Entropy Regularization

1 code implementation NeurIPS 2020 Shanshan Zhao, Mingming Gong, Tongliang Liu, Huan Fu, DaCheng Tao

To arrive at this, some methods introduce a domain discriminator through adversarial learning to match the feature distributions in multiple source domains.

Domain Generalization

A Second-Order Approach to Learning with Instance-Dependent Label Noise

1 code implementation CVPR 2021 Zhaowei Zhu, Tongliang Liu, Yang Liu

We first provide evidence that the heterogeneous instance-dependent label noise effectively down-weights examples with higher noise rates in a non-uniform way and thus causes imbalances, rendering the strategy of directly applying methods for class-dependent label noise questionable.

Image Classification Image Classification with Label Noise

Learning with Biased Complementary Labels

1 code implementation ECCV 2018 Xiyu Yu, Tongliang Liu, Mingming Gong, DaCheng Tao

We therefore reason that the transition probabilities will be different.

Open-Vocabulary Segmentation with Unpaired Mask-Text Supervision

1 code implementation 14 Feb 2024 Zhaoqing Wang, Xiaobo Xia, Ziye Chen, Xiao He, Yandong Guo, Mingming Gong, Tongliang Liu

With this unpaired mask-text supervision, we propose a new weakly-supervised open-vocabulary segmentation framework (Uni-OVSeg) that leverages confident pairs of mask predictions and entities in text descriptions.

Language Modelling

One Shot Learning as Instruction Data Prospector for Large Language Models

1 code implementation 16 Dec 2023 Yunshui Li, Binyuan Hui, Xiaobo Xia, Jiaxi Yang, Min Yang, Lei Zhang, Shuzheng Si, Junhao Liu, Tongliang Liu, Fei Huang, Yongbin Li

Nuggets assesses the potential of individual instruction examples to act as effective one shot examples, thereby identifying those that can significantly enhance diverse task performance.

One-Shot Learning

Semantic Structure-based Unsupervised Deep Hashing

1 code implementation IJCAI 2018 Erkun Yang, Cheng Deng, Tongliang Liu, Wei Liu, DaCheng Tao

Hashing is becoming increasingly popular for approximate nearest neighbor searching in massive databases due to its storage and search efficiency.

Deep Hashing Semantic Similarity +1

Point-Query Quadtree for Crowd Counting, Localization, and More

1 code implementation ICCV 2023 Chengxin Liu, Hao Lu, Zhiguo Cao, Tongliang Liu

Such a querying process yields an intuitive, universal modeling of crowd as both the input and output are interpretable and steerable.

Crowd Counting

Understanding and Improving Graph Injection Attack by Promoting Unnoticeability

1 code implementation ICLR 2022 Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, James Cheng

Recently, Graph Injection Attack (GIA) has emerged as a practical attack scenario on Graph Neural Networks (GNNs), where the adversary can merely inject a few malicious nodes instead of modifying existing nodes or edges, i.e., Graph Modification Attack (GMA).

Understanding and Improving Early Stopping for Learning with Noisy Labels

1 code implementation NeurIPS 2021 Yingbin Bai, Erkun Yang, Bo Han, Yanhua Yang, Jiatong Li, Yinian Mao, Gang Niu, Tongliang Liu

Instead of early stopping, which trains a whole DNN all at once, we initially train the former DNN layers by optimizing the DNN with a relatively large number of epochs.

Learning with noisy labels Memorization

How Well Does GPT-4V(ision) Adapt to Distribution Shifts? A Preliminary Investigation

1 code implementation 12 Dec 2023 Zhongyi Han, Guanglin Zhou, Rundong He, Jindong Wang, Tailin Wu, Yilong Yin, Salman Khan, Lina Yao, Tongliang Liu, Kun Zhang

We further investigate its adaptability to controlled data perturbations and examine the efficacy of in-context learning as a tool to enhance its adaptation.

Anomaly Detection Autonomous Driving +6

Combating Exacerbated Heterogeneity for Robust Models in Federated Learning

1 code implementation 1 Mar 2023 Jianing Zhu, Jiangchao Yao, Tongliang Liu, Quanming Yao, Jianliang Xu, Bo Han

Privacy and security concerns in real-world applications have led to the development of adversarially robust federated models.

Federated Learning

Reliable Adversarial Distillation with Unreliable Teachers

2 code implementations ICLR 2022 Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang

However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that teachers are also good at every adversarial data queried by students.

Adversarial Robustness

SimT: Handling Open-set Noise for Domain Adaptive Semantic Segmentation

1 code implementation CVPR 2022 Xiaoqing Guo, Jie Liu, Tongliang Liu, Yixuan Yuan

By exploiting computational geometry analysis and properties of segmentation, we design three complementary regularizers, i.e., volume regularization, anchor guidance, and convex guarantee, to approximate the true SimT.

Segmentation Semantic Segmentation

Out-of-distribution Detection with Implicit Outlier Transformation

1 code implementation 9 Mar 2023 Qizhou Wang, Junjie Ye, Feng Liu, Quanyu Dai, Marcus Kalander, Tongliang Liu, Jianye Hao, Bo Han

It leads to a min-max learning scheme -- searching to synthesize OOD data that leads to worst judgments and learning from such OOD data for uniform performance in OOD detection.

Out-of-Distribution Detection

BiCro: Noisy Correspondence Rectification for Multi-modality Data via Bi-directional Cross-modal Similarity Consistency

1 code implementation CVPR 2023 Shuo Yang, Zhaopan Xu, Kai Wang, Yang You, Hongxun Yao, Tongliang Liu, Min Xu

As one of the most fundamental techniques in multimodal learning, cross-modal matching aims to project various sensory modalities into a shared feature space.

Image-text matching Text Matching

Confident Anchor-Induced Multi-Source Free Domain Adaptation

1 code implementation NeurIPS 2021 Jiahua Dong, Zhen Fang, Anjin Liu, Gan Sun, Tongliang Liu

To address these challenges, we develop a novel Confident-Anchor-induced multi-source-free Domain Adaptation (CAiDA) model, which is a pioneer exploration of knowledge adaptation from multiple source domains to the unlabeled target domain without any source data, but with only pre-trained source models.

Pseudo Label Source-Free Domain Adaptation +1

Maximum Mean Discrepancy Test is Aware of Adversarial Attacks

2 code implementations 22 Oct 2020 Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama

However, it has been shown that the MMD test is unaware of adversarial attacks -- the MMD test failed to detect the discrepancy between natural and adversarial data.
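For reference, the vanilla statistic in question: a simple (biased) Gaussian-kernel MMD^2 in NumPy. The paper's adversarial-aware variant is more involved; this sketch only shows the baseline test quantity.

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    # Biased V-statistic estimate of MMD^2 with a Gaussian kernel.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    m, n = len(X), len(Y)
    return k(X, X).sum() / m**2 + k(Y, Y).sum() / n**2 - 2.0 * k(X, Y).sum() / (m * n)
```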

Adversarial Attack Detection

Bilateral Dependency Optimization: Defending Against Model-inversion Attacks

2 code implementations 11 Jun 2022 Xiong Peng, Feng Liu, Jingfeng Zhang, Long Lan, Junjie Ye, Tongliang Liu, Bo Han

To defend against MI attacks, previous work utilizes a unilateral dependency optimization strategy, i.e., minimizing the dependency between inputs (i.e., features) and outputs (i.e., labels) while training the classifier.

ERASE: Error-Resilient Representation Learning on Graphs for Label Noise Tolerance

1 code implementation 13 Dec 2023 Ling-Hao Chen, Yuanshuo Zhang, Taohua Huang, Liangcai Su, Zeyi Lin, Xi Xiao, Xiaobo Xia, Tongliang Liu

To tackle this challenge and enhance the robustness of deep learning models against label noise in graph-based tasks, we propose a method called ERASE (Error-Resilient representation learning on graphs for lAbel noiSe tolerancE).

Denoising Node Classification +1

Meta Discovery: Learning to Discover Novel Classes given Very Limited Data

1 code implementation ICLR 2022 Haoang Chi, Feng Liu, Bo Han, Wenjing Yang, Long Lan, Tongliang Liu, Gang Niu, Mingyuan Zhou, Masashi Sugiyama

In this paper, we demystify assumptions behind NCD and find that high-level semantic features should be shared among the seen and unseen classes.

Clustering Meta-Learning +1

Watermarking for Out-of-distribution Detection

1 code implementation 27 Oct 2022 Qizhou Wang, Feng Liu, Yonggang Zhang, Jing Zhang, Chen Gong, Tongliang Liu, Bo Han

Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.

Out-of-Distribution Detection

Instance-dependent Label-noise Learning under a Structural Causal Model

2 code implementations NeurIPS 2021 Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, Kun Zhang

In particular, we show that properly modeling the instances will contribute to the identifiability of the label noise transition matrix and thus lead to a better classifier.

Continuous Dropout

1 code implementation 28 Nov 2019 Xu Shen, Xinmei Tian, Tongliang Liu, Fang Xu, DaCheng Tao

On the one hand, continuous dropout is considerably closer to the activation characteristics of neurons in the human brain than traditional binary dropout.
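The core idea admits a very small sketch: replace the Bernoulli mask of binary dropout with a continuous random mask. The uniform distribution and its range here are illustrative assumptions.

```python
import torch

def continuous_dropout(x, low=0.0, high=2.0, training=True):
    if not training:
        return x
    # Continuous multiplicative mask with mean 1, instead of a 0/1 mask.
    mask = torch.empty_like(x).uniform_(low, high)
    return x * mask
```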

Robust Weight Perturbation for Adversarial Training

1 code implementation 30 May 2022 Chaojian Yu, Bo Han, Mingming Gong, Li Shen, Shiming Ge, Bo Du, Tongliang Liu

Based on these observations, we propose a robust perturbation strategy to constrain the extent of weight perturbation.

Classification

Machine Vision Therapy: Multimodal Large Language Models Can Enhance Visual Robustness via Denoising In-Context Learning

1 code implementation 5 Dec 2023 Zhuo Huang, Chang Liu, Yinpeng Dong, Hang Su, Shibao Zheng, Tongliang Liu

Concretely, by estimating a transition matrix that captures the probability of one class being confused with another, an instruction containing a correct exemplar and an erroneous one from the most probable noisy class can be constructed.

Denoising In-Context Learning

Exploiting Class Activation Value for Partial-Label Learning

3 code implementations ICLR 2022 Fei Zhang, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Tao Qin, Masashi Sugiyama

As the first contribution, we empirically show that the class activation map (CAM), a simple technique for discriminating the learning patterns of each class in images, is surprisingly better at making accurate predictions than the model itself on selecting the true label from candidate labels.

Multi-class Classification Partial Label Learning

Understanding Robust Overfitting of Adversarial Training and Beyond

1 code implementation 17 Jun 2022 Chaojian Yu, Bo Han, Li Shen, Jun Yu, Chen Gong, Mingming Gong, Tongliang Liu

Here, we explore the causes of robust overfitting by comparing the data distribution of \emph{non-overfit} (weak adversary) and \emph{overfitted} (strong adversary) adversarial training, and observe that the distribution of the adversarial data generated by the weak adversary mainly contains small-loss data.

Adversarial Robustness Data Ablation

Dual T: Reducing Estimation Error for Transition Matrix in Label-noise Learning

1 code implementation NeurIPS 2020 Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Jiankang Deng, Gang Niu, Masashi Sugiyama

By this intermediate class, the original transition matrix can then be factorized into the product of two easy-to-estimate transition matrices.
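Written out (in my notation, using the conditional-independence assumption that the intermediate class $Y'$ screens the noisy label $\bar{Y}$ off from the clean label $Y$): $P(\bar{Y}=j \mid Y=i) = \sum_{l} P(Y'=l \mid Y=i)\, P(\bar{Y}=j \mid Y'=l)$, i.e., $T = T^{(1)} T^{(2)}$ with $T^{(1)}_{il} = P(Y'=l \mid Y=i)$ and $T^{(2)}_{lj} = P(\bar{Y}=j \mid Y'=l)$ as the two easy-to-estimate factors.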

Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model

1 code implementation 14 Jan 2021 Qizhou Wang, Bo Han, Tongliang Liu, Gang Niu, Jian Yang, Chen Gong

The drastic increase of data quantity often brings the severe decrease of data quality, such as incorrect label annotations, which poses a great challenge for robustly training Deep Neural Networks (DNNs).

Provably End-to-end Label-Noise Learning without Anchor Points

1 code implementation 4 Feb 2021 Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama

In label-noise learning, the transition matrix plays a key role in building statistically consistent classifiers.

Learning with noisy labels

Probabilistic Margins for Instance Reweighting in Adversarial Training

1 code implementation NeurIPS 2021 Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama

Reweighting adversarial data during training has been recently shown to improve adversarial robustness, where data closer to the current decision boundaries are regarded as more critical and given larger weights.

Adversarial Robustness

Multi-Label Noise Transition Matrix Estimation with Label Correlations: Theory and Algorithm

1 code implementation 22 Sep 2023 Shikun Li, Xiaobo Xia, Hansong Zhang, Shiming Ge, Tongliang Liu

However, estimating multi-label noise transition matrices remains a challenging task, as most existing estimators in noisy multi-class learning rely on anchor points and accurate fitting of noisy class posteriors, which is hard to satisfy in noisy multi-label learning.

Multi-Label Learning

Orthogonal Deep Neural Networks

1 code implementation 15 May 2019 Kui Jia, Shuai Li, Yuxin Wen, Tongliang Liu, DaCheng Tao

To this end, we first prove that DNNs are of local isometry on data distributions of practical interest; by using a new covering of the sample space and introducing the local isometry property of DNNs into generalization analysis, we establish a new generalization error bound that is both scale- and range-sensitive to singular value spectrum of each of networks' weight matrices.

Image Classification

NoiLIn: Improving Adversarial Training and Correcting Stereotype of Noisy Labels

1 code implementation 31 May 2021 Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Gang Niu, Lizhen Cui, Masashi Sugiyama

First, we thoroughly investigate noisy labels (NLs) injection into AT's inner maximization and outer minimization, respectively, and obtain observations on when NL injection benefits AT.

Adversarial Robustness

FedDAG: Federated DAG Structure Learning

1 code implementation 7 Dec 2021 Erdun Gao, Junjia Chen, Li Shen, Tongliang Liu, Mingming Gong, Howard Bondell

To date, most directed acyclic graphs (DAGs) structure learning approaches require data to be stored in a central server.

Causal Discovery

Dynamics-Aware Loss for Learning with Label Noise

1 code implementation 21 Mar 2023 Xiu-Chuan Li, Xiaobo Xia, Fei Zhu, Tongliang Liu, Xu-Yao Zhang, Cheng-Lin Liu

Label noise poses a serious threat to deep neural networks (DNNs).

Robust Generalization against Photon-Limited Corruptions via Worst-Case Sharpness Minimization

2 code implementations CVPR 2023 Zhuo Huang, Miaoxi Zhu, Xiaobo Xia, Li Shen, Jun Yu, Chen Gong, Bo Han, Bo Du, Tongliang Liu

Experimentally, we simulate photon-limited corruptions using CIFAR10/100 and ImageNet30 datasets and show that SharpDRO exhibits a strong generalization ability against severe corruptions and exceeds well-known baseline methods with large performance gains.

Robust Angular Local Descriptor Learning

1 code implementation 21 Jan 2019 Yanwu Xu, Mingming Gong, Tongliang Liu, Kayhan Batmanghelich, Chaohui Wang

In recent years, the learned local descriptors have outperformed handcrafted ones by a large margin, due to the powerful deep convolutional neural network architectures such as L2-Net [1] and triplet based metric learning [2].

Metric Learning

TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation

1 code implementation NeurIPS 2021 Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, William K. Cheung, James T. Kwok

To this end, we propose a target orientated hypothesis adaptation network (TOHAN) to solve the FHA problem, where we generate highly-compatible unlabeled data (i.e., an intermediate domain) to help train a target-domain classifier.

Domain Adaptation

Revisiting Knowledge Distillation: An Inheritance and Exploration Framework

1 code implementation CVPR 2021 Zhen Huang, Xu Shen, Jun Xing, Tongliang Liu, Xinmei Tian, Houqiang Li, Bing Deng, Jianqiang Huang, Xian-Sheng Hua

The inheritance part is learned with a similarity loss to transfer the existing learned knowledge from the teacher model to the student model, while the exploration part is encouraged to learn representations different from the inherited ones with a dis-similarity loss.

Knowledge Distillation

Regularly Truncated M-estimators for Learning with Noisy Labels

1 code implementation 2 Sep 2023 Xiaobo Xia, Pengqian Lu, Chen Gong, Bo Han, Jun Yu, Tongliang Liu

However, such a procedure is arguably debatable on two grounds: (a) it does not consider the bad influence of noisy labels in selected small-loss examples; (b) it does not make good use of the discarded large-loss examples, which may be clean or have meaningful information for generalization.

Learning with noisy labels

Robust Training of Federated Models with Extremely Label Deficiency

2 code implementations 22 Feb 2024 Yonggang Zhang, Zhiqin Yang, Xinmei Tian, Nannan Wang, Tongliang Liu, Bo Han

Federated semi-supervised learning (FSSL) has emerged as a powerful paradigm for collaboratively training machine learning models using distributed data with label deficiency.

Correcting the Triplet Selection Bias for Triplet Loss

1 code implementation ECCV 2018 Baosheng Yu, Tongliang Liu, Mingming Gong, Changxing Ding, DaCheng Tao

Considering that the number of triplets grows cubically with the size of training data, triplet mining is thus indispensable for efficiently training with triplet loss.
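For orientation, the generic triplet loss being mined over (a standard PyTorch sketch, not the paper's bias-corrected selection scheme):

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Push the anchor-negative distance at least `margin` beyond anchor-positive.
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()
```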

Face Recognition Fine-Grained Image Classification +5

Modeling Adversarial Noise for Adversarial Training

1 code implementation 21 Sep 2021 Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu

Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defense against adversarial attacks.

Adversarial Defense

Enhancing One-Shot Federated Learning Through Data and Ensemble Co-Boosting

1 code implementation 23 Feb 2024 Rong Dai, Yonggang Zhang, Ang Li, Tongliang Liu, Xun Yang, Bo Han

These hard samples are then employed to promote the quality of the ensemble model by adjusting the ensembling weights for each client model.

Federated Learning

Trustable Co-label Learning from Multiple Noisy Annotators

1 code implementation 8 Mar 2022 Shikun Li, Tongliang Liu, Jiyong Tan, Dan Zeng, Shiming Ge

This raises the following important question: how can we effectively use a small amount of trusted data to facilitate robust classifier learning from multiple annotators?

Improving Adversarial Robustness via Mutual Information Estimation

1 code implementation 25 Jul 2022 Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Xiaoyu Wang, Yibing Zhan, Tongliang Liu

To alleviate this negative effect, in this paper, we investigate the dependence between outputs of the target model and input adversarial samples from the perspective of information theory, and propose an adversarial defense method.

Adversarial Defense Adversarial Robustness +1

Towards Lightweight Black-Box Attacks against Deep Neural Networks

1 code implementation 29 Sep 2022 Chenghao Sun, Yonggang Zhang, Wan Chaoqun, Qizhou Wang, Ya Li, Tongliang Liu, Bo Han, Xinmei Tian

As it is hard to mitigate the approximation error with few available samples, we propose Error TransFormer (ETF) for lightweight attacks.

Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability

1 code implementation 6 Jun 2023 Jianing Zhu, Hengzhuang Li, Jiangchao Yao, Tongliang Liu, Jianliang Xu, Bo Han

Based on such insights, we propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.

Out-of-Distribution Detection

E2HQV: High-Quality Video Generation from Event Camera via Theory-Inspired Model-Aided Deep Learning

1 code implementation 16 Jan 2024 Qiang Qu, Yiran Shen, Xiaoming Chen, Yuk Ying Chung, Tongliang Liu

In this work, we propose \textbf{E2HQV}, a novel E2V paradigm designed to produce high-quality video frames from events.

Video Generation

Adaptive Morphological Reconstruction for Seeded Image Segmentation

1 code implementation 8 Apr 2019 Tao Lei, Xiaohong Jia, Tongliang Liu, Shigang Liu, Hongying Meng, Asoke K. Nandi

However, MR might mistakenly filter meaningful seeds that are required for generating accurate segmentation and it is also sensitive to the scale because a single-scale structuring element is employed.

Image Segmentation Segmentation +1

Learning Diverse-Structured Networks for Adversarial Robustness

1 code implementation 3 Feb 2021 Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama

In adversarial training (AT), the main focus has been the objective and optimizer while the model has been less studied, so that the models being used are still those classic ones in standard training (ST).

Adversarial Robustness

Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning

1 code implementation CVPR 2023 Zixuan Hu, Li Shen, Zhenyi Wang, Tongliang Liu, Chun Yuan, DaCheng Tao

The goal of data-free meta-learning is to learn useful prior knowledge from a collection of pre-trained models without accessing their training data.

Meta-Learning

Detecting Out-of-distribution Data through In-distribution Class Prior

1 code implementation ICML 2023 Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, Bo Han

In this paper, we show that this assumption makes the above methods incapable when the ID model is trained with class-imbalanced data. Fortunately, by analyzing the causal relations between ID/OOD classes and features, we identify several common scenarios where the OOD-to-ID probabilities should be the ID-class-prior distribution and propose two strategies to modify existing inference-time detection methods: 1) replace the uniform distribution with the ID-class-prior distribution if they explicitly use the uniform distribution; 2) otherwise, reweight their scores according to the similarity between the ID-class-prior distribution and the softmax outputs of the pre-trained model.
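Strategy 2) from the excerpt can be sketched as a reweighting of an existing score; the inner-product similarity used here is an assumption for illustration:

```python
import numpy as np

def prior_reweighted_score(base_score, softmax_probs, class_prior):
    # Scale an existing OOD score by how well the model's softmax output
    # agrees with the ID class-prior distribution (assumed similarity measure).
    return base_score * float(np.dot(softmax_probs, class_prior))
```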

Out-of-Distribution Detection Out of Distribution (OOD) Detection

Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources

1 code implementation NeurIPS 2023 Haotian Zheng, Qizhou Wang, Zhen Fang, Xiaobo Xia, Feng Liu, Tongliang Liu, Bo Han

To this end, we suggest that generated data (with mistaken OOD generation) can be used to devise an auxiliary OOD detection task to facilitate real OOD detection.

Out-of-Distribution Detection Out of Distribution (OOD) Detection +1

Federated Causal Discovery from Heterogeneous Data

1 code implementation 20 Feb 2024 Loka Li, Ignavier Ng, Gongxu Luo, Biwei Huang, Guangyi Chen, Tongliang Liu, Bin Gu, Kun Zhang

This discrepancy has motivated the development of federated causal discovery (FCD) approaches.

Causal Discovery

MissDAG: Causal Discovery in the Presence of Missing Data with Continuous Additive Noise Models

1 code implementation 27 May 2022 Erdun Gao, Ignavier Ng, Mingming Gong, Li Shen, Wei Huang, Tongliang Liu, Kun Zhang, Howard Bondell

In this paper, we develop a general method, which we call MissDAG, to perform causal discovery from data with incomplete observations.

Causal Discovery Imputation +1

Transferring Annotator- and Instance-dependent Transition Matrix for Learning from Crowds

1 code implementation 5 Jun 2023 Shikun Li, Xiaobo Xia, Jiankang Deng, Shiming Ge, Tongliang Liu

In real-world crowd-sourcing scenarios, noise transition matrices are both annotator- and instance-dependent.

Transfer Learning

PNT-Edge: Towards Robust Edge Detection with Noisy Labels by Learning Pixel-level Noise Transitions

1 code implementation 26 Jul 2023 Wenjie Xuan, Shanshan Zhao, Yu Yao, Juhua Liu, Tongliang Liu, Yixin Chen, Bo Du, DaCheng Tao

Exploiting the estimated noise transitions, our model, named PNT-Edge, is able to fit the prediction to clean labels.

Edge Detection

BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise Learning

1 code implementation 28 May 2023 Jingfeng Zhang, Bo Song, Haohan Wang, Bo Han, Tongliang Liu, Lei Liu, Masashi Sugiyama

To address the challenge posed by BadLabel, we further propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels again distinguishable.

Systematic Investigation of Sparse Perturbed Sharpness-Aware Minimization Optimizer

1 code implementation 30 Jun 2023 Peng Mi, Li Shen, Tianhe Ren, Yiyi Zhou, Tianshuo Xu, Xiaoshuai Sun, Tongliang Liu, Rongrong Ji, DaCheng Tao

Sharpness-Aware Minimization (SAM) is a popular solution that smooths the loss landscape by minimizing the maximized change of training loss when adding a perturbation to the weight.
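That min-max step is easy to state in code: ascend to the worst-case weights within an L2 ball of radius rho, take the gradient there, then descend from the original weights. A hedged PyTorch sketch of the vanilla dense SAM step (not the sparse-perturbation variant studied in the paper):

```python
import torch

def sam_step(model, loss_fn, optimizer, rho=0.05):
    optimizer.zero_grad()
    loss_fn().backward()                              # gradient at w
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    norm = torch.norm(torch.stack([g.norm() for g in grads])) + 1e-12
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / norm
            p.add_(e)                                 # climb to w + e
            eps.append((p, e))
    optimizer.zero_grad()
    loss_fn().backward()                              # gradient at w + e
    with torch.no_grad():
        for p, e in eps:
            p.sub_(e)                                 # restore w
    optimizer.step()                                  # descend with the SAM gradient
```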

Noise-robust Graph Learning by Estimating and Leveraging Pairwise Interactions

1 code implementation 14 Jun 2021 Xuefeng Du, Tian Bian, Yu Rong, Bo Han, Tongliang Liu, Tingyang Xu, Wenbing Huang, Yixuan Li, Junzhou Huang

This paper bridges the gap by proposing a pairwise framework for noisy node classification on graphs, which relies on the PI as a primary learning proxy in addition to the pointwise learning from the noisy node class labels.

Contrastive Learning Graph Learning +2

Harnessing Out-Of-Distribution Examples via Augmenting Content and Style

1 code implementation 7 Jul 2022 Zhuo Huang, Xiaobo Xia, Li Shen, Bo Han, Mingming Gong, Chen Gong, Tongliang Liu

Machine learning models are vulnerable to Out-Of-Distribution (OOD) examples, and such a problem has drawn much attention.

Data Augmentation Disentanglement +3

Class-Dependent Label-Noise Learning with Cycle-Consistency Regularization Feature Space

1 code implementation NeurIPS 2022 De Cheng, Yixiong Ning, Nannan Wang, Xinbo Gao, Heng Yang, Yuxuan Du, Bo Han, Tongliang Liu

We show that the cycle-consistency regularization helps to minimize the volume of the transition matrix T indirectly without exploiting the estimated noisy class posterior, which could further encourage the estimated transition matrix T to converge to its optimal solution.

Exploring Model Dynamics for Accumulative Poisoning Discovery

1 code implementation 6 Jun 2023 Jianing Zhu, Xiawei Guo, Jiangchao Yao, Chao Du, Li He, Shuo Yuan, Tongliang Liu, Liang Wang, Bo Han

In this paper, we dive into the perspective of model dynamics and propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.

Memorization

On Exploring Node-feature and Graph-structure Diversities for Node Drop Graph Pooling

1 code implementation 22 Jun 2023 Chuang Liu, Yibing Zhan, Baosheng Yu, Liu Liu, Bo Du, Wenbin Hu, Tongliang Liu

A pooling operation is essential for effective graph-level representation learning, where node drop pooling has become a mainstream graph pooling technology.

Graph Classification Representation Learning

Deep Blur Mapping: Exploiting High-Level Semantics by Deep Neural Networks

no code implementations 5 Dec 2016 Kede Ma, Huan Fu, Tongliang Liu, Zhou Wang, DaCheng Tao

The human visual system excels at detecting local blur of visual images, but the underlying mechanism is not well understood.

Vocal Bursts Intensity Prediction

An Information-Theoretic View for Deep Learning

no code implementations 24 Apr 2018 Jingwei Zhang, Tongliang Liu, DaCheng Tao

This upper bound shows that as the number of convolutional and pooling layers $L$ increases in the network, the expected generalization error will decrease exponentially to zero.

Speech Recognition

On the Rates of Convergence from Surrogate Risk Minimizers to the Bayes Optimal Classifier

no code implementations 11 Feb 2018 Jingwei Zhang, Tongliang Liu, DaCheng Tao

We study the rates of convergence from empirical surrogate risk minimizers to the Bayes optimal classifier.

Learning with Bounded Instance- and Label-dependent Label Noise

no code implementations ICML 2020 Jiacheng Cheng, Tongliang Liu, Kotagiri Ramamohanarao, DaCheng Tao

Inspired by the idea of learning with distilled examples, we then propose a learning algorithm with theoretical guarantees for its robustness to BILN.

Algorithmic stability and hypothesis complexity

no code implementations ICML 2017 Tongliang Liu, Gábor Lugosi, Gergely Neu, DaCheng Tao

The bounds are based on martingale inequalities in the Banach space to which the hypotheses belong.

Transfer Learning with Label Noise

no code implementations 31 Jul 2017 Xiyu Yu, Tongliang Liu, Mingming Gong, Kun Zhang, Kayhan Batmanghelich, DaCheng Tao

However, when learning this invariant knowledge, existing methods assume that the labels in source domain are uncontaminated, while in reality, we often have access to source data with noisy labels.

Denoising Transfer Learning

Dimensionality-Dependent Generalization Bounds for $k$-Dimensional Coding Schemes

no code implementations 3 Jan 2016 Tongliang Liu, DaCheng Tao, Dong Xu

Can we obtain dimensionality-dependent generalization bounds for $k$-dimensional coding schemes that are tighter than dimensionality-independent bounds when data is in a finite-dimensional feature space?

Clustering Dictionary Learning +2

Elastic Net Hypergraph Learning for Image Clustering and Semi-supervised Classification

no code implementations 3 Mar 2016 Qingshan Liu, Yubao Sun, Cantian Wang, Tongliang Liu, DaCheng Tao

In the second step, hypergraph is used to represent the high order relationships between each datum and its prominent samples by regarding them as a hyperedge.

Clustering General Classification +3

Local Rademacher Complexity for Multi-label Learning

no code implementations 26 Oct 2014 Chang Xu, Tongliang Liu, DaCheng Tao, Chao Xu

We analyze the local Rademacher complexity of empirical risk minimization (ERM)-based multi-label learning algorithms, and in doing so propose a new algorithm for multi-label learning.

Multi-Label Learning

Bayesian Quantum Circuit

no code implementations 27 May 2018 Yuxuan Du, Tongliang Liu, DaCheng Tao

Parameterized quantum circuits (PQCs), as one of the most promising schemes to realize quantum machine learning algorithms on near-term quantum computers, have been designed to solve machine learning tasks with quantum advantages.

Quantum Physics

Instance-Dependent PU Learning by Bayesian Optimal Relabeling

no code implementations 7 Aug 2018 Fengxiang He, Tongliang Liu, Geoffrey I. Webb, DaCheng Tao

Specifically, by treating the unlabelled data as noisy negative examples, we could automatically label a group of positive and negative examples whose labels are identical to the ones assigned by a Bayesian optimal classifier, with a consistency guarantee.

A Grover-search Based Quantum Learning Scheme for Classification

no code implementations 17 Sep 2018 Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, DaCheng Tao

Here we devise a Grover-search based quantum learning scheme (GBLS) to address the above two issues.

Classification Ensemble Learning

The Expressive Power of Parameterized Quantum Circuits

no code implementations 29 Oct 2018 Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, DaCheng Tao

Parameterized quantum circuits (PQCs) have been broadly used as a hybrid quantum-classical machine learning scheme to accomplish generative tasks.

Tensor Networks

An Optimal Transport View on Generalization

no code implementations 8 Nov 2018 Jingwei Zhang, Tongliang Liu, DaCheng Tao

We derive upper bounds on the generalization error of learning algorithms based on their \emph{algorithmic transport cost}: the expected Wasserstein distance between the output hypothesis and the output hypothesis conditioned on an input example.

Learning Theory

An Efficient and Provable Approach for Mixture Proportion Estimation Using Linear Independence Assumption

no code implementations CVPR 2018 Xiyu Yu, Tongliang Liu, Mingming Gong, Kayhan Batmanghelich, DaCheng Tao

In this paper, we study the mixture proportion estimation (MPE) problem in a new setting: given samples from the mixture and the component distributions, we identify the proportions of the components in the mixture distribution.

Deep Domain Generalization via Conditional Invariant Adversarial Networks

no code implementations ECCV 2018 Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, DaCheng Tao

Under the assumption that the conditional distribution $P(Y|X)$ remains unchanged across domains, earlier approaches to domain generalization learned the invariant representation $T(X)$ by minimizing the discrepancy of the marginal distribution $P(T(X))$.

Domain Generalization Representation Learning

On Compressing Deep Models by Low Rank and Sparse Decomposition

no code implementations CVPR 2017 Xiyu Yu, Tongliang Liu, Xinchao Wang, DaCheng Tao

Deep compression refers to removing the redundancy of parameters and feature maps for deep learning models.

Why ResNet Works? Residuals Generalize

no code implementations 2 Apr 2019 Fengxiang He, Tongliang Liu, DaCheng Tao

This paper studies the influence of residual connections on the hypothesis complexity of the neural network in terms of the covering number of its hypothesis space.

On Better Exploring and Exploiting Task Relationships in Multi-Task Learning: Joint Model and Feature Learning

no code implementations 3 Apr 2019 Ya Li, Xinmei Tian, Tongliang Liu, DaCheng Tao

The objective of our proposed method is to transform the features from different tasks into a common feature space in which the tasks are closely related and the shared parameters can be better optimized.

Multi-Task Learning

Generative-Discriminative Complementary Learning

no code implementations 2 Apr 2019 Yanwu Xu, Mingming Gong, Junxiang Chen, Tongliang Liu, Kun Zhang, Kayhan Batmanghelich

The success of such approaches heavily depends on high-quality labeled instances, which are not easy to obtain, especially as the number of candidate classes increases.

A Regularization Approach for Instance-Based Superset Label Learning

no code implementations 5 Apr 2019 Chen Gong, Tongliang Liu, Yuanyan Tang, Jian Yang, Jie Yang, DaCheng Tao

As a result, the intrinsic constraints among different candidate labels are deployed, and the disambiguated labels generated by RegISL are more discriminative and accurate than those output by existing instance-based algorithms.

Transferring Knowledge Fragments for Learning Distance Metric from A Heterogeneous Domain

no code implementations 8 Apr 2019 Yong Luo, Yonggang Wen, Tongliang Liu, DaCheng Tao

Some existing heterogeneous transfer learning (HTL) approaches usually learn the target distance metric by transforming the samples of the source and target domains into a common subspace.

Metric Learning Transfer Learning

Multi-View Matrix Completion for Multi-Label Image Classification

no code implementations 8 Apr 2019 Yong Luo, Tongliang Liu, DaCheng Tao, Chao Xu

Therefore, we propose to weightedly combine the MC outputs of different views, and present the multi-view matrix completion (MVMC) framework for transductive multi-label image classification.

Classification General Classification +5

Decomposition-Based Transfer Distance Metric Learning for Image Classification

no code implementations 8 Apr 2019 Yong Luo, Tongliang Liu, DaCheng Tao, Chao Xu

In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics.

Classification General Classification +3

Fast Supervised Discrete Hashing

no code implementations 7 Apr 2019 Jie Gui, Tongliang Liu, Zhenan Sun, DaCheng Tao, Tieniu Tan

Rather than adopting this method, FSDH uses a very simple yet effective regression of the class labels of training examples to the corresponding hash code to accelerate the algorithm.

regression

dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs

no code implementations 13 Apr 2019 Kede Ma, Wentao Liu, Tongliang Liu, Zhou Wang, DaCheng Tao

One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training.

Blind Image Quality Assessment Learning-To-Rank

DistillHash: Unsupervised Deep Hashing by Distilling Data Pairs

no code implementations CVPR 2019 Erkun Yang, Tongliang Liu, Cheng Deng, Wei Liu, DaCheng Tao

To address this issue, we propose a novel deep unsupervised hashing model, dubbed DistillHash, which can learn a distilled data set consisting of data pairs that carry confident similarity signals.

Deep Hashing Semantic Similarity +1

Truncated Cauchy Non-negative Matrix Factorization

no code implementations 2 Jun 2019 Naiyang Guan, Tongliang Liu, Yangmuzi Zhang, DaCheng Tao, Larry S. Davis

Non-negative matrix factorization (NMF) minimizes the Euclidean distance between the data matrix and its low rank approximation, and it fails when applied to corrupted data because the loss function is sensitive to outliers.

Clustering Image Clustering

A Quantum-inspired Algorithm for General Minimum Conical Hull Problems

no code implementations 16 Jul 2019 Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, DaCheng Tao

In this paper, we propose a sublinear classical algorithm to tackle general minimum conical hull problems when the input is stored in a sample-based low-overhead data structure.

Towards Digital Retina in Smart Cities: A Model Generation, Utilization and Communication Paradigm

1 code implementation 31 Jul 2019 Yihang Lou, Ling-Yu Duan, Yong Luo, Ziqian Chen, Tongliang Liu, Shiqi Wang, Wen Gao

The digital retina in smart cities is to select what the City Eye tells the City Brain, and convert the acquired visual data from front-end visual sensors to features in an intelligent sensing manner.

Control Batch Size and Learning Rate to Generalize Well: Theoretical and Empirical Evidence

no code implementations NeurIPS 2019 Fengxiang He, Tongliang Liu, DaCheng Tao

Specifically, we prove a PAC-Bayes generalization bound for neural networks trained by SGD, which has a positive correlation with the ratio of batch size to learning rate.

Where is the Bottleneck of Adversarial Learning with Unlabeled Data?

no code implementations 20 Nov 2019 Jingfeng Zhang, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama

Deep neural networks (DNNs) are incredibly brittle due to adversarial examples.

A Shape Transformation-based Dataset Augmentation Framework for Pedestrian Detection

no code implementations 15 Dec 2019 Zhe Chen, Wanli Ouyang, Tongliang Liu, DaCheng Tao

Alternatively, to access much more natural-looking pedestrians, we propose to augment pedestrian detection datasets by transforming real pedestrians from the same dataset into different shapes.

Pedestrian Detection

Confidence Scores Make Instance-dependent Label-noise Learning Possible

no code implementations 11 Jan 2020 Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama

We find that, with the help of confidence scores, the transition distribution of each instance can be approximately estimated.

Learning with noisy labels

Rethinking Class-Prior Estimation for Positive-Unlabeled Learning

no code implementations ICLR 2022 Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, DaCheng Tao

Hitherto, the distributional-assumption-free CPE methods rely on a critical assumption that the support of the positive data distribution cannot be contained in the support of the negative data distribution.

valid

Multi-Class Classification from Noisy-Similarity-Labeled Data

no code implementations 16 Feb 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

We further estimate the transition matrix from only noisy data and build a novel learning system to learn a classifier which can assign noise-free class labels for instances.

Classification General Classification +1

Quantum noise protects quantum classifiers against adversaries

no code implementations 20 Mar 2020 Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, DaCheng Tao, Nana Liu

This robustness property is intimately connected with an important security concept called differential privacy which can be extended to quantum differential privacy.

Classification General Classification

Repulsive Mixture Models of Exponential Family PCA for Clustering

no code implementations 7 Apr 2020 Maoying Qiao, Tongliang Liu, Jun Yu, Wei Bian, DaCheng Tao

To alleviate this problem, in this paper, a repulsiveness-encouraging prior is introduced among mixing components and a diversified EPCA mixture (DEPCAM) model is developed in the Bayesian framework.

Clustering

Class2Simi: A Noise Reduction Perspective on Learning with Noisy Labels

no code implementations 14 Jun 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

To give an affirmative answer, in this paper, we propose a framework called Class2Simi: it transforms data points with noisy class labels to data pairs with noisy similarity labels, where a similarity label denotes whether a pair shares the class label or not.
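The transformation itself is one line of logic: a pair receives similarity label 1 exactly when the two (noisy) class labels agree. A tiny NumPy sketch:

```python
import numpy as np

def class_to_simi(labels):
    labels = np.asarray(labels)
    i, j = np.triu_indices(len(labels), k=1)            # all unordered pairs
    return np.stack([i, j], axis=1), (labels[i] == labels[j]).astype(int)

pairs, simi = class_to_simi([0, 1, 0, 2])
# simi == [0, 1, 0, 0, 0, 0]: only the pair (0, 2) shares a class label
```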

Contrastive Learning Learning with noisy labels +1

Quantum Differentially Private Sparse Regression Learning

no code implementations 23 Jul 2020 Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, Shan You, DaCheng Tao

The eligibility of various advanced quantum algorithms will be questioned if they can not guarantee privacy.

BIG-bench Machine Learning regression

LTF: A Label Transformation Framework for Correcting Label Shift

no code implementations ICML 2020 Jiaxian Guo, Mingming Gong, Tongliang Liu, Kun Zhang, DaCheng Tao

Distribution shift is a major obstacle to the deployment of current deep learning models on real-world problems.

Label-Noise Robust Domain Adaptation

no code implementations ICML 2020 Xiyu Yu, Tongliang Liu, Mingming Gong, Kun Zhang, Kayhan Batmanghelich, DaCheng Tao

Domain adaptation aims to correct the classifiers when faced with distribution shift between source (training) and target (test) domains.

Denoising Domain Adaptation

Dual-Path Distillation: A Unified Framework to Improve Black-Box Attacks

no code implementations ICML 2020 Yonggang Zhang, Ya Li, Tongliang Liu, Xinmei Tian

To obtain sufficient knowledge for crafting adversarial examples, previous methods query the target model with inputs that are perturbed with different searching directions.

ADD-Defense: Towards Defending Widespread Adversarial Examples via Perturbation-Invariant Representation

no code implementations 1 Jan 2021 Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Xinbo Gao

Motivated by this observation, we propose a defense framework ADD-Defense, which extracts the invariant information called \textit{perturbation-invariant representation} (PIR) to defend against widespread adversarial examples.

Improving robustness of softmax cross-entropy loss via inference information

no code implementations 1 Jan 2021 Bingbing Song, Wei He, Renyang Liu, Shui Yu, Ruxin Wang, Mingming Gong, Tongliang Liu, Wei Zhou

Several state-of-the-art methods start from improving the inter-class separability of training samples by modifying loss functions; we argue that such approaches ignore the adversarial samples and thus achieve only limited robustness to adversarial attacks.

Me-Momentum: Extracting Hard Confident Examples from Noisily Labeled Data

no code implementations ICCV 2021 Yingbin Bai, Tongliang Liu

To extract hard confident examples that contain non-simple patterns and are entangled with the inaccurately labeled examples, we borrow the idea of momentum from physics.

Learning with noisy labels Memorization

Score-based Causal Discovery from Heterogeneous Data

no code implementations 1 Jan 2021 Chenwei Ding, Biwei Huang, Mingming Gong, Kun Zhang, Tongliang Liu, DaCheng Tao

Most algorithms in causal discovery consider a single domain with a fixed distribution.

Causal Discovery

Extended T: Learning with Mixed Closed-set and Open-set Noisy Labels

no code implementations 2 Dec 2020 Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Jiankang Deng, Jiatong Li, Yinian Mao

The traditional transition matrix is limited to model closed-set label noise, where noisy training data has true class labels within the noisy label set.

COVID-MTL: Multitask Learning with Shift3D and Random-weighted Loss for Automated Diagnosis and Severity Assessment of COVID-19

no code implementations 10 Dec 2020 Guoqing Bao, Huai Chen, Tongliang Liu, Guanzhong Gong, Yong Yin, Lisheng Wang, Xiuying Wang

In this paper, we present an end-to-end multitask learning (MTL) framework (COVID-MTL) that is capable of automated and simultaneous detection (against both radiology and NAT) and severity assessment of COVID-19.

COVID-19 Diagnosis Transfer Learning

Understanding the Interaction of Adversarial Training with Noisy Labels

no code implementations 6 Feb 2021 Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama

A recent adversarial training (AT) study showed that the number of projected gradient descent (PGD) steps to successfully attack a point (i.e., find an adversarial example in its proximity) is an effective measure of the robustness of this point.

Improving Medical Image Classification with Label Noise Using Dual-uncertainty Estimation

no code implementations 28 Feb 2021 Lie Ju, Xin Wang, Lin Wang, Dwarikanath Mahapatra, Xin Zhao, Mehrtash Harandi, Tom Drummond, Tongliang Liu, ZongYuan Ge

In this paper, we systematically discuss and define the two common types of label noise in medical images: disagreement label noise from inconsistent expert opinions and single-target label noise from wrong diagnosis records.

Benchmarking General Classification +3

Learning with Group Noise

no code implementations 17 Mar 2021 Qizhou Wang, Jiangchao Yao, Chen Gong, Tongliang Liu, Mingming Gong, Hongxia Yang, Bo Han

Most of the previous approaches in this area focus on the pairwise relation (causal or correlational relationship) with noise, such as learning with noisy labels.

Learning with noisy labels Relation

Removing Adversarial Noise in Class Activation Feature Space

no code implementations ICCV 2021 Dawei Zhou, Nannan Wang, Chunlei Peng, Xinbo Gao, Xiaoyu Wang, Jun Yu, Tongliang Liu

Then, we train a denoising model to minimize the distances between the adversarial examples and the natural examples in the class activation feature space.

Adversarial Robustness Denoising

Relational Subsets Knowledge Distillation for Long-tailed Retinal Diseases Recognition

no code implementations 22 Apr 2021 Lie Ju, Xin Wang, Lin Wang, Tongliang Liu, Xin Zhao, Tom Drummond, Dwarikanath Mahapatra, ZongYuan Ge

For example, there are an estimated more than 40 different kinds of retinal diseases with variable morbidity; however, more than 30 of these conditions are very rare in global patient cohorts, which results in a typical long-tailed learning problem for deep learning-based screening models.

Knowledge Distillation

Estimating Instance-dependent Bayes-label Transition Matrix using a Deep Neural Network

no code implementations 27 May 2021 Shuo Yang, Erkun Yang, Bo Han, Yang Liu, Min Xu, Gang Niu, Tongliang Liu

Motivated by that classifiers mostly output Bayes optimal labels for prediction, in this paper, we study to directly model the transition from Bayes optimal labels to noisy labels (i. e., Bayes-label transition matrix (BLTM)) and learn a classifier to predict Bayes optimal labels.

Instance Correction for Learning with Open-set Noisy Labels

no code implementations 1 Jun 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

Lots of approaches, e.g., loss correction and label correction, cannot handle such open-set noisy labels well, since they need training data and test data to share the same label space, which does not hold for learning with open-set noisy labels.

Sample Selection with Uncertainty of Losses for Learning with Noisy Labels

no code implementations NeurIPS 2021 Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama

In this way, we also give large-loss but less selected data a try; then, we can better distinguish between the cases (a) and (b) by seeing if the losses effectively decrease with the uncertainty after the try.

Learning with noisy labels

Towards Defending against Adversarial Examples via Attack-Invariant Features

no code implementations 9 Jun 2021 Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Chunlei Peng, Xinbo Gao

However, given the continuously evolving attacks, models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.

Adversarial Robustness

Improving White-box Robustness of Pre-processing Defenses via Joint Adversarial Training

no code implementations 10 Jun 2021 Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Jun Yu, Xiaoyu Wang, Tongliang Liu

However, pre-processing methods may suffer from the robustness degradation effect, in which the defense reduces rather than improves the adversarial robustness of a target model in a white-box setting.

Adversarial Defense Adversarial Robustness

KRADA: Known-region-aware Domain Alignment for Open-set Domain Adaptation in Semantic Segmentation

1 code implementation 11 Jun 2021 Chenhong Zhou, Feng Liu, Chen Gong, Rongfei Zeng, Tongliang Liu, William K. Cheung, Bo Han

However, in an open world, the unlabeled test images probably contain unknown categories and have different distributions from the labeled images.

Domain Adaptation Segmentation +1

Kernel Mean Estimation by Marginalized Corrupted Distributions

no code implementations 10 Jul 2021 Xiaobo Xia, Shuo Shan, Mingming Gong, Nannan Wang, Fei Gao, Haikun Wei, Tongliang Liu

Estimating the kernel mean in a reproducing kernel Hilbert space is a critical component in many kernel learning algorithms.
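
For reference, the plain empirical estimator that such work improves upon is the sample average of kernel sections, mu_hat = (1/n) * sum_i k(x_i, .); a small sketch with an RBF kernel (the marginalized-corruption estimator itself is more involved):

import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Pairwise RBF kernel matrix between the rows of X and Y."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def empirical_kernel_mean(X, grid, gamma=1.0):
    """Evaluate the empirical kernel mean embedding at the points in `grid`."""
    return rbf_kernel(grid, X, gamma).mean(axis=1)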

Exploring Set Similarity for Dense Self-supervised Representation Learning

no code implementations CVPR 2022 Zhaoqing Wang, Qiang Li, Guoxin Zhang, Pengfei Wan, Wen Zheng, Nannan Wang, Mingming Gong, Tongliang Liu

By considering the spatial correspondence, dense self-supervised representation learning has achieved superior performance on various dense prediction tasks.

Instance Segmentation Keypoint Detection +5

Can Label-Noise Transition Matrix Help to Improve Sample Selection and Label Correction?

no code implementations29 Sep 2021 Yu Yao, Xuefeng Li, Tongliang Liu, Alan Blair, Mingming Gong, Bo Han, Gang Niu, Masashi Sugiyama

Existing methods for learning with noisy labels can be generally divided into two categories: (1) sample selection and label correction based on the memorization effect of neural networks; (2) loss correction with the transition matrix.

Learning with noisy labels Memorization
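
The transition-matrix branch (category (2)) is commonly instantiated as forward loss correction; a standard sketch, with T assumed known and row-stochastic:

import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, T):
    """T: (K, K) matrix with T[i, j] = P(noisy = j | clean = i)."""
    clean_posterior = F.softmax(logits, dim=1)
    noisy_posterior = clean_posterior @ T        # model the noisy-label distribution
    return F.nll_loss(torch.log(noisy_posterior + 1e-8), noisy_labels)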

Understanding Generalized Label Smoothing when Learning with Noisy Labels

no code implementations29 Sep 2021 Jiaheng Wei, Hangyu Liu, Tongliang Liu, Gang Niu, Yang Liu

It was shown that LS serves as a regularizer for training data with hard labels and therefore improves the generalization of the model.

Learning with noisy labels
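
A sketch of the generalized label smoothing objective being analysed, where the smoothing rate r is also allowed to be negative (sharpening rather than smoothing the target):

import torch
import torch.nn.functional as F

def generalized_ls_loss(logits, labels, r=0.2):
    num_classes = logits.size(1)
    one_hot = F.one_hot(labels, num_classes).float()
    target = (1 - r) * one_hot + r / num_classes   # r < 0 gives negative smoothing
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()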

Modeling Adversarial Noise for Adversarial Defense

no code implementations29 Sep 2021 Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu

Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defense against adversarial attacks.

Adversarial Defense

$\alpha$-Weighted Federated Adversarial Training

no code implementations29 Sep 2021 Jianing Zhu, Jiangchao Yao, Tongliang Liu, Kunyang Jia, Jingren Zhou, Bo Han, Hongxia Yang

Federated Adversarial Training (FAT) helps us address data privacy and governance issues while maintaining model robustness to adversarial attacks.

Adversarial Attack Federated Learning

Unleash the Potential of Adaptation Models via Dynamic Domain Labels

no code implementations29 Sep 2021 Xin Jin, Tianyu He, Xu Shen, Songhua Wu, Tongliang Liu, Xinchao Wang, Jianqiang Huang, Zhibo Chen, Xian-Sheng Hua

In this paper, we propose an embarrassingly simple yet highly effective adversarial domain adaptation (ADA) method for training models for alignment.

Domain Adaptation Memorization

PI-GNN: Towards Robust Semi-Supervised Node Classification against Noisy Labels

no code implementations29 Sep 2021 Xuefeng Du, Tian Bian, Yu Rong, Bo Han, Tongliang Liu, Tingyang Xu, Wenbing Huang, Junzhou Huang

Semi-supervised node classification on graphs is a fundamental problem in graph mining that uses a small set of labeled nodes and many unlabeled nodes for training, so that its performance is quite sensitive to the quality of the node labels.

Graph Mining Node Classification

Co-variance: Tackling Noisy Labels with Sample Selection by Emphasizing High-variance Examples

no code implementations29 Sep 2021 Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu

The sample selection approach is popular in learning with noisy labels, which tends to select potentially clean data out of noisy data for robust training.

Learning with noisy labels

MANDERA: Malicious Node Detection in Federated Learning via Ranking

no code implementations22 Oct 2021 Wanchuang Zhu, Benjamin Zi Hao Zhao, Simon Luo, Tongliang Liu, Ke Deng

Although we know that benign gradients and Byzantine-attacked gradients are distributed differently, detecting the malicious gradients is challenging because (1) the gradient is high-dimensional and each dimension has its own distribution, and (2) the benign gradients and the attacked gradients are always mixed (so two-sample test methods cannot be applied directly).

Federated Learning
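
A toy, heavily simplified version of ranking-based detection, assuming stacked client gradients: replacing each coordinate with its rank across clients sidesteps per-dimension distributional differences, and clients with extreme rank statistics are flagged. The thresholding below is a naive stand-in for the paper's actual procedure.

import numpy as np
from scipy.stats import rankdata

def flag_suspicious_clients(grads, z=2.0):
    """grads: (num_clients, dim) stacked client gradient vectors."""
    ranks = rankdata(grads, axis=0)          # rank of each client per dimension
    mean_rank = ranks.mean(axis=1)           # one summary statistic per client
    mu, sigma = mean_rank.mean(), mean_rank.std()
    return np.where(np.abs(mean_rank - mu) > z * sigma)[0]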

Meta Clustering Learning for Large-scale Unsupervised Person Re-identification

no code implementations19 Nov 2021 Xin Jin, Tianyu He, Xu Shen, Tongliang Liu, Xinchao Wang, Jianqiang Huang, Zhibo Chen, Xian-Sheng Hua

Unsupervised Person Re-identification (U-ReID) with pseudo labeling has recently reached performance competitive with fully-supervised ReID methods based on modern clustering algorithms.

Clustering Unsupervised Person Re-Identification

Class2Simi: A New Perspective on Learning with Label Noise

no code implementations28 Sep 2020 Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu

It is worthwhile to perform the transformation: We prove that the noise rate for the noisy similarity labels is lower than that of the noisy class labels, because similarity labels themselves are robust to noise.
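
The class-to-similarity transformation itself is essentially one line: within a mini-batch, a pair is labeled similar iff its two (possibly noisy) class labels agree.

import torch

def class_to_similarity(labels):
    """labels: (B,) class labels -> (B, B) binary pairwise similarity matrix."""
    return (labels[:, None] == labels[None, :]).float()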

Transfer Learning in Conversational Analysis through Reusing Preprocessing Data as Supervisors

no code implementations2 Dec 2021 Joshua Yee Kim, Tongliang Liu, Kalina Yacef

Conversational analysis systems are trained using noisy human labels and often require heavy preprocessing during multi-modal feature extraction.

Feature Engineering Multi-Task Learning

Pluralistic Image Completion with Probabilistic Mixture-of-Experts

no code implementations18 May 2022 Xiaobo Xia, Wenhao Yang, Jie Ren, Yewen Li, Yibing Zhan, Bo Han, Tongliang Liu

Second, the constraints for diversity are designed to be task-agnostic, which prevents the constraints from working well.

Counterfactual Fairness with Partially Known Causal Graph

no code implementations27 May 2022 Aoqi Zuo, Susan Wei, Tongliang Liu, Bo Han, Kun Zhang, Mingming Gong

Interestingly, we find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided: the sensitive attributes do not have ancestors in the causal graph.

BIG-bench Machine Learning Causal Inference +2

Mutual Quantization for Cross-Modal Search With Noisy Labels

no code implementations CVPR 2022 Erkun Yang, Dongren Yao, Tongliang Liu, Cheng Deng

More specifically, we propose a proxy-based contrastive (PC) loss to mitigate the gap between different modalities and train networks for different modalities jointly with small-loss samples that are selected with the PC loss and a mutual quantization loss.

Quantization
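
One plausible form of a proxy-based contrastive (PC) loss, assuming class proxies shared across modalities; the paper's exact formulation may differ.

import torch
import torch.nn.functional as F

def proxy_contrastive_loss(embeddings, proxies, labels, tau=0.1):
    """embeddings: (B, D); proxies: (K, D) one per class; labels: (B,)."""
    z = F.normalize(embeddings, dim=1)
    p = F.normalize(proxies, dim=1)
    logits = z @ p.t() / tau   # cosine similarity of each embedding to each proxy
    return F.cross_entropy(logits, labels)   # pull to own proxy, push from others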

Recent Advances for Quantum Neural Networks in Generative Learning

no code implementations7 Jun 2022 Jinkai Tian, Xiaoyu Sun, Yuxuan Du, Shanshan Zhao, Qing Liu, Kaining Zhang, Wei Yi, Wanrong Huang, Chaoyue Wang, Xingyao Wu, Min-Hsiu Hsieh, Tongliang Liu, Wenjing Yang, DaCheng Tao

Due to the intrinsic probabilistic nature of quantum mechanics, it is reasonable to postulate that quantum generative learning models (QGLMs) may surpass their classical counterparts.

BIG-bench Machine Learning Quantum Machine Learning

Instance-Dependent Label-Noise Learning with Manifold-Regularized Transition Matrix Estimation

no code implementations CVPR 2022 De Cheng, Tongliang Liu, Yixiong Ning, Nannan Wang, Bo Han, Gang Niu, Xinbo Gao, Masashi Sugiyama

In label-noise learning, estimating the transition matrix has attracted more and more attention as the matrix plays an important role in building statistically consistent classifiers.

Multi-scale Cooperative Multimodal Transformers for Multimodal Sentiment Analysis in Videos

no code implementations16 Jun 2022 Lianyang Ma, Yu Yao, Tao Liang, Tongliang Liu

On the whole, the "multi-scale" mechanism is capable of exploiting the different levels of semantic information of each modality which are used for fine-grained crossmodal interactions.

Multimodal Sentiment Analysis

Symmetric Pruning in Quantum Neural Networks

no code implementations30 Aug 2022 Xinbiao Wang, Junyu Liu, Tongliang Liu, Yong Luo, Yuxuan Du, DaCheng Tao

To fill this knowledge gap, here we propose the effective quantum neural tangent kernel (EQNTK) and connect this concept with over-parameterization theory to quantify the convergence of QNNs towards the global optima.

Strength-Adaptive Adversarial Training

no code implementations4 Oct 2022 Chaojian Yu, Dawei Zhou, Li Shen, Jun Yu, Bo Han, Mingming Gong, Nannan Wang, Tongliang Liu

Firstly, applying a pre-specified perturbation budget to networks of various model capacities will yield divergent degrees of robustness disparity between natural and robust accuracies, which deviates from a robust network's desideratum.

Adversarial Robustness Scheduling
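
A toy picture of strength adaptation, assuming natural and robust training accuracies are monitored: widen or shrink the perturbation budget eps according to the current robustness disparity instead of fixing it in advance. The update rule below is illustrative only.

def adapt_epsilon(eps, nat_acc, rob_acc, target_gap=0.1, step=1/255,
                  eps_min=1/255, eps_max=16/255):
    """Adjust the PGD budget toward a target natural/robust accuracy gap."""
    gap = nat_acc - rob_acc
    if gap < target_gap:
        eps = min(eps + step, eps_max)   # too little disparity: attack harder
    else:
        eps = max(eps - step, eps_min)   # too much disparity: back off
    return eps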

Identifiability and Asymptotics in Learning Homogeneous Linear ODE Systems from Discrete Observations

no code implementations12 Oct 2022 Yuanyuan Wang, Wei Huang, Mingming Gong, Xi Geng, Tongliang Liu, Kun Zhang, DaCheng Tao

This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
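
A standard aliasing note on why equally-spaced sampling makes this nontrivial (not the paper's exact condition): with sampling interval $\Delta t$, the error-free observations of $\dot{x}(t) = A x(t)$ satisfy

\[
  x(t_{k+1}) = e^{A \Delta t}\, x(t_k), \qquad t_k = k \Delta t,
\]

so recovering $A$ amounts to taking a matrix logarithm of $e^{A \Delta t}$, which is non-unique when eigenvalues of $A$ have imaginary parts differing by integer multiples of $2\pi/\Delta t$ (the sampled dynamics alias).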

Exploit CAM by itself: Complementary Learning System for Weakly Supervised Semantic Segmentation

no code implementations4 Mar 2023 Jiren Mai, Fei Zhang, Junjie Ye, Marcus Kalander, Xian Zhang, Wankou Yang, Tongliang Liu, Bo Han

Motivated by this simple but effective learning pattern, we propose a General-Specific Learning Mechanism (GSLM) to explicitly drive a coarse-grained CAM to a fine-grained pseudo mask.

General Knowledge Hippocampus +2

Fairness Improves Learning from Noisily Labeled Long-Tailed Data

no code implementations22 Mar 2023 Jiaheng Wei, Zhaowei Zhu, Gang Niu, Tongliang Liu, Sijia Liu, Masashi Sugiyama, Yang Liu

Both long-tailed and noisily labeled data frequently appear in real-world applications and impose significant challenges for learning.

Fairness

Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering

no code implementations ICCV 2023 Dongting Hu, Zhenkai Zhang, Tingbo Hou, Tongliang Liu, Huan Fu, Mingming Gong

Our approach includes a density Mip-VoG for scene geometry and a feature Mip-VoG with a small MLP for view-dependent color.

Neural Rendering

Learning Differentially Private Probabilistic Models for Privacy-Preserving Image Generation

no code implementations18 May 2023 Bochao Liu, Shiming Ge, Pengju Wang, Liansheng Zhuang, Tongliang Liu

In particular, we first train a model to fit the distribution of the training data and make it satisfy differential privacy by performing a randomized response mechanism during the training process.

Image Generation Privacy Preserving
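
For a single binary attribute, the classic randomized response mechanism referred to above reports the true value with probability e^eps / (1 + e^eps) and flips it otherwise, which satisfies eps-differential privacy; a minimal sketch:

import numpy as np

def randomized_response(bits, eps, rng=None):
    """bits: array of 0/1 values; returns a privatized copy."""
    if rng is None:
        rng = np.random.default_rng()
    p_truth = np.exp(eps) / (1.0 + np.exp(eps))   # probability of telling the truth
    keep = rng.random(len(bits)) < p_truth
    return np.where(keep, bits, 1 - bits)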

Advancing Counterfactual Inference through Nonlinear Quantile Regression

no code implementations9 Jun 2023 Shaoan Xie, Biwei Huang, Bin Gu, Tongliang Liu, Kun Zhang

Traditional counterfactual inference, under Pearl's counterfactual framework, typically depends on having access to or estimating a structural causal model.

counterfactual Counterfactual Inference +2
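
The pinball (quantile) loss underlying nonlinear quantile regression: minimizing it over f makes f(x) an estimate of the tau-th conditional quantile of Y given X. A minimal sketch:

import torch

def pinball_loss(pred, target, tau=0.5):
    """Pinball loss for the tau-th quantile; tau = 0.5 recovers the median."""
    diff = target - pred
    return torch.mean(torch.maximum(tau * diff, (tau - 1) * diff))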

Evolving Semantic Prototype Improves Generative Zero-Shot Learning

no code implementations12 Jun 2023 Shiming Chen, Wenjin Hou, Ziming Hong, Xiaohan Ding, Yibing Song, Xinge You, Tongliang Liu, Kun Zhang

After alignment, synthesized sample features from unseen classes are closer to the real sample features and help DSP improve existing generative ZSL methods by 8.5%, 8.0%, and 9.7% on the standard CUB, SUN, and AWA2 datasets; this significant performance improvement indicates that the evolving semantic prototype explores a virgin field in ZSL.

Zero-Shot Learning

Making Binary Classification from Multiple Unlabeled Datasets Almost Free of Supervision

no code implementations12 Jun 2023 Yuhao Wu, Xiaobo Xia, Jun Yu, Bo Han, Gang Niu, Masashi Sugiyama, Tongliang Liu

Training a classifier by exploiting a huge amount of supervised data is expensive, or even prohibitive, in situations where the labeling cost is high.

Binary Classification Pseudo Label

A Universal Unbiased Method for Classification from Aggregate Observations

no code implementations20 Jun 2023 Zixi Wei, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Xiaofeng Zhu, Heng Tao Shen

This motivates the study on classification from aggregate observations (CFAO), where the supervision is provided to groups of instances, instead of individual instances.

Classification Multiple Instance Learning

Why do CNNs excel at feature extraction? A mathematical explanation

no code implementations3 Jul 2023 Vinoth Nandakumar, Arush Tagade, Tongliang Liu

Over the past decade, deep learning has revolutionized the field of computer vision, with convolutional neural network models proving to be very effective on image classification benchmarks.

Classification Image Classification

Unleashing the Potential of Regularization Strategies in Learning with Noisy Labels

no code implementations11 Jul 2023 Hui Kang, Sheng Liu, Huaxi Huang, Jun Yu, Bo Han, Dadong Wang, Tongliang Liu

In recent years, research on learning with noisy labels has focused on devising novel algorithms that can achieve robustness to noisy training labels while generalizing to clean data.

Learning with noisy labels

Diversity-enhancing Generative Network for Few-shot Hypothesis Adaptation

no code implementations12 Jul 2023 Ruijiang Dong, Feng Liu, Haoang Chi, Tongliang Liu, Mingming Gong, Gang Niu, Masashi Sugiyama, Bo Han

In this paper, we propose a diversity-enhancing generative network (DEG-Net) for the FHA problem, which can generate diverse unlabeled data with the help of a kernel independence measure: the Hilbert-Schmidt independence criterion (HSIC).
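
For reference, the (biased) empirical HSIC commonly used as a kernel independence measure is HSIC(X, Z) = (n-1)^{-2} tr(K H L H), with H the centering matrix; a minimal sketch:

import torch

def hsic(K, L):
    """K, L: (n, n) kernel matrices on the two variables being compared."""
    n = K.size(0)
    H = torch.eye(n) - torch.ones(n, n) / n   # centering matrix
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2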

ShadowNet for Data-Centric Quantum System Learning

no code implementations22 Aug 2023 Yuxuan Du, Yibo Yang, Tongliang Liu, Zhouchen Lin, Bernard Ghanem, DaCheng Tao

Understanding the dynamics of large quantum systems is hindered by the curse of dimensionality.

Quantum State Tomography

Late Stopping: Avoiding Confidently Learning from Mislabeled Examples

no code implementations ICCV 2023 Suqin Yuan, Lei Feng, Tongliang Liu

Sample selection is a prevalent method in learning with noisy labels, where small-loss data are typically considered as correctly labeled data.

Learning with noisy labels

Continual Learning From a Stream of APIs

no code implementations31 Aug 2023 Enneng Yang, Zhenyi Wang, Li Shen, Nan Yin, Tongliang Liu, Guibing Guo, Xingwei Wang, DaCheng Tao

Next, we train the CL model by minimizing the gap between the responses of the CL model and the black-box API on synthetic data, to transfer the API's knowledge to the CL model.

Continual Learning

Gradient constrained sharpness-aware prompt learning for vision-language models

no code implementations14 Sep 2023 Liangchen Liu, Nannan Wang, Dawei Zhou, Xinbo Gao, Decheng Liu, Xi Yang, Tongliang Liu

This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs), i.e., improving performance on unseen classes while maintaining performance on seen classes.

Holistic Label Correction for Noisy Multi-Label Classification

no code implementations ICCV 2023 Xiaobo Xia, Jiankang Deng, Wei Bao, Yuxuan Du, Bo Han, Shiguang Shan, Tongliang Liu

The issues are that we do not understand why label dependence is helpful in this problem, or how to learn and utilize label dependence using only training data with noisy multiple labels.

Classification Memorization +1

Combating Noisy Labels with Sample Selection by Mining High-Discrepancy Examples

no code implementations ICCV 2023 Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu

As selected data have high discrepancies in probabilities, the divergence of two networks can be maintained by training on such data.

Learning with noisy labels
