no code implementations • ICML 2020 • Jiaxian Guo, Mingming Gong, Tongliang Liu, Kun Zhang, DaCheng Tao
Distribution shift is a major obstacle to the deployment of current deep learning models on real-world problems.
no code implementations • ICML 2020 • Xiyu Yu, Tongliang Liu, Mingming Gong, Kun Zhang, Kayhan Batmanghelich, DaCheng Tao
Domain adaptation aims to correct the classifiers when faced with distribution shift between source (training) and target (test) domains.
no code implementations • ICML 2020 • Yonggang Zhang, Ya Li, Tongliang Liu, Xinmei Tian
To obtain sufficient knowledge for crafting adversarial examples, previous methods query the target model with inputs that are perturbed with different searching directions.
2 code implementations • ECCV 2020 • Jiankang Deng, Jia Guo, Tongliang Liu, Mingming Gong, Stefanos Zafeiriou
In this paper, we relax the intra-class constraint of ArcFace to improve the robustness to label noise.
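A minimal PyTorch sketch of a sub-center-style relaxation of the ArcFace margin softmax is given below: each class keeps K sub-centers and only the nearest one carries the margin, so mislabeled samples can fall into a non-dominant sub-center. The K-per-class weight layout and all names are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def subcenter_margin_logits(embeddings, subcenters, labels, K=3, margin=0.5, scale=64.0):
    # subcenters: (num_classes * K, dim) -- K sub-centers per class (assumed layout)
    cos = F.linear(F.normalize(embeddings), F.normalize(subcenters))   # (B, C*K) cosines
    cos = cos.view(cos.size(0), -1, K).max(dim=2).values               # keep the nearest sub-center per class
    theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
    target = F.one_hot(labels, cos.size(1)).bool()
    cos = torch.where(target, torch.cos(theta + margin), cos)          # additive angular margin on the labeled class
    return scale * cos                                                 # feed to F.cross_entropy(logits, labels)
```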
no code implementations • 18 May 2022 • Xiaobo Xia, Wenhao Yang, Jie Ren, Yewen Li, Yibing Zhan, Bo Han, Tongliang Liu
Second, the constraints for diversity are designed to be task-agnostic, which prevents them from working well.
1 code implementation • 15 Apr 2022 • Chuang Liu, Yibing Zhan, Chang Li, Bo Du, Jia Wu, Wenbin Hu, Tongliang Liu, DaCheng Tao
Graph neural networks have emerged as a leading architecture for many graph-level tasks such as graph classification and graph generation with a notable improvement.
1 code implementation • 29 Mar 2022 • Xiaoqing Guo, Jie Liu, Tongliang Liu, Yixuan Yuan
By exploiting computational geometry analysis and properties of segmentation, we design three complementary regularizers, i.e., volume regularization, anchor guidance, and convex guarantee, to approximate the true SimT.
1 code implementation • 28 Mar 2022 • Xiang An, Jiankang Deng, Jia Guo, Ziyong Feng, Xuhan Zhu, Jing Yang, Tongliang Liu
In each iteration, positive class centers and a random subset of negative class centers are selected to compute the margin-based softmax loss.
Ranked #1 on Face Verification on IJB-C (TAR @ FAR=1e-4 metric)
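The center-sampling step described above might look like the sketch below; the sampling ratio, label remapping, and names are simplified assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def sample_class_centers(class_centers, labels, sample_ratio=0.1):
    """Keep every positive class center plus a random subset of negative centers,
    and remap labels into the sampled index space (a sketch of the idea above)."""
    num_classes = class_centers.size(0)
    positives = labels.unique()
    num_sampled = max(int(sample_ratio * num_classes), positives.numel())
    perm = torch.randperm(num_classes)
    negatives = perm[~torch.isin(perm, positives)][: num_sampled - positives.numel()]
    sampled = torch.cat([positives, negatives])
    remap = torch.full((num_classes,), -1, dtype=torch.long)
    remap[sampled] = torch.arange(sampled.numel())
    return class_centers[sampled], remap[labels]

# The margin-based softmax loss is then computed over the sampled centers only:
#   logits = scale * F.linear(F.normalize(emb), F.normalize(sampled_centers))
#   loss = F.cross_entropy(logits, remapped_labels)
```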
1 code implementation • 8 Mar 2022 • Shikun Li, Xiaobo Xia, Shiming Ge, Tongliang Liu
In the selection process, by measuring the agreement between learned representations and given labels, we first identify confident examples that are exploited to build confident pairs.
Ranked #8 on Image Classification on mini WebVision 1.0
1 code implementation • 8 Mar 2022 • Shikun Li, Tongliang Liu, Jiyong Tan, Dan Zeng, Shiming Ge
This raises the following important question: how can we effectively use a small amount of trusted data to facilitate robust classifier learning from multiple annotators?
1 code implementation • ICLR 2022 • Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, James Cheng
Recently, Graph Injection Attack (GIA) has emerged as a practical attack scenario on Graph Neural Networks (GNNs), where the adversary merely injects a few malicious nodes, instead of modifying existing nodes or edges as in Graph Modification Attack (GMA).
no code implementations • 11 Feb 2022 • Yongqiang Chen, Yonggang Zhang, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, James Cheng
Despite recent developments in using the invariance principle from causality to enable out-of-distribution (OOD) generalization on Euclidean data, e.g., images, studies on graph data are limited.
no code implementations • 30 Jan 2022 • Yexiong Lin, Yu Yao, Yuxuan Du, Jun Yu, Bo Han, Mingming Gong, Tongliang Liu
Algorithms which minimize the averaged loss have been widely designed for dealing with noisy labels.
1 code implementation • 7 Dec 2021 • Erdun Gao, Junjia Chen, Li Shen, Tongliang Liu, Mingming Gong, Howard Bondell
In this paper, under the additive noise model assumption on the data, we take the first step in developing a gradient-based learning framework named DAG-Shared Federated Causal Discovery (DS-FCD), which can learn the causal graph without directly touching local data and naturally handles data heterogeneity.
no code implementations • 2 Dec 2021 • Joshua Yee Kim, Tongliang Liu, Kalina Yacef
Conversational analysis systems are trained using noisy human labels and often require heavy preprocessing during multi-modal feature extraction.
1 code implementation • NeurIPS 2021 • Jiahua Dong, Zhen Fang, Anjin Liu, Gan Sun, Tongliang Liu
To address these challenges, we develop a novel Confident-Anchor-induced multi-source-free Domain Adaptation (CAiDA) model, a pioneering exploration of knowledge adaptation from multiple source domains to the unlabeled target domain without any source data, using only pre-trained source models.
no code implementations • 30 Nov 2021 • Zhaoqing Wang, Yu Lu, Qiang Li, Xunqiang Tao, Yandong Guo, Mingming Gong, Tongliang Liu
In addition, we present text-to-pixel contrastive learning to explicitly enforce the text features to be similar to the related pixel-level features and dissimilar to irrelevant ones.
Ranked #1 on Referring Expression Segmentation on RefCoCo val
no code implementations • 19 Nov 2021 • Xin Jin, Tianyu He, Zhiheng Yin, Xu Shen, Tongliang Liu, Xinchao Wang, Jianqiang Huang, Xian-Sheng Hua, Zhibo Chen
Unsupervised Person Re-identification (U-ReID) with pseudo labeling recently reaches a competitive performance compared to fully-supervised ReID methods based on modern clustering algorithms.
1 code implementation • ICLR 2022 • Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, Yang Liu
These observations require us to rethink the treatment of noisy labels, and we hope the availability of these two datasets will facilitate the development and evaluation of future solutions for learning with noisy labels.
no code implementations • 29 Sep 2021 • Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu
The sample selection approach is popular in learning with noisy labels, which tends to select potentially clean data out of noisy data for robust training.
no code implementations • 29 Sep 2021 • Xuefeng Du, Tian Bian, Yu Rong, Bo Han, Tongliang Liu, Tingyang Xu, Wenbing Huang, Junzhou Huang
Semi-supervised node classification on graphs is a fundamental problem in graph mining that uses a small set of labeled nodes and many unlabeled nodes for training, so that its performance is quite sensitive to the quality of the node labels.
no code implementations • 29 Sep 2021 • Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu
Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defense against adversarial attacks.
no code implementations • 29 Sep 2021 • Jianing Zhu, Jiangchao Yao, Tongliang Liu, Kunyang Jia, Jingren Zhou, Bo Han, Hongxia Yang
Federated Adversarial Training (FAT) helps us address data privacy and governance issues while maintaining model robustness to adversarial attacks.
no code implementations • 29 Sep 2021 • Jiaheng Wei, Hangyu Liu, Tongliang Liu, Gang Niu, Yang Liu
It was shown that LS serves as a regularizer for training data with hard labels and therefore improves the generalization of the model.
Ranked #10 on Learning with noisy labels on CIFAR-10N-Worst
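For reference, a minimal sketch of smoothing with a signed rate: a positive eps is standard label smoothing (LS), while the negative rate studied alongside it (NLS) sharpens rather than softens the target. Names are illustrative.

```python
import torch.nn.functional as F

def smoothed_cross_entropy(logits, labels, eps=0.1):
    # Target: (1 - eps) * one_hot + eps / K. Use eps > 0 for LS, eps < 0 for NLS.
    K = logits.size(1)
    target = (1.0 - eps) * F.one_hot(labels, K).float() + eps / K
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```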
no code implementations • ICLR 2022 • Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, Gang Niu, Mingyuan Zhou, Masashi Sugiyama
In this paper, we demystify assumptions behind L2DNC and find that high-level semantic features should be shared among the seen and unseen classes.
no code implementations • ICLR 2022 • Fei Zhang, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Tao Qin, Masashi Sugiyama
As the first contribution, we empirically show that the class activation map (CAM), a simple technique for discriminating the learning patterns of each class in images, is surprisingly better at making accurate predictions than the model itself on selecting the true label from candidate labels.
no code implementations • 29 Sep 2021 • Chuang Liu, Yibing Zhan, Baosheng Yu, Liu Liu, Bo Du, Wenbin Hu, Tongliang Liu
Graph pooling is essential in learning effective graph-level representations.
no code implementations • 29 Sep 2021 • Xin Jin, Tianyu He, Xu Shen, Songhua Wu, Tongliang Liu, Xinchao Wang, Jianqiang Huang, Zhibo Chen, Xian-Sheng Hua
In this paper, we propose an embarrassingly simple yet highly effective adversarial domain adaptation (ADA) method for effectively training models for alignment.
no code implementations • 29 Sep 2021 • Yu Yao, Xuefeng Li, Tongliang Liu, Alan Blair, Mingming Gong, Bo Han, Gang Niu, Masashi Sugiyama
Existing methods for learning with noisy labels can be generally divided into two categories: (1) sample selection and label correction based on the memorization effect of neural networks; (2) loss correction with the transition matrix.
no code implementations • 29 Sep 2021 • Chaojian Yu, Bo Han, Mingming Gong, Li Shen, Shiming Ge, Bo Du, Tongliang Liu
In this paper, we propose such a criterion, namely Loss Stationary Condition (LSC) for constrained perturbation.
no code implementations • ICLR 2022 • Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, DaCheng Tao
Hitherto, the distributional-assumption-free CPE methods rely on a critical assumption that the support of the positive data distribution cannot be contained in the support of the negative data distribution.
no code implementations • 21 Sep 2021 • Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu
Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defense against adversarial attacks.
1 code implementation • NeurIPS 2021 • Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, Kun Zhang
In particular, we show that properly modeling the instances will contribute to the identifiability of the label noise transition matrix and thus lead to a better classifier.
no code implementations • 19 Jul 2021 • Zhaoqing Wang, Qiang Li, Guoxin Zhang, Pengfei Wan, Wen Zheng, Nannan Wang, Mingming Gong, Tongliang Liu
By considering the spatial correspondence, dense self-supervised representation learning has achieved superior performance on various dense prediction tasks.
no code implementations • 10 Jul 2021 • Xiaobo Xia, Shuo Shan, Mingming Gong, Nannan Wang, Fei Gao, Haikun Wei, Tongliang Liu
Estimating the kernel mean in a reproducing kernel Hilbert space is a critical component in many kernel learning algorithms.
1 code implementation • CVPR 2021 • Zhen Huang, Xu Shen, Jun Xing, Tongliang Liu, Xinmei Tian, Houqiang Li, Bing Deng, Jianqiang Huang, Xian-Sheng Hua
The inheritance part is learned with a similarity loss to transfer the existing learned knowledge from the teacher model to the student model, while the exploration part is encouraged to learn representations different from the inherited ones with a dis-similarity loss.
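A rough sketch of the two losses, under the assumption that the student feature is split in half and the teacher feature matches the half width; the split and the loss choices are illustrative, not the paper's exact formulation.

```python
import torch.nn.functional as F

def inherit_explore_loss(student_feat, teacher_feat, alpha=1.0):
    # Inheritance half: pulled toward the teacher (similarity loss).
    # Exploration half: pushed away from the teacher (dis-similarity loss).
    inherit, explore = student_feat.chunk(2, dim=1)
    sim_loss = 1.0 - F.cosine_similarity(inherit, teacher_feat, dim=1).mean()
    dis_loss = F.cosine_similarity(explore, teacher_feat, dim=1).mean()
    return sim_loss + alpha * dis_loss
```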
1 code implementation • NeurIPS 2021 • Yingbin Bai, Erkun Yang, Bo Han, Yanhua Yang, Jiatong Li, Yinian Mao, Gang Niu, Tongliang Liu
Instead of the early stopping, which trains a whole DNN all at once, we initially train former DNN layers by optimizing the DNN with a relatively large number of epochs.
1 code implementation • NeurIPS 2021 • Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama
Reweighting adversarial data during training has been recently shown to improve adversarial robustness, where data closer to the current decision boundaries are regarded as more critical and given larger weights.
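One way to realize such reweighting is to use the logit margin as a proxy for distance to the decision boundary, as in this hedged sketch (not the paper's exact weighting scheme):

```python
import torch

def boundary_closeness_weights(logits, labels, temperature=1.0):
    # Margin = true-class logit minus the best other logit; a smaller margin
    # means the point is closer to the boundary, hence gets a larger weight.
    true_logit = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    others = logits.clone()
    others.scatter_(1, labels.unsqueeze(1), float('-inf'))
    margin = true_logit - others.max(dim=1).values
    w = torch.softmax(-margin / temperature, dim=0) * len(margin)  # mean weight ~= 1
    return w.detach()
```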
no code implementations • 14 Jun 2021 • Xuefeng Du, Tian Bian, Yu Rong, Bo Han, Tongliang Liu, Tingyang Xu, Wenbing Huang, Junzhou Huang
Semi-supervised node classification, as a fundamental problem in graph learning, leverages unlabeled nodes along with a small portion of labeled nodes for training.
no code implementations • ICLR 2022 • Yonggang Zhang, Mingming Gong, Tongliang Liu, Gang Niu, Xinmei Tian, Bo Han, Bernhard Schölkopf, Kun Zhang
The spurious correlation implies that the adversarial distribution is constructed via making the statistical conditional association between style information and labels drastically different from that in natural distribution.
no code implementations • NeurIPS 2021 • Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, William K. Cheung, James T. Kwok
To this end, we propose a target orientated hypothesis adaptation network (TOHAN) to solve the FHA problem, where we generate highly-compatible unlabeled data (i.e., an intermediate domain) to help train a target-domain classifier.
no code implementations • 11 Jun 2021 • Chenhong Zhou, Feng Liu, Chen Gong, Tongliang Liu, Bo Han, William Cheung
However, in an open world, the unlabeled test images probably contain unknown categories and have different distributions from the labeled images.
no code implementations • 10 Jun 2021 • Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Jun Yu, Xiaoyu Wang, Tongliang Liu
However, pre-processing methods may suffer from the robustness degradation effect, in which the defense reduces rather than improves the adversarial robustness of a target model in a white-box setting.
no code implementations • 9 Jun 2021 • Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Chunlei Peng, Xinbo Gao
However, given the continuously evolving attacks, models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.
1 code implementation • ICLR 2022 • Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang
However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that teachers are also good at every adversarial data queried by students.
no code implementations • 8 Jun 2021 • Jiaheng Wei, Hangyu Liu, Tongliang Liu, Gang Niu, Masashi Sugiyama, Yang Liu
We provide understandings for the properties of LS and NLS when learning with noisy labels.
Ranked #4 on Learning with noisy labels on CIFAR-10N-Aggregate
no code implementations • NeurIPS 2021 • Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama
In this way, we also give large-loss but less selected data a try; then, we can better distinguish between the cases (a) and (b) by seeing if the losses effectively decrease with the uncertainty after the try.
Ranked #20 on Image Classification on mini WebVision 1.0
no code implementations • 1 Jun 2021 • Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama
Many approaches, e.g., loss correction and label correction, cannot handle such open-set noisy labels well, since they require training data and test data to share the same label space, which does not hold for learning with open-set noisy labels.
no code implementations • 31 May 2021 • Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Gang Niu, Lizhen Cui, Masashi Sugiyama
Adversarial training (AT) based on minimax optimization is a popular learning style that enhances the model's adversarial robustness.
no code implementations • 27 May 2021 • Shuo Yang, Erkun Yang, Bo Han, Yang Liu, Min Xu, Gang Niu, Tongliang Liu
Traditionally, the transition from clean distribution to noisy distribution (i.e., clean label transition matrix) has been widely exploited to learn a clean label classifier by employing the noisy data.
no code implementations • 22 Apr 2021 • Lie Ju, Xin Wang, Lin Wang, Tongliang Liu, Xin Zhao, Tom Drummond, Dwarikanath Mahapatra, ZongYuan Ge
For example, there are an estimated more than 40 different kinds of retinal diseases with variable morbidity; however, more than 30 of these conditions are very rare across global patient cohorts, which results in a typical long-tailed learning problem for deep learning-based screening models.
no code implementations • ICCV 2021 • Dawei Zhou, Nannan Wang, Chunlei Peng, Xinbo Gao, Xiaoyu Wang, Jun Yu, Tongliang Liu
Then, we train a denoising model to minimize the distances between the adversarial examples and the natural examples in the class activation feature space.
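Concretely, the objective might resemble the sketch below, taking class activation features to be penultimate features weighted by the classifier weights of the true class; shapes and names are assumptions for illustration.

```python
def class_activation_feature_loss(feature_extractor, classifier_weights, denoiser,
                                  x_adv, x_nat, labels):
    # features: (B, C); classifier_weights: (num_classes, C) -- assumed shapes
    f_adv = feature_extractor(denoiser(x_adv))
    f_nat = feature_extractor(x_nat)
    w = classifier_weights[labels]                 # per-example true-class weights
    return ((w * (f_adv - f_nat)) ** 2).sum(dim=1).mean()
```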
no code implementations • 17 Mar 2021 • Qizhou Wang, Jiangchao Yao, Chen Gong, Tongliang Liu, Mingming Gong, Hongxia Yang, Bo Han
Most of the previous approaches in this area focus on the pairwise relation (causal or correlational relationship) with noise, such as learning with noisy labels.
no code implementations • 1 Mar 2021 • Shijun Cai, Seok-Hee Hong, Jialiang Shen, Tongliang Liu
In this paper, we present the first machine learning approach for predicting human preference for graph layouts.
no code implementations • 28 Feb 2021 • Lie Ju, Xin Wang, Lin Wang, Dwarikanath Mahapatra, Xin Zhao, Mehrtash Harandi, Tom Drummond, Tongliang Liu, ZongYuan Ge
In this paper, we systematically discuss and define the two common types of label noise in medical images: disagreement label noise from inconsistent expert opinions and single-target label noise from wrong diagnosis records.
no code implementations • 8 Feb 2021 • Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, Gang Niu, Mingyuan Zhou, Masashi Sugiyama
In learning to discover novel classes (L2DNC), we are given labeled data from seen classes and unlabeled data from unseen classes, and we train clustering models for the unseen classes.
no code implementations • 6 Feb 2021 • Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama
A recent adversarial training (AT) study showed that the number of projected gradient descent (PGD) steps to successfully attack a point (i.e., find an adversarial example in its proximity) is an effective measure of the robustness of this point.
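A minimal sketch of this measure, counting PGD steps until the prediction flips; the hyperparameters and the [0, 1] clipping are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_steps_to_flip(model, x, y, eps=8/255, alpha=2/255, max_steps=10):
    x_adv = x.clone().detach()
    steps = torch.full((x.size(0),), max_steps)        # never-flipped points keep the max
    done = torch.zeros(x.size(0), dtype=torch.bool)
    for t in range(1, max_steps + 1):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)[0]
        with torch.no_grad():
            x_adv = (x_adv + alpha * grad.sign()).clamp(x - eps, x + eps).clamp(0, 1)
            newly = (model(x_adv).argmax(dim=1) != y) & ~done
            steps[newly] = t
            done |= newly
    return steps   # fewer steps => geometrically less robust point
```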
1 code implementation • 4 Feb 2021 • Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama
In label-noise learning, the transition matrix plays a key role in building statistically consistent classifiers.
Ranked #10 on Learning with noisy labels on CIFAR-100N
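For context, the standard way a transition matrix yields a statistically consistent classifier is forward loss correction, sketched below; the paper's contribution, learning T end-to-end without anchor points, is not shown here.

```python
import torch
import torch.nn.functional as F

def forward_corrected_loss(logits, noisy_labels, T):
    # T[i, j] = P(noisy = j | clean = i); rows sum to 1.
    clean_posterior = F.softmax(logits, dim=1)   # model's estimate of P(clean | x)
    noisy_posterior = clean_posterior @ T        # implied P(noisy | x)
    return F.nll_loss(torch.log(noisy_posterior + 1e-12), noisy_labels)
```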
1 code implementation • 3 Feb 2021 • Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama
In adversarial training (AT), the main focus has been the objective and optimizer while the model has been less studied, so that the models being used are still those classic ones in standard training (ST).
1 code implementation • 14 Jan 2021 • Qizhou Wang, Bo Han, Tongliang Liu, Gang Niu, Jian Yang, Chen Gong
The drastic increase of data quantity often brings the severe decrease of data quality, such as incorrect label annotations, which poses a great challenge for robustly training Deep Neural Networks (DNNs).
no code implementations • 1 Jan 2021 • Chenwei Ding, Biwei Huang, Mingming Gong, Kun Zhang, Tongliang Liu, DaCheng Tao
Most algorithms in causal discovery consider a single domain with a fixed distribution.
no code implementations • 1 Jan 2021 • Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Xinbo Gao
Motivated by this observation, we propose a defense framework ADD-Defense, which extracts the invariant information called perturbation-invariant representation (PIR) to defend against widespread adversarial examples.
no code implementations • ICLR 2021 • Xiaobo Xia, Tongliang Liu, Bo Han, Chen Gong, Nannan Wang, ZongYuan Ge, Yi Chang
The early stopping method can therefore be exploited for learning with noisy labels.
Ranked #26 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)
no code implementations • ICCV 2021 • Yingbin Bai, Tongliang Liu
To extract hard confident examples that contain non-simple patterns and are entangled with the inaccurately labeled examples, we borrow the idea of momentum from physics.
no code implementations • 1 Jan 2021 • Bingbing Song, wei he, Renyang Liu, Shui Yu, Ruxin Wang, Mingming Gong, Tongliang Liu, Wei Zhou
Several state-of-the-art methods start by improving the inter-class separability of training samples through modified loss functions; we argue that these methods ignore adversarial samples and thus achieve only limited robustness to adversarial attacks.
1 code implementation • CVPR 2021 • Zhaowei Zhu, Tongliang Liu, Yang Liu
We first provide evidence that heterogeneous instance-dependent label noise effectively down-weights examples with higher noise rates in a non-uniform way and thus causes imbalances, rendering the strategy of directly applying methods for class-dependent label noise questionable.
no code implementations • 10 Dec 2020 • Guoqing Bao, Huai Chen, Tongliang Liu, Guanzhong Gong, Yong Yin, Lisheng Wang, Xiuying Wang
In this paper, we present an end-to-end multitask learning (MTL) framework (COVID-MTL) that is capable of automated and simultaneous detection (against both radiology and NAT) and severity assessment of COVID-19.
no code implementations • 2 Dec 2020 • Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Jiankang Deng, Jiatong Li, Yinian Mao
The traditional transition matrix is limited to model closed-set label noise, where noisy training data has true class labels within the noisy label set.
1 code implementation • NeurIPS 2020 • Shanshan Zhao, Mingming Gong, Tongliang Liu, Huan Fu, DaCheng Tao
To arrive at this, some methods introduce a domain discriminator through adversarial learning to match the feature distributions in multiple source domains.
Ranked #18 on Domain Generalization on PACS
1 code implementation • 9 Nov 2020 • Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W. Tsang, James T. Kwok, Masashi Sugiyama
Classical machine learning implicitly assumes that labels of the training data are sampled from a clean distribution, which can be too restrictive for real-world scenarios.
2 code implementations • 22 Oct 2020 • Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama
However, it has been shown that the MMD test is unaware of adversarial attacks -- the MMD test failed to detect the discrepancy between natural and adversarial data.
no code implementations • 13 Oct 2020 • He-Liang Huang, Yuxuan Du, Ming Gong, YouWei Zhao, Yulin Wu, Chaoyue Wang, Shaowei Li, Futian Liang, Jin Lin, Yu Xu, Rui Yang, Tongliang Liu, Min-Hsiu Hsieh, Hui Deng, Hao Rong, Cheng-Zhi Peng, Chao-Yang Lu, Yu-Ao Chen, DaCheng Tao, Xiaobo Zhu, Jian-Wei Pan
For the first time, we experimentally achieve the learning and generation of real-world hand-written digit images on a superconducting quantum processor.
no code implementations • 28 Sep 2020 • Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu
It is worthwhile to perform the transformation: We prove that the noise rate for the noisy similarity labels is lower than that of the noisy class labels, because similarity labels themselves are robust to noise.
no code implementations • 23 Jul 2020 • Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, Shan You, DaCheng Tao
The main contribution of this paper is devising a quantum DP Lasso estimator that achieves a runtime speedup with privacy preservation, i.e., the runtime complexity is $\tilde{O}(N^{3/2}\sqrt{d})$ with a nearly optimal utility bound $\tilde{O}(1/N^{2/3})$, where $N$ is the sample size and $d$ is the data dimension with $N\ll d$.
no code implementations • 3 Jul 2020 • Xinpeng Ding, Nannan Wang, Xinbo Gao, Jie Li, Xiaoyu Wang, Tongliang Liu
Specifically, we devise a partial segment loss regarded as a loss sampling to learn integral action parts from labeled segments.
Weakly-Supervised Temporal Action Localization
1 code implementation • NeurIPS 2020 • Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, DaCheng Tao, Masashi Sugiyama
Learning with instance-dependent label noise is challenging, because it is hard to model such real-world noise.
1 code implementation • NeurIPS 2020 • Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Jiankang Deng, Gang Niu, Masashi Sugiyama
By this intermediate class, the original transition matrix can then be factorized into the product of two easy-to-estimate transition matrices.
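In symbols, introducing an intermediate class $\bar{Y}'$ and assuming the noisy label $\bar{Y}$ is conditionally independent of the clean label $Y$ given $\bar{Y}'$, the factorization reads (generic notation, a sketch of the idea):

```latex
P(\bar{Y}=j \mid Y=i)
  \;=\; \sum_{l} P(\bar{Y}=j \mid \bar{Y}'=l)\, P(\bar{Y}'=l \mid Y=i),
\qquad\text{i.e.,}\quad T = T^{\clubsuit}\, T^{\spadesuit},
```

with $T^{\clubsuit}_{il} = P(\bar{Y}'=l \mid Y=i)$ and $T^{\spadesuit}_{lj} = P(\bar{Y}=j \mid \bar{Y}'=l)$, each of which is easier to estimate than $T$ directly.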
no code implementations • 14 Jun 2020 • Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu
To give an affirmative answer, in this paper, we propose a framework called Class2Simi: it transforms data points with noisy class labels to data pairs with noisy similarity labels, where a similarity label denotes whether a pair shares the class label or not.
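The transformation itself is one line: a pair's similarity label records whether its two (possibly noisy) class labels agree, as in this sketch.

```python
import torch

y = torch.tensor([0, 1, 1, 2])                    # (noisy) class labels for a batch
sim = (y.unsqueeze(0) == y.unsqueeze(1)).long()   # sim[i, j] = 1 iff examples i and j share a label
# A flipped class label corrupts only those pair labels whose agreement it changes,
# which is why the similarity noise rate can be lower than the class noise rate.
```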
no code implementations • 7 Apr 2020 • Maoying Qiao, Tongliang Liu, Jun Yu, Wei Bian, DaCheng Tao
To alleviate this problem, in this paper, a repulsiveness-encouraging prior is introduced among mixing components and a diversified EPCA mixture (DEPCAM) model is developed in the Bayesian framework.
no code implementations • 20 Mar 2020 • Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, DaCheng Tao, Nana Liu
This robustness property is intimately connected with an important security concept called differential privacy which can be extended to quantum differential privacy.
no code implementations • 16 Feb 2020 • Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu
We further estimate the transition matrix from only noisy data and build a novel learning system to learn a classifier which can assign noise-free class labels for instances.
no code implementations • 10 Feb 2020 • Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, DaCheng Tao
It is worthwhile to change the problem: we prove that if the assumption holds, our method will not affect anything; if the assumption does not hold, the bias from problem changing is less than the bias from violation of the irreducible assumption in the original problem.
no code implementations • 11 Jan 2020 • Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama
We find that, with the help of confidence scores, the transition distribution of each instance can be approximately estimated.
no code implementations • 15 Dec 2019 • Zhe Chen, Wanli Ouyang, Tongliang Liu, DaCheng Tao
Alternatively, to access much more natural-looking pedestrians, we propose to augment pedestrian detection datasets by transforming real pedestrians from the same dataset into different shapes.
no code implementations • NeurIPS 2019 • Fengxiang He, Tongliang Liu, DaCheng Tao
Specifically, we prove a PAC-Bayes generalization bound for neural networks trained by SGD, which has a positive correlation with the ratio of batch size to learning rate.
1 code implementation • 28 Nov 2019 • Xu Shen, Xinmei Tian, Tongliang Liu, Fang Xu, DaCheng Tao
On the one hand, continuous dropout is considerably closer to the activation characteristics of neurons in the human brain than traditional binary dropout.
no code implementations • 20 Nov 2019 • Jingfeng Zhang, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama
Deep neural networks (DNNs) are incredibly brittle due to adversarial examples.
1 code implementation • 31 Jul 2019 • Yihang Lou, Ling-Yu Duan, Yong Luo, Ziqian Chen, Tongliang Liu, Shiqi Wang, Wen Gao
The digital retina in smart cities selects what the City Eye tells the City Brain, converting the visual data acquired by front-end visual sensors into features in an intelligent sensing manner.
no code implementations • 16 Jul 2019 • Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, DaCheng Tao
In this paper, we propose a sublinear classical algorithm to tackle general minimum conical hull problems when the input is stored in a sample-based low-overhead data structure.
no code implementations • 2 Jun 2019 • Naiyang Guan, Tongliang Liu, Yangmuzi Zhang, DaCheng Tao, Larry S. Davis
Non-negative matrix factorization (NMF) minimizes the Euclidean distance between the data matrix and its low rank approximation, and it fails when applied to corrupted data because the loss function is sensitive to outliers.
1 code implementation • NeurIPS 2019 • Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, Masashi Sugiyama
Existing theories have shown that the transition matrix can be learned by exploiting anchor points (i.e., data points that belong to a specific class almost surely).
Ranked #14 on Learning with noisy labels on CIFAR-10N-Aggregate
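The classic anchor-point estimator these theories build on can be sketched as follows; the cited paper goes further and revises the estimated matrix, which this sketch does not show.

```python
import numpy as np

def estimate_transition_matrix(noisy_posteriors):
    # noisy_posteriors: (N, C) array of P(noisy label | x) from a model fit on noisy data.
    # For each class i, the example most confidently predicted as i approximates an
    # anchor point; its posterior vector estimates row i of T, T[i, j] = P(noisy=j | clean=i).
    C = noisy_posteriors.shape[1]
    T = np.zeros((C, C))
    for i in range(C):
        anchor = noisy_posteriors[:, i].argmax()
        T[i] = noisy_posteriors[anchor]
    return T
```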
1 code implementation • 15 May 2019 • Kui Jia, Shuai Li, Yuxin Wen, Tongliang Liu, DaCheng Tao
To this end, we first prove that DNNs are of local isometry on data distributions of practical interest; by using a new covering of the sample space and introducing the local isometry property of DNNs into generalization analysis, we establish a new generalization error bound that is both scale- and range-sensitive to singular value spectrum of each of networks' weight matrices.
no code implementations • CVPR 2019 • Erkun Yang, Tongliang Liu, Cheng Deng, Wei Liu, DaCheng Tao
To address this issue, we propose a novel deep unsupervised hashing model, dubbed DistillHash, which can learn a distilled data set consisting of data pairs with confident similarity signals.
no code implementations • 13 Apr 2019 • Kede Ma, Wentao Liu, Tongliang Liu, Zhou Wang, DaCheng Tao
One of the biggest challenges in learning BIQA models is the conflict between the gigantic image space (which is in the dimension of the number of image pixels) and the extremely limited reliable ground truth data for training.
no code implementations • 8 Apr 2019 • Yong Luo, Yonggang Wen, Tongliang Liu, DaCheng Tao
Some existing heterogeneous transfer learning (HTL) approaches learn a target distance metric, usually by transforming the samples of the source and target domains into a common subspace.
1 code implementation • 8 Apr 2019 • Tao Lei, Xiaohong Jia, Tongliang Liu, Shigang Liu, Hongying Meng, Asoke K. Nandi
However, MR might mistakenly filter meaningful seeds that are required for generating accurate segmentation and it is also sensitive to the scale because a single-scale structuring element is employed.
no code implementations • 8 Apr 2019 • Yong Luo, Tongliang Liu, DaCheng Tao, Chao Xu
In particular, DTDML learns a sparse combination of the base metrics to construct the target metric by forcing the target metric to be close to an integration of the source metrics.
no code implementations • 8 Apr 2019 • Yong Luo, Tongliang Liu, DaCheng Tao, Chao Xu
Therefore, we propose to weightedly combine the MC outputs of different views, and present the multi-view matrix completion (MVMC) framework for transductive multi-label image classification.
no code implementations • 7 Apr 2019 • Jie Gui, Tongliang Liu, Zhenan Sun, DaCheng Tao, Tieniu Tan
In SDHR, the regression target is instead optimized.
no code implementations • 7 Apr 2019 • Jie Gui, Tongliang Liu, Zhenan Sun, DaCheng Tao, Tieniu Tan
Rather than adopting this method, FSDH uses a very simple yet effective regression of the class labels of training examples to the corresponding hash code to accelerate the algorithm.
no code implementations • 5 Apr 2019 • Chen Gong, Tongliang Liu, Yuanyan Tang, Jian Yang, Jie Yang, DaCheng Tao
As a result, the intrinsic constraints among different candidate labels are deployed, and the disambiguated labels generated by RegISL are more discriminative and accurate than those output by existing instance-based algorithms.
no code implementations • 3 Apr 2019 • Ya Li, Xinmei Tian, Tongliang Liu, DaCheng Tao
The objective of our proposed method is to transform the features from different tasks into a common feature space in which the tasks are closely related and the shared parameters can be better optimized.
no code implementations • 2 Apr 2019 • Fengxiang He, Tongliang Liu, DaCheng Tao
This paper studies the influence of residual connections on the hypothesis complexity of the neural network in terms of the covering number of its hypothesis space.
no code implementations • 2 Apr 2019 • Yanwu Xu, Mingming Gong, Junxiang Chen, Tongliang Liu, Kun Zhang, Kayhan Batmanghelich
The success of such approaches heavily depends on high-quality labeled instances, which are not easy to obtain, especially as the number of candidate classes increases.
1 code implementation • 21 Jan 2019 • Yanwu Xu, Mingming Gong, Tongliang Liu, Kayhan Batmanghelich, Chaohui Wang
In recent years, the learned local descriptors have outperformed handcrafted ones by a large margin, due to the powerful deep convolutional neural network architectures such as L2-Net [1] and triplet based metric learning [2].
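The triplet-based metric learning objective referenced here is typically the standard triplet margin loss, a minimal sketch of which is:

```python
import torch.nn.functional as F

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    # Pull matching patches together, push non-matching ones at least `margin` apart.
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```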
no code implementations • 8 Nov 2018 • Jingwei Zhang, Tongliang Liu, DaCheng Tao
We derive upper bounds on the generalization error of learning algorithms based on their algorithmic transport cost: the expected Wasserstein distance between the output hypothesis and the output hypothesis conditioned on an input example.
no code implementations • 29 Oct 2018 • Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, DaCheng Tao
Parameterized quantum circuits (PQCs) have been broadly used as a hybrid quantum-classical machine learning scheme to accomplish generative tasks.
no code implementations • 17 Sep 2018 • Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, DaCheng Tao
Ultimately, a stronger nonlinear classifier can be established, the so-called quantum ensemble learning (QEL), by combining a set of weak VQPs produced using a subsampling method.
no code implementations • ECCV 2018 • Ya Li, Xinmei Tian, Mingming Gong, Yajing Liu, Tongliang Liu, Kun Zhang, DaCheng Tao
Under the assumption that the conditional distribution $P(Y|X)$ remains unchanged across domains, earlier approaches to domain generalization learned the invariant representation $T(X)$ by minimizing the discrepancy of the marginal distribution $P(T(X))$.
Ranked #42 on Domain Generalization on PACS
1 code implementation • ECCV 2018 • Baosheng Yu, Tongliang Liu, Mingming Gong, Changxing Ding, DaCheng Tao
Considering that the number of triplets grows cubically with the size of training data, triplet mining is thus indispensable for efficiently training with triplet loss.
no code implementations • 7 Aug 2018 • Fengxiang He, Tongliang Liu, Geoffrey I. Webb, DaCheng Tao
Specifically, by treating the unlabelled data as noisy negative examples, we can automatically label a group of positive and negative examples whose labels are identical to the ones assigned by a Bayesian optimal classifier, with a consistency guarantee.
1 code implementation • 23 Jul 2018 • Ya Li, Mingming Gong, Xinmei Tian, Tongliang Liu, DaCheng Tao
With the conditional invariant representation, the invariance of the joint distribution $\mathbb{P}(h(X), Y)$ can be guaranteed if the class prior $\mathbb{P}(Y)$ does not change across training and test domains.
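The guarantee follows in one line, given the learned conditional invariance $\mathbb{P}^{tr}(h(X) \mid Y) = \mathbb{P}^{te}(h(X) \mid Y)$ and the unchanged class prior:

```latex
\mathbb{P}^{tr}(h(X), Y)
  = \mathbb{P}^{tr}(h(X) \mid Y)\,\mathbb{P}^{tr}(Y)
  = \mathbb{P}^{te}(h(X) \mid Y)\,\mathbb{P}^{te}(Y)
  = \mathbb{P}^{te}(h(X), Y).
```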
no code implementations • CVPR 2018 • Xiyu Yu, Tongliang Liu, Mingming Gong, Kayhan Batmanghelich, DaCheng Tao
In this paper, we study the mixture proportion estimation (MPE) problem in a new setting: given samples from the mixture and the component distributions, we identify the proportions of the components in the mixture distribution.
no code implementations • 27 May 2018 • Yuxuan Du, Tongliang Liu, DaCheng Tao
Parameterized quantum circuits (PQCs), as one of the most promising schemes to realize quantum machine learning algorithms on near-term quantum computers, have been designed to solve machine learning tasks with quantum advantages.
Quantum Physics
1 code implementation • IJCAI 2018 • Erkun Yang, Cheng Deng, Tongliang Liu, Wei Liu, DaCheng Tao
Hashing is becoming increasingly popular for approximate nearest neighbor searching in massive databases due to its storage and search efficiency.
no code implementations • 24 Apr 2018 • Jingwei Zhang, Tongliang Liu, DaCheng Tao
This upper bound shows that as the number of convolutional and pooling layers $L$ increases in the network, the expected generalization error will decrease exponentially to zero.
no code implementations • 11 Feb 2018 • Jingwei Zhang, Tongliang Liu, DaCheng Tao
We study the rates of convergence from empirical surrogate risk minimizers to the Bayes optimal classifier.
1 code implementation • ECCV 2018 • Xiyu Yu, Tongliang Liu, Mingming Gong, DaCheng Tao
We therefore reason that the transition probabilities will be different.
no code implementations • ICML 2020 • Jiacheng Cheng, Tongliang Liu, Kotagiri Ramamohanarao, DaCheng Tao
Inspired by the idea of learning with distilled examples, we then propose a learning algorithm with theoretical guarantees for its robustness to BILN.
no code implementations • 31 Jul 2017 • Xiyu Yu, Tongliang Liu, Mingming Gong, Kun Zhang, Kayhan Batmanghelich, DaCheng Tao
However, when learning this invariant knowledge, existing methods assume that the labels in source domain are uncontaminated, while in reality, we often have access to source data with noisy labels.
no code implementations • CVPR 2017 • Xiyu Yu, Tongliang Liu, Xinchao Wang, DaCheng Tao
Deep compression refers to removing the redundancy of parameters and feature maps for deep learning models.
no code implementations • ICML 2017 • Tongliang Liu, Gábor Lugosi, Gergely Neu, DaCheng Tao
The bounds are based on martingale inequalities in the Banach space to which the hypotheses belong.
no code implementations • 5 Dec 2016 • Kede Ma, Huan Fu, Tongliang Liu, Zhou Wang, DaCheng Tao
The human visual system excels at detecting local blur of visual images, but the underlying mechanism is not well understood.
no code implementations • 3 Mar 2016 • Qingshan Liu, Yubao Sun, Cantian Wang, Tongliang Liu, DaCheng Tao
In the second step, a hypergraph is used to represent the high-order relationships between each datum and its prominent samples by regarding them as a hyperedge.
no code implementations • 3 Jan 2016 • Tongliang Liu, DaCheng Tao, Dong Xu
Can we obtain dimensionality-dependent generalization bounds for $k$-dimensional coding schemes that are tighter than dimensionality-independent bounds when data is in a finite-dimensional feature space?
no code implementations • 27 Nov 2014 • Tongliang Liu, DaCheng Tao
In this scenario, there is an unobservable sample with noise-free labels.
no code implementations • 26 Oct 2014 • Chang Xu, Tongliang Liu, DaCheng Tao, Chao Xu
We analyze the local Rademacher complexity of empirical risk minimization (ERM)-based multi-label learning algorithms, and in doing so propose a new algorithm for multi-label learning.