no code implementations • ICML 2020 • Jiaxian Guo, Mingming Gong, Tongliang Liu, Kun Zhang, DaCheng Tao
Distribution shift is a major obstacle to the deployment of current deep learning models on real-world problems.
no code implementations • ICML 2020 • Yonggang Zhang, Ya Li, Tongliang Liu, Xinmei Tian
To obtain sufficient knowledge for crafting adversarial examples, previous methods query the target model with inputs that are perturbed with different searching directions.
no code implementations • ICML 2020 • Xiyu Yu, Tongliang Liu, Mingming Gong, Kun Zhang, Kayhan Batmanghelich, DaCheng Tao
Domain adaptation aims to correct the classifiers when faced with distribution shift between source (training) and target (test) domains.
2 code implementations • ECCV 2020 • Jiankang Deng, Jia Guo, Tongliang Liu, Mingming Gong, Stefanos Zafeiriou
In this paper, we relax the intra-class constraint of ArcFace to improve the robustness to label noise.
1 code implementation • 5 Dec 2023 • Zhuo Huang, Chang Liu, Yinpeng Dong, Hang Su, Shibao Zheng, Tongliang Liu
Concretely, by estimating a transition matrix that captures the probability of one class being confused with another, an instruction containing a correct exemplar and an erroneous one from the most probable noisy class can be constructed.
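A minimal sketch of how such a transition matrix could drive exemplar selection; the function and variable names below are illustrative, not the paper's API:

```python
import numpy as np

def most_confusable_class(T, c):
    """Given a row-stochastic transition matrix T, where T[i, j] is the
    probability of true class i being observed as class j, return the most
    probable *incorrect* label for class c."""
    row = T[c].astype(float).copy()
    row[c] = -np.inf                      # exclude the correct label itself
    return int(np.argmax(row))

def build_instruction(c, exemplars, class_names, T):
    """Pair a correct exemplar with an erroneous one drawn from the most
    probable noisy class, as the abstract describes."""
    noisy = most_confusable_class(T, c)
    return (f"Correct example ({class_names[c]}): {exemplars[c]}\n"
            f"Erroneous example (commonly confused with {class_names[noisy]}): "
            f"{exemplars[noisy]}")
```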
no code implementations • 15 Nov 2023 • Xiaobo Xia, Jiale Liu, Shaokun Zhang, Qingyun Wu, Tongliang Liu
Motivated by this desideratum, for the first time, we pose the problem of "coreset selection with prioritized multiple objectives", in which the smallest coreset size under model performance constraints is explored.
1 code implementation • 6 Nov 2023 • Xuan Li, Zhanke Zhou, Jianing Zhu, Jiangchao Yao, Tongliang Liu, Bo Han
Despite remarkable success in various applications, large language models (LLMs) are vulnerable to adversarial jailbreaks that make the safety guardrails void.
1 code implementation • NeurIPS 2023 • Haotian Zheng, Qizhou Wang, Zhen Fang, Xiaobo Xia, Feng Liu, Tongliang Liu, Bo Han
To this end, we suggest that generated data (with mistaken OOD generation) can be used to devise an auxiliary OOD detection task to facilitate real OOD detection.
Out-of-Distribution Detection
no code implementations • 25 Oct 2023 • Zhuo Huang, Muyang Li, Li Shen, Jun Yu, Chen Gong, Bo Han, Tongliang Liu
By fully exploring both variant and invariant parameters, our EVIL can effectively identify a robust subnetwork to improve OOD generalization.
1 code implementation • NeurIPS 2023 • Zhuo Huang, Li Shen, Jun Yu, Bo Han, Tongliang Liu
Therefore, the label guidance on labeled data is hard to propagate to unlabeled data.
no code implementations • 16 Oct 2023 • Shaokun Zhang, Xiaobo Xia, Zhaoqing Wang, Ling-Hao Chen, Jiale Liu, Qingyun Wu, Tongliang Liu
However, since the prompts need to be sampled from a large volume of annotated examples, finding the right prompt may result in high annotation costs.
no code implementations • 13 Oct 2023 • Runqi Lin, Chaojian Yu, Bo Han, Tongliang Liu
In this work, we adopt a unified perspective by solely focusing on natural patterns to explore different types of overfitting.
1 code implementation • NeurIPS 2023 • Zhiqin Yang, Yonggang Zhang, Yu Zheng, Xinmei Tian, Hao Peng, Tongliang Liu, Bo Han
Comprehensive experiments demonstrate the efficacy of FedFed in promoting model performance.
no code implementations • 1 Oct 2023 • Chaojian Yu, Xiaolong Shi, Jun Yu, Bo Han, Tongliang Liu
Adversarial Training (AT) is a widely-used algorithm for building robust neural networks, but it suffers from the issue of robust overfitting, the fundamental mechanism of which remains unclear.
no code implementations • 29 Sep 2023 • Runnan Chen, Xinge Zhu, Nenglun Chen, Dawei Wang, Wei Li, Yuexin Ma, Ruigang Yang, Tongliang Liu, Wenping Wang
In this paper, we propose Model2Scene, a novel paradigm that learns free 3D scene representation from Computer-Aided Design (CAD) models and languages.
1 code implementation • 22 Sep 2023 • Shikun Li, Xiaobo Xia, Hansong Zhang, Shiming Ge, Tongliang Liu
However, estimating multi-label noise transition matrices remains a challenging task, as most existing estimators in noisy multi-class learning rely on anchor points and accurate fitting of noisy class posteriors, which is hard to satisfy in noisy multi-label learning.
no code implementations • 14 Sep 2023 • Liangchen Liu, Nannan Wang, Dawei Zhou, Xinbo Gao, Decheng Liu, Xi Yang, Tongliang Liu
This paper targets a novel trade-off problem in generalizable prompt learning for vision-language models (VLMs), i.e., improving the performance on unseen classes while maintaining the performance on seen classes.
1 code implementation • 2 Sep 2023 • Xiaobo Xia, Pengqian Lu, Chen Gong, Bo Han, Jun Yu, Tongliang Liu
However, such a procedure is arguably debatable for two reasons: (a) it does not consider the bad influence of noisy labels in selected small-loss examples; (b) it does not make good use of the discarded large-loss examples, which may be clean or carry meaningful information for generalization.
no code implementations • 31 Aug 2023 • Enneng Yang, Zhenyi Wang, Li Shen, Nan Yin, Tongliang Liu, Guibing Guo, Xingwei Wang, DaCheng Tao
Next, we train the CL model by minimizing the gap between the responses of the CL model and the black-box API on synthetic data, to transfer the API's knowledge to the CL model.
1 code implementation • ICCV 2023 • Chengxin Liu, Hao Lu, Zhiguo Cao, Tongliang Liu
Such a querying process yields an intuitive, universal modeling of crowd as both the input and output are interpretable and steerable.
no code implementations • ICCV 2023 • Suqin Yuan, Lei Feng, Tongliang Liu
Sample selection is a prevalent method in learning with noisy labels, where small-loss data are typically considered as correctly labeled data.
no code implementations • 22 Aug 2023 • Yuxuan Du, Yibo Yang, Tongliang Liu, Zhouchen Lin, Bernard Ghanem, DaCheng Tao
Understanding the dynamics of large quantum systems is hindered by the curse of dimensionality.
1 code implementation • ICCV 2023 • Kaicheng Yang, Jiankang Deng, Xiang An, Jiawei Li, Ziyong Feng, Jia Guo, Jing Yang, Tongliang Liu
However, the presence of intrinsic noise and unmatched image-text pairs in web data can potentially affect the performance of representation learning.
no code implementations • 14 Aug 2023 • Hui Kang, Sheng Liu, Huaxi Huang, Tongliang Liu
In real-world datasets, noisy labels are pervasive.
1 code implementation • 26 Jul 2023 • Wenjie Xuan, Shanshan Zhao, Yu Yao, Juhua Liu, Tongliang Liu, Yixin Chen, Bo Du, DaCheng Tao
Exploiting the estimated noise transitions, our model, named PNT-Edge, is able to fit the prediction to clean labels.
no code implementations • 12 Jul 2023 • Ruijiang Dong, Feng Liu, Haoang Chi, Tongliang Liu, Mingming Gong, Gang Niu, Masashi Sugiyama, Bo Han
In this paper, we propose a diversity-enhancing generative network (DEG-Net) for the FHA problem, which can generate diverse unlabeled data with the help of a kernel independence measure: the Hilbert-Schmidt independence criterion (HSIC).
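For reference, the kernel independence measure named above (HSIC) can be estimated from finite samples with the standard biased estimator; a minimal NumPy sketch, where the RBF kernels and bandwidth are assumptions:

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """RBF Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC: trace(K H L H) / (n - 1)^2, with H the
    centering matrix. Values near zero indicate approximate independence
    under the chosen kernels."""
    n = X.shape[0]
    K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```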
no code implementations • 11 Jul 2023 • Hui Kang, Sheng Liu, Huaxi Huang, Jun Yu, Bo Han, Dadong Wang, Tongliang Liu
In recent years, research on learning with noisy labels has focused on devising novel algorithms that can achieve robustness to noisy training labels while generalizing to clean data.
no code implementations • 3 Jul 2023 • Vinoth Nandakumar, Arush Tagade, Tongliang Liu
Over the past decade deep learning has revolutionized the field of computer vision, with convolutional neural network models proving to be very effective for image classification benchmarks.
1 code implementation • 30 Jun 2023 • Peng Mi, Li Shen, Tianhe Ren, Yiyi Zhou, Tianshuo Xu, Xiaoshuai Sun, Tongliang Liu, Rongrong Ji, DaCheng Tao
Sharpness-Aware Minimization (SAM) is a popular solution that smooths the loss landscape by minimizing the maximized change of training loss when adding a perturbation to the weight.
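A minimal sketch of one SAM update as described above, applied to a toy objective; the learning rate, radius rho, and toy loss are assumptions, not the paper's setup:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization step: move the weights to the
    approximate worst case within an L2 ball of radius rho, then descend
    using the gradient evaluated at that perturbed point."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascent to the sharp point
    return w - lr * grad_fn(w + eps)              # descend from there

# Toy usage on f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, lambda v: 2.0 * v)
```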
1 code implementation • 22 Jun 2023 • Chuang Liu, Yibing Zhan, Baosheng Yu, Liu Liu, Bo Du, Wenbin Hu, Tongliang Liu
A pooling operation is essential for effective graph-level representation learning, where node drop pooling has become a mainstream graph pooling technique.
no code implementations • 20 Jun 2023 • Zixi Wei, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Xiaofeng Zhu, Heng Tao Shen
This motivates the study on classification from aggregate observations (CFAO), where the supervision is provided to groups of instances, instead of individual instances.
no code implementations • 12 Jun 2023 • Shiming Chen, Wenjin Hou, Ziming Hong, Xiaohan Ding, Yibing Song, Xinge You, Tongliang Liu, Kun Zhang
After alignment, synthesized sample features from unseen classes are closer to the real sample features and enable DSP to improve existing generative ZSL methods by 8.5%, 8.0%, and 9.7% on the standard CUB, SUN, and AWA2 datasets. This significant performance improvement indicates that the evolving semantic prototype explores a virgin field in ZSL.
no code implementations • 12 Jun 2023 • Yuhao Wu, Xiaobo Xia, Jun Yu, Bo Han, Gang Niu, Masashi Sugiyama, Tongliang Liu
Training a classifier with a huge amount of supervised data is expensive or even prohibitive in situations where the labeling cost is high.
no code implementations • 9 Jun 2023 • Shaoan Xie, Biwei Huang, Bin Gu, Tongliang Liu, Kun Zhang
The capacity to address counterfactual "what if" inquiries is crucial for understanding and making use of causal influences.
1 code implementation • 6 Jun 2023 • Jianing Zhu, Xiawei Guo, Jiangchao Yao, Chao Du, Li He, Shuo Yuan, Tongliang Liu, Liang Wang, Bo Han
In this paper, we dive into the perspective of model dynamics and propose a novel information measure, namely, Memorization Discrepancy, to explore the defense via the model-level information.
1 code implementation • 6 Jun 2023 • Jianing Zhu, Hengzhuang Li, Jiangchao Yao, Tongliang Liu, Jianliang Xu, Bo Han
Based on such insights, we propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
1 code implementation • NeurIPS 2023 • Runnan Chen, Youquan Liu, Lingdong Kong, Nenglun Chen, Xinge Zhu, Yuexin Ma, Tongliang Liu, Wenping Wang
For the nuImages and nuScenes datasets, the performance is 22.1% and 26.8%, with improvements of 3.5% and 6.0%, respectively.
1 code implementation • 5 Jun 2023 • Shikun Li, Xiaobo Xia, Jiankang Deng, Shiming Ge, Tongliang Liu
In real-world crowd-sourcing scenarios, noise transition matrices are both annotator- and instance-dependent.
2 code implementations • 31 May 2023 • Maoyuan Ye, Jing Zhang, Shanshan Zhao, Juhua Liu, Tongliang Liu, Bo Du, DaCheng Tao
On the other hand, based on the extensibility of DeepSolo, we launch DeepSolo++ for multilingual text spotting, making a further step to let Transformer decoder with explicit points solo for multilingual text detection, recognition, and script identification all at once.
Ranked #1 on Text Spotting on Inverse-Text
1 code implementation • 28 May 2023 • Jingfeng Zhang, Bo Song, Haohan Wang, Bo Han, Tongliang Liu, Lei Liu, Masashi Sugiyama
To address the challenge posed by BadLabel, we further propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels again distinguishable.
no code implementations • 18 May 2023 • Bochao Liu, Shiming Ge, Pengju Wang, Liansheng Zhuang, Tongliang Liu
In particular, we first train a model to fit the distribution of the training data and make it satisfy differential privacy by performing a randomized response mechanism during training process.
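For concreteness, the classic k-ary randomized response mechanism referenced above can be sketched as follows; applying it to training labels is one possible instantiation, and the paper's exact construction may differ:

```python
import numpy as np

def randomized_response(label, num_classes, epsilon, rng=np.random):
    """k-ary randomized response: keep the true label with probability
    e^eps / (e^eps + k - 1), otherwise report one of the other labels
    uniformly at random. The output satisfies eps-differential privacy."""
    k = num_classes
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + k - 1)
    if rng.random() < p_keep:
        return label
    others = [c for c in range(k) if c != label]
    return int(rng.choice(others))
```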
1 code implementation • ICML 2023 • Xue Jiang, Feng Liu, Zhen Fang, Hong Chen, Tongliang Liu, Feng Zheng, Bo Han
In this paper, we show that this assumption renders the above methods ineffective when the ID model is trained with class-imbalanced data. Fortunately, by analyzing the causal relations between ID/OOD classes and features, we identify several common scenarios where the OOD-to-ID probabilities should follow the ID-class-prior distribution, and we propose two strategies to modify existing inference-time detection methods: 1) replace the uniform distribution with the ID-class-prior distribution if they explicitly use the uniform distribution; 2) otherwise, reweight their scores according to the similarity between the ID-class-prior distribution and the softmax outputs of the pre-trained model.
Out-of-Distribution Detection
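A minimal sketch of strategy (2) above, reweighting a maximum-softmax-probability score by its similarity to the ID class prior; the cosine-similarity choice here is an assumption for illustration, not necessarily the paper's exact rule:

```python
import numpy as np

def prior_reweighted_msp(softmax_probs, class_prior):
    """Reweight a maximum-softmax-probability OOD score by the cosine
    similarity between each softmax output and the ID class prior.
    softmax_probs: (n, k); class_prior: (k,). Higher = more likely ID."""
    sim = softmax_probs @ class_prior
    sim /= (np.linalg.norm(softmax_probs, axis=1)
            * np.linalg.norm(class_prior) + 1e-12)
    return softmax_probs.max(axis=1) * sim
```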
no code implementations • ICCV 2023 • Dongting Hu, Zhenkai Zhang, Tingbo Hou, Tongliang Liu, Huan Fu, Mingming Gong
Our approach includes a density Mip-VoG for scene geometry and a feature Mip-VoG with a small MLP for view-dependent color.
2 code implementations • 12 Apr 2023 • Xiang An, Jiankang Deng, Kaicheng Yang, Jiawei Li, Ziyong Feng, Jia Guo, Jing Yang, Tongliang Liu
To further enhance the low-dimensional feature representation, we randomly select partial feature dimensions when calculating the similarities between embeddings and class-wise prototypes.
Ranked #1 on Image Retrieval on SOP (using extra training data)
2 code implementations • CVPR 2023 • Zhuo Huang, Miaoxi Zhu, Xiaobo Xia, Li Shen, Jun Yu, Chen Gong, Bo Han, Bo Du, Tongliang Liu
Experimentally, we simulate photon-limited corruptions using CIFAR10/100 and ImageNet30 datasets and show that SharpDRO exhibits a strong generalization ability against severe corruptions and exceeds well-known baseline methods with large performance gains.
1 code implementation • CVPR 2023 • Shuo Yang, Zhaopan Xu, Kai Wang, Yang You, Hongxun Yao, Tongliang Liu, Min Xu
As one of the most fundamental techniques in multimodal learning, cross-modal matching aims to project various sensory modalities into a shared feature space.
no code implementations • 22 Mar 2023 • Jiaheng Wei, Zhaowei Zhu, Gang Niu, Tongliang Liu, Sijia Liu, Masashi Sugiyama, Yang Liu
Both long-tailed and noisily labeled data frequently appear in real-world applications and impose significant challenges for learning.
1 code implementation • 21 Mar 2023 • Xiu-Chuan Li, Xiaobo Xia, Fei Zhu, Tongliang Liu, Xu-Yao Zhang, Cheng-Lin Liu
Label noise poses a serious threat to deep neural networks (DNNs).
1 code implementation • CVPR 2023 • Zixuan Hu, Li Shen, Zhenyi Wang, Tongliang Liu, Chun Yuan, DaCheng Tao
The goal of data-free meta-learning is to learn useful prior knowledge from a collection of pre-trained models without accessing their training data.
1 code implementation • 9 Mar 2023 • Qizhou Wang, Junjie Ye, Feng Liu, Quanyu Dai, Marcus Kalander, Tongliang Liu, Jianye Hao, Bo Han
It leads to a min-max learning scheme -- searching to synthesize OOD data that leads to worst judgments and learning from such OOD data for uniform performance in OOD detection.
Ranked #11 on Out-of-Distribution Detection on ImageNet-1k vs Textures
no code implementations • 4 Mar 2023 • Jiren Mai, Fei Zhang, Junjie Ye, Marcus Kalander, Xian Zhang, Wankou Yang, Tongliang Liu, Bo Han
Motivated by this simple but effective learning pattern, we propose a General-Specific Learning Mechanism (GSLM) to explicitly drive a coarse-grained CAM to a fine-grained pseudo mask.
1 code implementation • 1 Mar 2023 • Jianing Zhu, Jiangchao Yao, Tongliang Liu, Quanming Yao, Jianliang Xu, Bo Han
Privacy and security concerns in real-world applications have led to the development of adversarially robust federated models.
1 code implementation • ICCV 2023 • Ling-Hao Chen, Jiawei Zhang, Yewen Li, Yiren Pang, Xiaobo Xia, Tongliang Liu
In the training stage, we learn a motion diffusion model that generates motions from random noise.
no code implementations • ICCV 2023 • Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu
As selected data have high discrepancies in probabilities, the divergence of two networks can be maintained by training on such data.
no code implementations • ICCV 2023 • Xiaobo Xia, Jiankang Deng, Wei Bao, Yuxuan Du, Bo Han, Shiguang Shan, Tongliang Liu
The issues are that we do not understand why label dependence is helpful in this problem, and how to learn and utilize label dependence using only training data with noisy multiple labels.
no code implementations • ICCV 2023 • Huaxi Huang, Hui Kang, Sheng Liu, Olivier Salvado, Thierry Rakotoarivelo, Dadong Wang, Tongliang Liu
The early stopping strategy averts updating CNNs during the early training phase and is widely employed in the presence of noisy labels.
2 code implementations • CVPR 2023 • Maoyuan Ye, Jing Zhang, Shanshan Zhao, Juhua Liu, Tongliang Liu, Bo Du, DaCheng Tao
In this paper, we present DeepSolo, a simple DETR-like baseline that lets a single Decoder with Explicit Points Solo for text detection and recognition simultaneously.
Ranked #1 on Text Spotting on Total-Text (using extra training data)
1 code implementation • 1 Nov 2022 • Jianan Zhou, Jianing Zhu, Jingfeng Zhang, Tongliang Liu, Gang Niu, Bo Han, Masashi Sugiyama
Adversarial training (AT) with imperfect supervision is significant but receives limited attention.
1 code implementation • NeurIPS 2022 • De Cheng, Yixiong Ning, Nannan Wang, Xinbo Gao, Heng Yang, Yuxuan Du, Bo Han, Tongliang Liu
We show that the cycle-consistency regularization helps to minimize the volume of the transition matrix T indirectly without exploiting the estimated noisy class posterior, which could further encourage the estimated transition matrix T to converge to its optimal solution.
1 code implementation • 27 Oct 2022 • Qizhou Wang, Feng Liu, Yonggang Zhang, Jing Zhang, Chen Gong, Tongliang Liu, Bo Han
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models.
Ranked #19 on Out-of-Distribution Detection on ImageNet-1k vs Places
no code implementations • 12 Oct 2022 • Yuanyuan Wang, Wei Huang, Mingming Gong, Xi Geng, Tongliang Liu, Kun Zhang, DaCheng Tao
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
no code implementations • 4 Oct 2022 • Chaojian Yu, Dawei Zhou, Li Shen, Jun Yu, Bo Han, Mingming Gong, Nannan Wang, Tongliang Liu
Firstly, applying a pre-specified perturbation budget to networks of various model capacities yields divergent degrees of robustness disparity between natural and robust accuracies, which deviates from the desideratum of a robust network.
1 code implementation • 29 Sep 2022 • Chenghao Sun, Yonggang Zhang, Wan Chaoqun, Qizhou Wang, Ya Li, Tongliang Liu, Bo Han, Xinmei Tian
As it is hard to mitigate the approximation error with few available samples, we propose Error TransFormer (ETF) for lightweight attacks.
no code implementations • 30 Aug 2022 • Xinbiao Wang, Junyu Liu, Tongliang Liu, Yong Luo, Yuxuan Du, DaCheng Tao
To fill this knowledge gap, here we propose the effective quantum neural tangent kernel (EQNTK) and connect this concept with over-parameterization theory to quantify the convergence of QNNs towards the global optima.
1 code implementation • 25 Jul 2022 • Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Xiaoyu Wang, Yibing Zhan, Tongliang Liu
To alleviate this negative effect, in this paper, we investigate the dependence between outputs of the target model and input adversarial samples from the perspective of information theory, and propose an adversarial defense method.
1 code implementation • 7 Jul 2022 • Zhuo Huang, Xiaobo Xia, Li Shen, Bo Han, Mingming Gong, Chen Gong, Tongliang Liu
Machine learning models are vulnerable to Out-Of-Distribution (OOD) examples, and such a problem has drawn much attention.
1 code implementation • 17 Jun 2022 • Chaojian Yu, Bo Han, Li Shen, Jun Yu, Chen Gong, Mingming Gong, Tongliang Liu
Here, we explore the causes of robust overfitting by comparing the data distributions of non-overfit (weak adversary) and overfitted (strong adversary) adversarial training, and observe that the distribution of the adversarial data generated by a weak adversary mainly contains small-loss data.
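The small-loss criterion this line of work relies on is commonly implemented by keeping the lowest-loss fraction of examples; a minimal sketch, where the keep ratio is an assumption:

```python
import numpy as np

def select_small_loss(losses, noise_rate):
    """Keep the (1 - noise_rate) fraction of examples with the smallest
    per-sample loss, treating them as likely clean; returns their indices."""
    n_keep = int(len(losses) * (1.0 - noise_rate))
    return np.argsort(losses)[:n_keep]
```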
no code implementations • 16 Jun 2022 • Lianyang Ma, Yu Yao, Tao Liang, Tongliang Liu
On the whole, the "multi-scale" mechanism can exploit the different levels of semantic information of each modality, which are used for fine-grained cross-modal interactions.
2 code implementations • 11 Jun 2022 • Xiong Peng, Feng Liu, Jingfen Zhang, Long Lan, Junjie Ye, Tongliang Liu, Bo Han
To defend against MI attacks, previous work utilizes a unilateral dependency optimization strategy, i.e., minimizing the dependency between inputs (i.e., features) and outputs (i.e., labels) when training the classifier.
no code implementations • 7 Jun 2022 • Jinkai Tian, Xiaoyu Sun, Yuxuan Du, Shanshan Zhao, Qing Liu, Kaining Zhang, Wei Yi, Wanrong Huang, Chaoyue Wang, Xingyao Wu, Min-Hsiu Hsieh, Tongliang Liu, Wenjing Yang, DaCheng Tao
Due to the intrinsic probabilistic nature of quantum mechanics, it is reasonable to postulate that quantum generative learning models (QGLMs) may surpass their classical counterparts.
no code implementations • CVPR 2022 • De Cheng, Tongliang Liu, Yixiong Ning, Nannan Wang, Bo Han, Gang Niu, Xinbo Gao, Masashi Sugiyama
In label-noise learning, estimating the transition matrix has attracted more and more attention as the matrix plays an important role in building statistically consistent classifiers.
no code implementations • 4 Jun 2022 • Yingbin Bai, Erkun Yang, Zhaoqing Wang, Yuxuan Du, Bo Han, Cheng Deng, Dadong Wang, Tongliang Liu
As training goes on, the model begins to overfit noisy pairs.
1 code implementation • 30 May 2022 • Chaojian Yu, Bo Han, Mingming Gong, Li Shen, Shiming Ge, Bo Du, Tongliang Liu
Based on these observations, we propose a robust perturbation strategy to constrain the extent of weight perturbation.
1 code implementation • 27 May 2022 • Erdun Gao, Ignavier Ng, Mingming Gong, Li Shen, Wei Huang, Tongliang Liu, Kun Zhang, Howard Bondell
In this paper, we develop a general method, which we call MissDAG, to perform causal discovery from data with incomplete observations.
no code implementations • 27 May 2022 • Aoqi Zuo, Susan Wei, Tongliang Liu, Bo Han, Kun Zhang, Mingming Gong
Interestingly, we find that counterfactual fairness can be achieved as if the true causal graph were fully known, when specific background knowledge is provided: the sensitive attributes do not have ancestors in the causal graph.
no code implementations • 18 May 2022 • Xiaobo Xia, Wenhao Yang, Jie Ren, Yewen Li, Yibing Zhan, Bo Han, Tongliang Liu
Second, the constraints for diversity are designed to be task-agnostic, which causes the constraints to not work well.
1 code implementation • 15 Apr 2022 • Chuang Liu, Yibing Zhan, Jia Wu, Chang Li, Bo Du, Wenbin Hu, Tongliang Liu, DaCheng Tao
Graph neural networks have emerged as a leading architecture for many graph-level tasks, such as graph classification and graph generation.
1 code implementation • CVPR 2022 • Xiaoqing Guo, Jie Liu, Tongliang Liu, Yixuan Yuan
By exploiting computational geometry analysis and properties of segmentation, we design three complementary regularizers, i.e., volume regularization, anchor guidance, and convex guarantee, to approximate the true SimT.
2 code implementations • 28 Mar 2022 • Xiang An, Jiankang Deng, Jia Guo, Ziyong Feng, Xuhan Zhu, Jing Yang, Tongliang Liu
In each iteration, positive class centers and a random subset of negative class centers are selected to compute the margin-based softmax loss.
Ranked #1 on Face Recognition on MFR
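A minimal NumPy sketch of the sampled margin-based softmax loss described in the entry above; the margin, scale, and sampling ratio values are illustrative assumptions:

```python
import numpy as np

def sampled_margin_softmax_loss(feat, label, centers, margin=0.5, scale=64.0,
                                neg_ratio=0.1, rng=np.random):
    """Margin-based softmax over the positive class center plus a random
    subset of negative class centers. `feat` (d,) and the rows of `centers`
    (num_classes, d) are assumed L2-normalized."""
    num_classes = centers.shape[0]
    negatives = np.array([c for c in range(num_classes) if c != label])
    k = max(1, int(neg_ratio * len(negatives)))
    chosen = rng.choice(negatives, size=k, replace=False)
    cos_pos = np.clip(centers[label] @ feat, -1.0, 1.0)
    pos_logit = scale * np.cos(np.arccos(cos_pos) + margin)  # ArcFace-style margin
    neg_logits = scale * (centers[chosen] @ feat)
    logits = np.concatenate([[pos_logit], neg_logits])
    logits -= logits.max()                                   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                                 # cross-entropy on the positive
```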
1 code implementation • 8 Mar 2022 • Shikun Li, Tongliang Liu, Jiyong Tan, Dan Zeng, Shiming Ge
This raises the following important question: how can we effectively use a small amount of trusted data to facilitate robust classifier learning from multiple annotators?
1 code implementation • CVPR 2022 • Shikun Li, Xiaobo Xia, Shiming Ge, Tongliang Liu
In the selection process, by measuring the agreement between learned representations and given labels, we first identify confident examples that are exploited to build confident pairs.
Ranked #11 on Image Classification on mini WebVision 1.0
1 code implementation • ICLR 2022 • Yongqiang Chen, Han Yang, Yonggang Zhang, Kaili Ma, Tongliang Liu, Bo Han, James Cheng
Recently, Graph Injection Attack (GIA) has emerged as a practical attack scenario on Graph Neural Networks (GNNs), where the adversary can merely inject a few malicious nodes instead of modifying existing nodes or edges, i.e., Graph Modification Attack (GMA).
3 code implementations • 11 Feb 2022 • Yongqiang Chen, Yonggang Zhang, Yatao Bian, Han Yang, Kaili Ma, Binghui Xie, Tongliang Liu, Bo Han, James Cheng
Despite recent success in using the invariance principle for out-of-distribution (OOD) generalization on Euclidean data (e.g., images), studies on graph data are still limited.
no code implementations • 30 Jan 2022 • Yexiong Lin, Yu Yao, Yuxuan Du, Jun Yu, Bo Han, Mingming Gong, Tongliang Liu
Algorithms which minimize the averaged loss have been widely designed for dealing with noisy labels.
no code implementations • CVPR 2022 • Erkun Yang, Dongren Yao, Tongliang Liu, Cheng Deng
More specifically, we propose a proxy-based contrastive (PC) loss to mitigate the gap between different modalities and train networks for different modalities jointly with small-loss samples that are selected with the PC loss and a mutual quantization loss.
1 code implementation • CVPR 2022 • Xiang An, Jiankang Deng, Jia Guo, Ziyong Feng, Xuhan Zhu, Jing Yang, Tongliang Liu
In each iteration, positive class centers and a random subset of negative class centers are selected to compute the margin-based softmax loss.
1 code implementation • 7 Dec 2021 • Erdun Gao, Junjia Chen, Li Shen, Tongliang Liu, Mingming Gong, Howard Bondell
To date, most directed acyclic graphs (DAGs) structure learning approaches require data to be stored in a central server.
no code implementations • 2 Dec 2021 • Joshua Yee Kim, Tongliang Liu, Kalina Yacef
Conversational analysis systems are trained using noisy human labels and often require heavy preprocessing during multi-modal feature extraction.
1 code implementation • NeurIPS 2021 • Jiahua Dong, Zhen Fang, Anjin Liu, Gan Sun, Tongliang Liu
To address these challenges, we develop a novel Confident-Anchor-induced multi-source-free Domain Adaptation (CAiDA) model, which is a pioneer exploration of knowledge adaptation from multiple source domains to the unlabeled target domain without any source data, but with only pre-trained source models.
1 code implementation • CVPR 2022 • Zhaoqing Wang, Yu Lu, Qiang Li, Xunqiang Tao, Yandong Guo, Mingming Gong, Tongliang Liu
In addition, we present text-to-pixel contrastive learning to explicitly enforce that the text features are similar to the related pixel-level features and dissimilar to irrelevant ones.
Contrastive Learning
Generalized Referring Expression Segmentation
no code implementations • 19 Nov 2021 • Xin Jin, Tianyu He, Xu Shen, Tongliang Liu, Xinchao Wang, Jianqiang Huang, Zhibo Chen, Xian-Sheng Hua
Unsupervised Person Re-identification (U-ReID) with pseudo labeling recently reaches a competitive performance compared to fully-supervised ReID methods based on modern clustering algorithms.
2 code implementations • ICLR 2022 • Jiaheng Wei, Zhaowei Zhu, Hao Cheng, Tongliang Liu, Gang Niu, Yang Liu
These observations require us to rethink the treatment of noisy labels, and we hope the availability of these two datasets would facilitate the development and evaluation of future learning with noisy label solutions.
no code implementations • 22 Oct 2021 • Wanchuang Zhu, Benjamin Zi Hao Zhao, Simon Luo, Tongliang Liu, Ke Deng
Although we know that benign gradients and Byzantine-attacked gradients are distributed differently, detecting the malicious gradients is challenging because (1) the gradient is high-dimensional and each dimension has its unique distribution, and (2) the benign gradients and the attacked gradients are always mixed (two-sample test methods cannot be applied directly).
no code implementations • 29 Sep 2021 • Jiaheng Wei, Hangyu Liu, Tongliang Liu, Gang Niu, Yang Liu
It was shown that LS serves as a regularizer for training data with hard labels and therefore improves the generalization of the model.
Ranked #14 on Learning with noisy labels on CIFAR-10N-Worst
no code implementations • 29 Sep 2021 • Jianing Zhu, Jiangchao Yao, Tongliang Liu, Kunyang Jia, Jingren Zhou, Bo Han, Hongxia Yang
Federated Adversarial Training (FAT) helps us address data privacy and governance issues while maintaining model robustness to adversarial attacks.
no code implementations • 29 Sep 2021 • Xin Jin, Tianyu He, Xu Shen, Songhua Wu, Tongliang Liu, Xinchao Wang, Jianqiang Huang, Zhibo Chen, Xian-Sheng Hua
In this paper, we propose an embarrassingly simple yet highly effective adversarial domain adaptation (ADA) method for effectively training models for alignment.
no code implementations • 29 Sep 2021 • Xiaobo Xia, Bo Han, Yibing Zhan, Jun Yu, Mingming Gong, Chen Gong, Tongliang Liu
The sample selection approach is popular in learning with noisy labels, which tends to select potentially clean data out of noisy data for robust training.
no code implementations • 29 Sep 2021 • Yu Yao, Xuefeng Li, Tongliang Liu, Alan Blair, Mingming Gong, Bo Han, Gang Niu, Masashi Sugiyama
Existing methods for learning with noisy labels can be generally divided into two categories: (1) sample selection and label correction based on the memorization effect of neural networks; (2) loss correction with the transition matrix.
3 code implementations • ICLR 2022 • Fei Zhang, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Tao Qin, Masashi Sugiyama
As the first contribution, we empirically show that the class activation map (CAM), a simple technique for discriminating the learning patterns of each class in images, is surprisingly better at making accurate predictions than the model itself on selecting the true label from candidate labels.
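The class activation map (CAM) technique named above can be sketched in a few lines; this is the standard CAM computation, not the paper's full selection method:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Standard CAM: weight the last conv feature maps (C, H, W) by the
    classifier weights (num_classes, C) of one class, sum over channels,
    and normalize to [0, 1]."""
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=([0], [0]))
    cam -= cam.min()
    return cam / (cam.max() + 1e-12)
```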
no code implementations • 29 Sep 2021 • Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu
Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defense against adversarial attacks.
no code implementations • 29 Sep 2021 • Xuefeng Du, Tian Bian, Yu Rong, Bo Han, Tongliang Liu, Tingyang Xu, Wenbing Huang, Junzhou Huang
Semi-supervised node classification on graphs is a fundamental problem in graph mining that uses a small set of labeled nodes and many unlabeled nodes for training, so that its performance is quite sensitive to the quality of the node labels.
1 code implementation • 21 Sep 2021 • Dawei Zhou, Nannan Wang, Bo Han, Tongliang Liu
Deep neural networks have been demonstrated to be vulnerable to adversarial noise, promoting the development of defense against adversarial attacks.
2 code implementations • NeurIPS 2021 • Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, Kun Zhang
In particular, we show that properly modeling the instances will contribute to the identifiability of the label noise transition matrix and thus lead to a better classifier.
no code implementations • CVPR 2022 • Zhaoqing Wang, Qiang Li, Guoxin Zhang, Pengfei Wan, Wen Zheng, Nannan Wang, Mingming Gong, Tongliang Liu
By considering the spatial correspondence, dense self-supervised representation learning has achieved superior performance on various dense prediction tasks.
no code implementations • 10 Jul 2021 • Xiaobo Xia, Shuo Shan, Mingming Gong, Nannan Wang, Fei Gao, Haikun Wei, Tongliang Liu
Estimating the kernel mean in a reproducing kernel Hilbert space is a critical component in many kernel learning algorithms.
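For context, the standard empirical estimator of the kernel mean, evaluated at the sample points, looks as follows; an RBF kernel is assumed, and the paper concerns improving on such baseline estimators:

```python
import numpy as np

def empirical_kernel_mean(X, sigma=1.0):
    """Empirical kernel mean embedding with an RBF kernel, evaluated at the
    sample points themselves: mu_hat(x_i) = (1/n) * sum_j k(x_i, x_j),
    i.e., the row means of the Gram matrix."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K.mean(axis=1)
```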
1 code implementation • CVPR 2021 • Zhen Huang, Xu Shen, Jun Xing, Tongliang Liu, Xinmei Tian, Houqiang Li, Bing Deng, Jianqiang Huang, Xian-Sheng Hua
The inheritance part is learned with a similarity loss to transfer the existing learned knowledge from the teacher model to the student model, while the exploration part is encouraged to learn representations different from the inherited ones with a dis-similarity loss.
1 code implementation • NeurIPS 2021 • Yingbin Bai, Erkun Yang, Bo Han, Yanhua Yang, Jiatong Li, Yinian Mao, Gang Niu, Tongliang Liu
Instead of the early stopping, which trains a whole DNN all at once, we initially train former DNN layers by optimizing the DNN with a relatively large number of epochs.
Ranked #8 on Learning with noisy labels on CIFAR-10N-Aggregate
1 code implementation • NeurIPS 2021 • Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, Masashi Sugiyama
Reweighting adversarial data during training has been recently shown to improve adversarial robustness, where data closer to the current decision boundaries are regarded as more critical and given larger weights.
1 code implementation • 14 Jun 2021 • Xuefeng Du, Tian Bian, Yu Rong, Bo Han, Tongliang Liu, Tingyang Xu, Wenbing Huang, Yixuan Li, Junzhou Huang
This paper bridges the gap by proposing a pairwise framework for noisy node classification on graphs, which relies on the PI as a primary learning proxy in addition to the pointwise learning from the noisy node class labels.
1 code implementation • 11 Jun 2021 • Chenhong Zhou, Feng Liu, Chen Gong, Rongfei Zeng, Tongliang Liu, William K. Cheung, Bo Han
However, in an open world, the unlabeled test images probably contain unknown categories and have different distributions from the labeled images.
1 code implementation • NeurIPS 2021 • Haoang Chi, Feng Liu, Wenjing Yang, Long Lan, Tongliang Liu, Bo Han, William K. Cheung, James T. Kwok
To this end, we propose a target orientated hypothesis adaptation network (TOHAN) to solve the FHA problem, where we generate highly-compatible unlabeled data (i.e., an intermediate domain) to help train a target-domain classifier.
1 code implementation • ICLR 2022 • Yonggang Zhang, Mingming Gong, Tongliang Liu, Gang Niu, Xinmei Tian, Bo Han, Bernhard Schölkopf, Kun Zhang
The adversarial vulnerability of deep neural networks has attracted significant attention in machine learning.
no code implementations • 10 Jun 2021 • Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Jun Yu, Xiaoyu Wang, Tongliang Liu
However, pre-processing methods may suffer from the robustness degradation effect, in which the defense reduces rather than improves the adversarial robustness of a target model in a white-box setting.
no code implementations • 9 Jun 2021 • Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Chunlei Peng, Xinbo Gao
However, given the continuously evolving attacks, models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.
2 code implementations • ICLR 2022 • Jianing Zhu, Jiangchao Yao, Bo Han, Jingfeng Zhang, Tongliang Liu, Gang Niu, Jingren Zhou, Jianliang Xu, Hongxia Yang
However, when considering adversarial robustness, teachers may become unreliable and adversarial distillation may not work: teachers are pretrained on their own adversarial data, and it is too demanding to require that teachers are also good at every adversarial data queried by students.
1 code implementation • 8 Jun 2021 • Jiaheng Wei, Hangyu Liu, Tongliang Liu, Gang Niu, Masashi Sugiyama, Yang Liu
We provide understandings for the properties of LS and NLS when learning with noisy labels.
Ranked #8 on Learning with noisy labels on CIFAR-10N-Random3
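A minimal sketch of the label smoothing (LS) targets discussed in the entry above; passing a negative rate gives negative label smoothing (NLS):

```python
import numpy as np

def smoothed_targets(label, num_classes, rate):
    """Label smoothing targets: (1 - rate) on the annotated class plus
    rate / num_classes spread over all classes. A negative rate yields
    negative label smoothing (NLS), which sharpens instead of smoothing."""
    t = np.full(num_classes, rate / num_classes)
    t[label] += 1.0 - rate
    return t
```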
no code implementations • NeurIPS 2021 • Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama
In this way, we also give large-loss but less-selected data a try; then, we can better distinguish between cases (a) and (b) by seeing whether the losses effectively decrease with the uncertainty after the try.
Ranked #26 on Image Classification on mini WebVision 1.0
no code implementations • 1 Jun 2021 • Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Jun Yu, Gang Niu, Masashi Sugiyama
Lots of approaches, e.g., loss correction and label correction, cannot handle such open-set noisy labels well, since they need training data and test data to share the same label space, which does not hold for learning with open-set noisy labels.
1 code implementation • 31 May 2021 • Jingfeng Zhang, Xilie Xu, Bo Han, Tongliang Liu, Gang Niu, Lizhen Cui, Masashi Sugiyama
First, we thoroughly investigate noisy label (NL) injection into AT's inner maximization and outer minimization, respectively, and obtain observations on when NL injection benefits AT.
no code implementations • 27 May 2021 • Shuo Yang, Erkun Yang, Bo Han, Yang Liu, Min Xu, Gang Niu, Tongliang Liu
Motivated by the fact that classifiers mostly output Bayes optimal labels for prediction, in this paper we study how to directly model the transition from Bayes optimal labels to noisy labels (i.e., the Bayes-label transition matrix (BLTM)) and learn a classifier to predict Bayes optimal labels.
no code implementations • 22 Apr 2021 • Lie Ju, Xin Wang, Lin Wang, Tongliang Liu, Xin Zhao, Tom Drummond, Dwarikanath Mahapatra, ZongYuan Ge
For example, there are an estimated more than 40 different kinds of retinal diseases with variable morbidity; however, more than 30 of these conditions are very rare in global patient cohorts, which results in a typical long-tailed learning problem for deep learning-based screening models.
no code implementations • ICCV 2021 • Dawei Zhou, Nannan Wang, Chunlei Peng, Xinbo Gao, Xiaoyu Wang, Jun Yu, Tongliang Liu
Then, we train a denoising model to minimize the distances between the adversarial examples and the natural examples in the class activation feature space.
no code implementations • 17 Mar 2021 • Qizhou Wang, Jiangchao Yao, Chen Gong, Tongliang Liu, Mingming Gong, Hongxia Yang, Bo Han
Most of the previous approaches in this area focus on the pairwise relation (causal or correlational relationship) with noise, such as learning with noisy labels.
no code implementations • 1 Mar 2021 • Shijun Cai, Seok-Hee Hong, Jialiang Shen, Tongliang Liu
In this paper, we present the first machine learning approach for predicting human preference for graph layouts.
no code implementations • 28 Feb 2021 • Lie Ju, Xin Wang, Lin Wang, Dwarikanath Mahapatra, Xin Zhao, Mehrtash Harandi, Tom Drummond, Tongliang Liu, ZongYuan Ge
In this paper, we systematically discuss and define the two common types of label noise in medical images: disagreement label noise from inconsistent expert opinions and single-target label noise from wrong diagnosis records.
1 code implementation • ICLR 2022 • Haoang Chi, Feng Liu, Bo Han, Wenjing Yang, Long Lan, Tongliang Liu, Gang Niu, Mingyuan Zhou, Masashi Sugiyama
In this paper, we demystify assumptions behind NCD and find that high-level semantic features should be shared among the seen and unseen classes.
no code implementations • 6 Feb 2021 • Jianing Zhu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Hongxia Yang, Mohan Kankanhalli, Masashi Sugiyama
A recent adversarial training (AT) study showed that the number of projected gradient descent (PGD) steps required to successfully attack a point (i.e., find an adversarial example in its proximity) is an effective measure of the robustness of this point.
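A minimal sketch of that steps-to-attack robustness measure; `loss_grad` and `predict` are hypothetical callables standing in for a trained model:

```python
import numpy as np

def pgd_steps_to_attack(x, y, loss_grad, predict, eps=0.03, alpha=0.007,
                        max_steps=20):
    """Count the PGD steps needed to flip the prediction on x.
    loss_grad(x, y) returns the gradient w.r.t. the input; predict(x)
    returns a label. Returns max_steps + 1 if x resists the full budget."""
    x_adv = x.copy()
    for step in range(1, max_steps + 1):
        x_adv = x_adv + alpha * np.sign(loss_grad(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project to the L_inf ball
        if predict(x_adv) != y:
            return step                            # attacked after `step` steps
    return max_steps + 1                           # robust within the budget
```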
1 code implementation • 4 Feb 2021 • Xuefeng Li, Tongliang Liu, Bo Han, Gang Niu, Masashi Sugiyama
In label-noise learning, the transition matrix plays a key role in building statistically consistent classifiers.
Ranked #14 on Learning with noisy labels on CIFAR-100N
1 code implementation • 3 Feb 2021 • Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu, Junzhou Huang, Masashi Sugiyama
In adversarial training (AT), the main focus has been the objective and optimizer while the model has been less studied, so that the models being used are still those classic ones in standard training (ST).
1 code implementation • 14 Jan 2021 • Qizhou Wang, Bo Han, Tongliang Liu, Gang Niu, Jian Yang, Chen Gong
The drastic increase of data quantity often brings the severe decrease of data quality, such as incorrect label annotations, which poses a great challenge for robustly training Deep Neural Networks (DNNs).
no code implementations • 1 Jan 2021 • Chenwei Ding, Biwei Huang, Mingming Gong, Kun Zhang, Tongliang Liu, DaCheng Tao
Most algorithms in causal discovery consider a single domain with a fixed distribution.
no code implementations • 1 Jan 2021 • Dawei Zhou, Tongliang Liu, Bo Han, Nannan Wang, Xinbo Gao
Motivated by this observation, we propose a defense framework, ADD-Defense, which extracts the invariant information called perturbation-invariant representation (PIR) to defend against widespread adversarial examples.
no code implementations • ICLR 2021 • Xiaobo Xia, Tongliang Liu, Bo Han, Chen Gong, Nannan Wang, ZongYuan Ge, Yi Chang
The early stopping method can therefore be exploited for learning with noisy labels.
Ranked #32 on Image Classification on mini WebVision 1.0 (ImageNet Top-1 Accuracy metric)
no code implementations • ICCV 2021 • Yingbin Bai, Tongliang Liu
To extract hard confident examples that contain non-simple patterns and are entangled with the inaccurately labeled examples, we borrow the idea of momentum from physics.
no code implementations • 1 Jan 2021 • Bingbing Song, Wei He, Renyang Liu, Shui Yu, Ruxin Wang, Mingming Gong, Tongliang Liu, Wei Zhou
Several state-of-the-art methods start by improving the inter-class separability of training samples through modified loss functions; we argue that the adversarial samples are thereby ignored, and thus only limited robustness to adversarial attacks results.
1 code implementation • CVPR 2021 • Zhaowei Zhu, Tongliang Liu, Yang Liu
We first provide evidence that heterogeneous instance-dependent label noise effectively down-weights the examples with higher noise rates in a non-uniform way and thus causes imbalances, rendering the strategy of directly applying methods for class-dependent label noise questionable.
no code implementations • 10 Dec 2020 • Guoqing Bao, Huai Chen, Tongliang Liu, Guanzhong Gong, Yong Yin, Lisheng Wang, Xiuying Wang
In this paper, we present an end-to-end multitask learning (MTL) framework (COVID-MTL) that is capable of automated and simultaneous detection (against both radiology and NAT) and severity assessment of COVID-19.
no code implementations • 2 Dec 2020 • Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Jiankang Deng, Jiatong Li, Yinian Mao
The traditional transition matrix is limited to modeling closed-set label noise, where noisy training data have true class labels within the noisy label set.
1 code implementation • NeurIPS 2020 • Shanshan Zhao, Mingming Gong, Tongliang Liu, Huan Fu, DaCheng Tao
To arrive at this, some methods introduce a domain discriminator through adversarial learning to match the feature distributions in multiple source domains.
Ranked #40 on Domain Generalization on PACS
1 code implementation • 9 Nov 2020 • Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W. Tsang, James T. Kwok, Masashi Sugiyama
Classical machine learning implicitly assumes that labels of the training data are sampled from a clean distribution, which can be too restrictive for real-world scenarios.
2 code implementations • 22 Oct 2020 • Ruize Gao, Feng Liu, Jingfeng Zhang, Bo Han, Tongliang Liu, Gang Niu, Masashi Sugiyama
However, it has been shown that the MMD test is unaware of adversarial attacks -- the MMD test failed to detect the discrepancy between natural and adversarial data.
2 code implementations • 13 Oct 2020 • He-Liang Huang, Yuxuan Du, Ming Gong, YouWei Zhao, Yulin Wu, Chaoyue Wang, Shaowei Li, Futian Liang, Jin Lin, Yu Xu, Rui Yang, Tongliang Liu, Min-Hsiu Hsieh, Hui Deng, Hao Rong, Cheng-Zhi Peng, Chao-Yang Lu, Yu-Ao Chen, DaCheng Tao, Xiaobo Zhu, Jian-Wei Pan
For the first time, we experimentally achieve the learning and generation of real-world hand-written digit images on a superconducting quantum processor.
no code implementations • 28 Sep 2020 • Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu
It is worthwhile to perform the transformation: We prove that the noise rate for the noisy similarity labels is lower than that of the noisy class labels, because similarity labels themselves are robust to noise.
no code implementations • 23 Jul 2020 • Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, Shan You, DaCheng Tao
The eligibility of various advanced quantum algorithms will be questioned if they cannot guarantee privacy.
no code implementations • 3 Jul 2020 • Xinpeng Ding, Nannan Wang, Xinbo Gao, Jie Li, Xiaoyu Wang, Tongliang Liu
Specifically, we devise a partial segment loss regarded as a loss sampling to learn integral action parts from labeled segments.
Weakly-Supervised Temporal Action Localization
1 code implementation • NeurIPS 2020 • Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Jiankang Deng, Gang Niu, Masashi Sugiyama
By this intermediate class, the original transition matrix can then be factorized into the product of two easy-to-estimate transition matrices.
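A toy numeric illustration of the factorization idea: the clean-to-noisy transition matrix is written as a product of two easier-to-estimate matrices through an intermediate class. The matrices below are made-up examples, not estimates from any dataset:

```python
import numpy as np

# T is factorized as A @ B: A maps clean -> intermediate classes,
# B maps intermediate -> noisy classes; each is row-stochastic.
A = np.array([[0.9, 0.1], [0.2, 0.8]])    # hypothetical clean -> intermediate
B = np.array([[0.95, 0.05], [0.1, 0.9]])  # hypothetical intermediate -> noisy
T = A @ B                                 # implied clean -> noisy transition
assert np.allclose(T.sum(axis=1), 1.0)    # rows remain valid distributions
```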
1 code implementation • NeurIPS 2020 • Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, DaCheng Tao, Masashi Sugiyama
Learning with instance-dependent label noise is challenging because it is hard to model such real-world noise.
no code implementations • 14 Jun 2020 • Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu
To give an affirmative answer, in this paper, we propose a framework called Class2Simi: it transforms data points with noisy class labels to data pairs with noisy similarity labels, where a similarity label denotes whether a pair shares the class label or not.
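A minimal sketch of the Class2Simi transformation described above, turning (noisy) class labels on pairs into (noisy) similarity labels:

```python
import numpy as np

def class_to_similarity(labels, pairs):
    """Class2Simi-style transformation: a pair gets similarity label 1 if
    its two points share a class label, and 0 otherwise. Noise in the
    class labels induces (provably milder) noise in the similarity labels."""
    return np.array([int(labels[i] == labels[j]) for i, j in pairs])

# Usage on hypothetical noisy labels:
labels = np.array([0, 1, 0, 2])
pairs = [(0, 1), (0, 2), (1, 3)]
print(class_to_similarity(labels, pairs))   # [0 1 0]
```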
no code implementations • 7 Apr 2020 • Maoying Qiao, Tongliang Liu, Jun Yu, Wei Bian, DaCheng Tao
To alleviate this problem, in this paper, a repulsiveness-encouraging prior is introduced among mixing components and a diversified EPCA mixture (DEPCAM) model is developed in the Bayesian framework.
no code implementations • 20 Mar 2020 • Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, DaCheng Tao, Nana Liu
This robustness property is intimately connected with an important security concept called differential privacy which can be extended to quantum differential privacy.
no code implementations • 16 Feb 2020 • Songhua Wu, Xiaobo Xia, Tongliang Liu, Bo Han, Mingming Gong, Nannan Wang, Haifeng Liu, Gang Niu
We further estimate the transition matrix from only noisy data and build a novel learning system to learn a classifier which can assign noise-free class labels for instances.
no code implementations • ICLR 2022 • Yu Yao, Tongliang Liu, Bo Han, Mingming Gong, Gang Niu, Masashi Sugiyama, DaCheng Tao
Hitherto, the distributional-assumption-free CPE methods rely on a critical assumption that the support of the positive data distribution cannot be contained in the support of the negative data distribution.
no code implementations • 11 Jan 2020 • Antonin Berthon, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama
We find that, with the help of confidence scores, the transition distribution of each instance can be approximately estimated.
no code implementations • 15 Dec 2019 • Zhe Chen, Wanli Ouyang, Tongliang Liu, DaCheng Tao
Alternatively, to access much more natural-looking pedestrians, we propose to augment pedestrian detection datasets by transforming real pedestrians from the same dataset into different shapes.
no code implementations • NeurIPS 2019 • Fengxiang He, Tongliang Liu, DaCheng Tao
Specifically, we prove a PAC-Bayes generalization bound for neural networks trained by SGD, which has a positive correlation with the ratio of batch size to learning rate.
1 code implementation • 28 Nov 2019 • Xu Shen, Xinmei Tian, Tongliang Liu, Fang Xu, DaCheng Tao
On the one hand, continuous dropout is considerably closer to the activation characteristics of neurons in the human brain than traditional binary dropout.
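A minimal sketch of continuous dropout with a uniform mask, in contrast to a binary Bernoulli mask; the rescaling choice is an assumption made to keep activations unbiased in expectation:

```python
import numpy as np

def continuous_dropout(activations, low=0.0, high=1.0, rng=np.random):
    """Multiply each activation by an independent continuous random mask
    (uniform on [low, high]) instead of a 0/1 Bernoulli mask, and rescale
    by the mask mean so the expected activation is unchanged."""
    mask = rng.uniform(low, high, size=activations.shape)
    return activations * mask / ((low + high) / 2.0)
```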
no code implementations • 20 Nov 2019 • Jingfeng Zhang, Bo Han, Gang Niu, Tongliang Liu, Masashi Sugiyama
Deep neural networks (DNNs) are incredibly brittle due to adversarial examples.
1 code implementation • 31 Jul 2019 • Yihang Lou, Ling-Yu Duan, Yong Luo, Ziqian Chen, Tongliang Liu, Shiqi Wang, Wen Gao
The digital retina in smart cities is designed to select what the City Eye tells the City Brain, and to convert the acquired visual data from front-end visual sensors into features in an intelligent sensing manner.
no code implementations • 16 Jul 2019 • Yuxuan Du, Min-Hsiu Hsieh, Tongliang Liu, DaCheng Tao
In this paper, we propose a sublinear classical algorithm to tackle general minimum conical hull problems when the input is stored in a sample-based low-overhead data structure.
no code implementations • 2 Jun 2019 • Naiyang Guan, Tongliang Liu, Yangmuzi Zhang, DaCheng Tao, Larry S. Davis
Non-negative matrix factorization (NMF) minimizes the Euclidean distance between the data matrix and its low rank approximation, and it fails when applied to corrupted data because the loss function is sensitive to outliers.
1 code implementation • NeurIPS 2019 • Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, Masashi Sugiyama
Existing theories have shown that the transition matrix can be learned by exploiting anchor points (i.e., data points that belong to a specific class almost surely).
Ranked #17 on Learning with noisy labels on CIFAR-10N-Random3
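The anchor-point idea above admits a compact estimator: for each class, pick the sample with the highest estimated noisy posterior as its approximate anchor and read off that row of the transition matrix. A minimal sketch:

```python
import numpy as np

def estimate_T_from_anchors(noisy_posteriors):
    """Anchor-point estimator: if x is an anchor point of class i
    (P(Y=i|x) ~ 1), then P(noisy Y = j | x) equals T[i, j].
    noisy_posteriors: (n_samples, n_classes) estimated noisy posteriors."""
    n_classes = noisy_posteriors.shape[1]
    T = np.zeros((n_classes, n_classes))
    for i in range(n_classes):
        anchor = np.argmax(noisy_posteriors[:, i])  # approximate anchor of class i
        T[i] = noisy_posteriors[anchor]
    return T / T.sum(axis=1, keepdims=True)         # row-normalize
```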
1 code implementation • 15 May 2019 • Kui Jia, Shuai Li, Yuxin Wen, Tongliang Liu, DaCheng Tao
To this end, we first prove that DNNs are locally isometric on data distributions of practical interest; by using a new covering of the sample space and introducing the local isometry property of DNNs into the generalization analysis, we establish a new generalization error bound that is both scale- and range-sensitive to the singular value spectrum of each of the network's weight matrices.