no code implementations • 29 Dec 2024 • Zhifang Zhang, Shuo He, Bingquan Shen, Lei Feng
Multimodal contrastive learning models (e.g., CLIP) can learn high-quality representations from large-scale image-text datasets, yet they exhibit significant vulnerabilities to backdoor attacks, raising serious safety concerns.
1 code implementation • 14 Dec 2024 • Zongqian Wu, Baoduo Xu, Ruochen Cui, Mengmeng Zhan, Xiaofeng Zhu, Lei Feng
Chain-of-thought (CoT) reasoning has emerged as an effective approach for activating latent capabilities in large language models (LLMs).
no code implementations • 13 Nov 2024 • Penghui Yang, Chen-Chen Zong, Sheng-Jun Huang, Lei Feng, Bo An
Drawing from the theoretical analysis, we propose a novel method called dual-head knowledge distillation, which partitions the linear classifier into two classification heads responsible for different losses, thereby preserving the beneficial effects of both losses on the backbone while eliminating adverse influences on the classification head.
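The partition described above can be illustrated with a minimal sketch (all features, weights, and shapes here are made-up for illustration, not the paper's architecture): a shared backbone feature feeds two separate linear heads, so each loss updates its own head while both still shape the backbone.

```python
def linear(x, w):
    # x: feature vector, w: one row of weights per class
    return [sum(xi * wi for xi, wi in zip(x, row)) for row in w]

feature = [0.5, -1.0, 2.0]                     # backbone output (assumed)
w_ce = [[0.1, 0.2, 0.3], [0.4, -0.2, 0.1]]     # head trained with cross-entropy
w_kd = [[0.0, 0.5, -0.1], [0.2, 0.1, 0.3]]     # head trained with the KD loss

logits_ce = linear(feature, w_ce)
logits_kd = linear(feature, w_kd)
# Gradients from the KD loss flow through w_kd and the backbone only,
# never through w_ce -- this is the "partition" of the classifier.
```

At inference, one head (or a combination) can be used for prediction; the point is that the two losses no longer compete inside a single classification head.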
no code implementations • 4 Nov 2024 • Jincheng Huang, Yujie Mo, Xiaoshuang Shi, Lei Feng, Xiaofeng Zhu
In the first stage, ELU-GCN conducts graph learning to learn a new graph structure (i.e., ELU-graph), which enables GCNs to effectively utilize label information.
1 code implementation • 31 Oct 2024 • Chengyi Cai, Zesheng Ye, Lei Feng, Jianzhong Qi, Feng Liu
When adapting the output interface, label mapping methods transform the pretrained labels to downstream labels by establishing a gradient-free one-to-one correspondence between the two sets of labels.
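One common gradient-free recipe consistent with this description is greedy frequency matching (a hypothetical sketch: the `freq` table and the greedy rule are illustrative, not the paper's exact procedure).

```python
def greedy_label_mapping(freq):
    # freq[d][p] = how often pretrained label p is predicted on
    # samples of downstream class d; build a one-to-one mapping
    # by greedily taking the highest-frequency unused pair.
    pairs = sorted(((freq[d][p], d, p)
                    for d in range(len(freq))
                    for p in range(len(freq[0]))), reverse=True)
    mapping, used_d, used_p = {}, set(), set()
    for _, d, p in pairs:
        if d not in used_d and p not in used_p:
            mapping[d] = p
            used_d.add(d)
            used_p.add(p)
    return mapping

freq = [[5, 1, 0],
        [2, 7, 1]]
print(greedy_label_mapping(freq))  # {1: 1, 0: 0}
```

No gradients are involved: the mapping is read off from prediction statistics alone.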
no code implementations • 29 Oct 2024 • Xudong Wang, Hongyang Du, Dusit Niyato, Lijie Zhou, Lei Feng, Zhixiang Yang, Fanqin Zhou, Wenjing Li
Then, we propose a framework based on generative diffusion models (GDMs) that iteratively denoises toward reward maximization to generate a matching strategy that meets specific requirements.
no code implementations • 10 Oct 2024 • Ao Ke, Wenlong Chen, Chuanwen Feng, Yukun Cao, Xike Xie, S. Kevin Zhou, Lei Feng
In this paper, inspired by the inherent distribution shift between ID and OOD data, we propose a novel method that leverages optimal transport to measure the distribution discrepancy between test inputs and ID prototypes.
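The intuition can be illustrated with a toy one-dimensional case (not the paper's exact score): with uniform weights and equal-size 1-D point sets, the 1-Wasserstein distance reduces to matching sorted points, and a larger distance to the ID prototypes suggests OOD.

```python
def w1_distance(xs, ys):
    # 1-Wasserstein distance between two equal-size 1-D point sets
    # with uniform weights: match sorted points and average the gaps.
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

id_prototypes = [0.0, 1.0, 2.0]
test_id = [0.1, 0.9, 2.2]
test_ood = [5.0, 6.0, 7.0]
print(w1_distance(test_id, id_prototypes))   # small -> likely ID
print(w1_distance(test_ood, id_prototypes))  # large -> likely OOD
```

In higher dimensions the same idea requires a general optimal-transport solver (e.g., Sinkhorn iterations) rather than this sorting shortcut.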
no code implementations • 18 Sep 2024 • Lei Feng, Jingxing Liao, Jingna Yang
Integrating artificial intelligence (AI) techniques such as machine learning and deep learning into freeform optics design has significantly enhanced design efficiency, expanded the design space, and led to innovative solutions.
no code implementations • 21 Aug 2024 • Minghao Liu, Zonglin Di, Jiaheng Wei, Zhongruo Wang, Hengxiang Zhang, Ruixuan Xiao, Haoyu Wang, Jinlong Pang, Hao Chen, Ankit Shah, Hongxin Wei, Xinlei He, Zhaowei Zhao, Haobo Wang, Lei Feng, Jindong Wang, James Davis, Yang Liu
Furthermore, we design three benchmark datasets focused on label noise detection, label noise learning, and class-imbalanced learning.
1 code implementation • 21 Jul 2024 • Beibei Li, Yiyuan Zheng, Beihong Jin, Tao Xiang, Haobo Wang, Lei Feng
Specifically, the disambiguation network is trained with a self-training PLL task to learn label confidence, while the auxiliary network is trained in a supervised learning paradigm to learn from noisy pairwise similarity labels constructed according to the learned label confidence.
1 code implementation • 20 Jun 2024 • Chuqiao Zong, Chaojie Wang, Molei Qin, Lei Feng, Xinrun Wang, Bo An
To tackle these problems, we propose a novel Memory Augmented Context-aware Reinforcement learning method On HFT.
1 code implementation • 15 Jun 2024 • Jiahan Zhang, Qi Wei, Feng Liu, Lei Feng
To alleviate this issue, we propose a Candidate Pseudolabel Learning method, termed CPL, to fine-tune VLMs with suitable candidate pseudolabels of unlabeled data in downstream tasks.
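The core idea, keeping a small set of plausible labels rather than committing to one hard pseudolabel, can be sketched as a top-k selection (a simplified illustration, not the paper's full selection rule):

```python
def candidate_pseudolabels(probs, k=2):
    # Instead of a single (possibly wrong) pseudolabel, keep the
    # top-k classes by model confidence as a candidate label set.
    return sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]

vlm_probs = [0.40, 0.35, 0.15, 0.10]   # illustrative VLM confidences
print(candidate_pseudolabels(vlm_probs, k=2))  # [0, 1]
```

Training then proceeds with partial-label-style losses over the candidate set, so a wrong top-1 prediction is not immediately fatal.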
no code implementations • 11 Jun 2024 • Zhixiang Yang, Hongyang Du, Dusit Niyato, Xudong Wang, Yu Zhou, Lei Feng, Fanqin Zhou, Wenjing Li, Xuesong Qiu
With the rapid proliferation of mobile devices and data, next-generation wireless communication systems face stringent requirements for ultra-low latency, ultra-high reliability, and massive connectivity.
1 code implementation • 5 Jun 2024 • Jinhao Li, Haopeng Li, Sarah Erfani, Lei Feng, James Bailey, Feng Liu
The local visual areas are then cross-aligned with the finer descriptions by creating a similarity matrix using the pre-trained VLM.
1 code implementation • 5 Jun 2024 • Chengyi Cai, Zesheng Ye, Lei Feng, Jianzhong Qi, Feng Liu
Since we generate different masks for individual samples, SMM is theoretically shown to reduce approximation error for the target tasks compared with existing state-of-the-art VR methods.
no code implementations • 24 May 2024 • Yuwei Niu, Shuo He, Qi Wei, Zongyu Wu, Feng Liu, Lei Feng
In this paper, we provide the first attempt at a computationally efficient backdoor detection method to defend against backdoored CLIP in the inference stage.
no code implementations • 23 May 2024 • Yuyan Zhou, Ye Li, Lei Feng, Sheng-Jun Huang
Recent studies have shown that the generalization of neural networks is correlated with the sharpness of the loss landscape, and that flat minima suggest better generalization ability than sharp minima.
1 code implementation • 8 Feb 2024 • Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei
In this work, we propose a novel method -- Convex-Concave Loss, which enables a high variance of training loss distribution by gradient descent.
1 code implementation • 6 Feb 2024 • Huajun Xi, Jianguo Huang, Kangdao Liu, Lei Feng, Hongxin Wei
To address this issue, we propose Conformal Temperature Scaling (ConfTS), a variant of temperature scaling with a novel loss function designed to enhance the efficiency of prediction sets.
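The split-conformal recipe behind such methods can be shown with a toy calibration step (all numbers and the simple score function here are illustrative, not the paper's ConfTS loss):

```python
import math

def softmax(logits, T=1.0):
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy split-conformal step: calibration scores are 1 - p(true label);
# a label enters the prediction set if its probability clears the
# calibrated threshold. The temperature T controls how large sets get.
cal_scores = [0.1, 0.2, 0.3, 0.4]                 # held-out calibration split
alpha = 0.2                                        # target miscoverage rate
k = math.ceil((len(cal_scores) + 1) * (1 - alpha)) # conformal quantile index
qhat = sorted(cal_scores)[k - 1]

probs = softmax([3.0, 1.0, 0.0], T=1.5)
pred_set = [i for i, p in enumerate(probs) if p >= 1 - qhat]
print(pred_set)  # [0]
```

Tuning T changes how peaked `probs` is and hence the average set size, which is exactly the "efficiency" knob the abstract refers to.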
1 code implementation • 2 Feb 2024 • Hao Chen, Jindong Wang, Lei Feng, Xiang Li, Yidong Wang, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj
Weakly supervised learning generally faces challenges in applicability to various scenarios with diverse weak supervision, and in scalability due to the complexity of existing algorithms, thereby hindering practical deployment.
1 code implementation • 24 Jan 2024 • Qi Wei, Lei Feng, Haobo Wang, Bo An
To address this limitation, we propose a noIse-Tolerant Expert Model (ITEM) for debiased learning in sample selection.
1 code implementation • CVPR 2024 • Lin Long, Haobo Wang, Zhijie Jiang, Lei Feng, Chang Yao, Gang Chen, Junbo Zhao
To cope with this problem, we propose a novel PU learning framework, namely Latent Group-Aware Meta Disambiguation (LaGAM), which incorporates a hierarchical contrastive learning module to extract the underlying grouping semantics within PU data and produce compact representations.
1 code implementation • CVPR 2024 • Ruixuan Xiao, Lei Feng, Kai Tang, Junbo Zhao, Yixuan Li, Gang Chen, Haobo Wang
Open-world Semi-Supervised Learning aims to classify unlabeled samples using information from labeled data, where the unlabeled samples come not only from the known labeled categories but also from previously unseen novel categories.
no code implementations • 31 Dec 2023 • Chaojie Wang, Yishi Xu, Zhong Peng, Chenxi Zhang, Bo Chen, Xinrun Wang, Lei Feng, Bo An
Large language models (LLMs) have exhibited remarkable performance on various natural language processing (NLP) tasks, especially for question answering.
no code implementations • ICCV 2023 • Suqin Yuan, Lei Feng, Tongliang Liu
Sample selection is a prevalent method in learning with noisy labels, where small-loss data are typically considered as correctly labeled data.
1 code implementation • ICCV 2023 • Penghui Yang, Ming-Kun Xie, Chen-Chen Zong, Lei Feng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang
Existing knowledge distillation methods typically work by imparting the knowledge of output logits or intermediate feature maps from the teacher network to the student network, which is very successful in multi-class single-label learning.
no code implementations • 14 Jul 2023 • Fei Zhang, Yunjie Ye, Lei Feng, Zhongwen Rao, Jieming Zhu, Marcus Kalander, Chen Gong, Jianye Hao, Bo Han
In this setting, an oracle annotates the query samples with partial labels, relieving the oracle of the demanding accurate-labeling process.
no code implementations • 2 Jul 2023 • Shuo He, Lei Feng, Guowu Yang
In this paper, we term the examples whose true label is outside the candidate label set OOC (out-of-candidate) examples, and pioneer a new PLL study to learn with OOC examples.
no code implementations • 20 Jun 2023 • Zixi Wei, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Xiaofeng Zhu, Heng Tao Shen
This motivates the study on classification from aggregate observations (CFAO), where the supervision is provided to groups of instances, instead of individual instances.
no code implementations • 18 Jun 2023 • Xin Cheng, Yuzhou Cao, Ximing Li, Bo An, Lei Feng
Third, we propose a statistically consistent limiting method for RIT to train the model by limiting the predictions to the interval.
1 code implementation • AAAI 2023 • Xin Cheng, Deng-Bao Wang, Lei Feng, Min-Ling Zhang, Bo An
Our proposed methods are theoretically grounded and can be compatible with any models, optimizers, and losses.
1 code implementation • 12 Jun 2023 • Senlin Shu, Shuo He, Haobo Wang, Hongxin Wei, Tao Xiang, Lei Feng
In this paper, we propose a generalized URE that can be equipped with arbitrary loss functions while maintaining the theoretical guarantees, given unlabeled data for LAC.
1 code implementation • CVPR 2024 • Jie Xu, Yazhou Ren, Xiaolong Wang, Lei Feng, Zheng Zhang, Gang Niu, Xiaofeng Zhu
Multi-view clustering (MVC) aims at exploring category structures among multi-view data in a self-supervised manner.
no code implementations • CVPR 2024 • Shiyu Tian, Hongxin Wei, Yiqun Wang, Lei Feng
In this paper, we propose a new method called CroSel, which leverages historical predictions from the model to identify true labels for most training examples.
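A simplified version of using historical predictions (the consistency rule and the window size t are assumptions for illustration, not CroSel's exact criterion):

```python
def select_true_label(history, t=3):
    # Trust a label only when the model has predicted the same class
    # for each of the last t epochs; otherwise defer the decision.
    recent = history[-t:]
    if len(recent) == t and len(set(recent)) == 1:
        return recent[-1]
    return None

print(select_true_label([2, 1, 1, 1]))  # 1
print(select_true_label([2, 1, 2, 1]))  # None
```

Examples whose predictions keep flip-flopping stay ambiguous and are handled separately from the confidently identified ones.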
no code implementations • CVPR 2023 • Qi Wei, Lei Feng, Haoliang Sun, Ren Wang, Chenhui Guo, Yilong Yin
To this end, we propose a novel framework called stochastic noise-tolerated supervised contrastive learning (SNSCL) that confronts label noise by encouraging distinguishable representation.
no code implementations • 23 Feb 2023 • Le Xia, Yao Sun, Dusit Niyato, Daquan Feng, Lei Feng, Muhammad Ali Imran
Semantic communication (SemCom), as an emerging paradigm focusing on meaning delivery, has recently been considered a promising solution for the inevitable crisis of scarce communication resources.
no code implementations • ICCV 2023 • Shuo He, Guowu Yang, Lei Feng
In this paper, we start with an empirical study of the dynamics of label disambiguation in both II-PLL and ID-PLL.
no code implementations • 8 Dec 2022 • Hongxin Wei, Huiping Zhuang, Renchunzi Xie, Lei Feng, Gang Niu, Bo An, Yixuan Li
In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks.
2 code implementations • Conference 2022 • Yuzhou Cao, Tianchi Cai, Lei Feng, Lihong Gu, Jinjie Gu, Bo An, Gang Niu, Masashi Sugiyama
Classification with rejection (CwR) refrains from making a prediction to avoid critical misclassification when encountering test samples that are difficult to classify.
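The classic instantiation of this idea is Chow's rule: reject whenever the top class probability falls below a confidence threshold (a minimal sketch; the threshold value is arbitrary here).

```python
def predict_with_rejection(probs, threshold=0.7):
    # Chow's rule: predict the argmax class only if its probability
    # clears the threshold; otherwise abstain.
    p_max = max(probs)
    return probs.index(p_max) if p_max >= threshold else "reject"

print(predict_with_rejection([0.9, 0.05, 0.05]))  # 0
print(predict_with_rejection([0.4, 0.35, 0.25]))  # reject
```

Lowering the threshold trades fewer rejections for more risk of critical misclassification; learned rejectors generalize this fixed rule.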
1 code implementation • 21 Sep 2022 • Haobo Wang, Mingxuan Xia, Yixuan Li, Yuren Mao, Lei Feng, Gang Chen, Junbo Zhao
Partial-label learning (PLL) is a peculiar weakly-supervised learning task where the training samples are generally associated with a set of candidate labels instead of single ground truth.
1 code implementation • 21 Jul 2022 • Ruixuan Xiao, Yiwen Dong, Haobo Wang, Lei Feng, Runze Wu, Gang Chen, Junbo Zhao
To overcome the potential side effect of the excessive clean-set selection procedure, we further devise a novel SSL framework that is able to train balanced and unbiased classifiers on the separated clean and noisy samples.
Ranked #1 on Learning with noisy labels on CIFAR-10N-Worst
3 code implementations • 17 Jun 2022 • Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An
Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.
2 code implementations • 19 May 2022 • Hongxin Wei, Renchunzi Xie, Hao Cheng, Lei Feng, Bo An, Yixuan Li
Our method is motivated by the analysis that the norm of the logit keeps increasing during training, leading to overconfident output.
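The fix that this analysis motivates can be sketched by normalizing logits before the loss (a minimal sketch; the temperature `tau` and the epsilon are assumed values):

```python
import math

def logit_norm(logits, tau=0.04):
    # Divide logits by (tau * L2 norm) so that growing logit magnitude
    # can no longer inflate the softmax confidence.
    norm = math.sqrt(sum(z * z for z in logits)) + 1e-7
    return [z / (norm * tau) for z in logits]

a = logit_norm([2.0, 1.0, 0.0])
b = logit_norm([20.0, 10.0, 0.0])  # same direction, 10x the norm
# a and b are nearly identical: confidence no longer grows with the norm.
```

Training with cross-entropy on the normalized logits then decouples the optimization from the logit magnitude that drives overconfidence.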
1 code implementation • 31 Jan 2022 • Lue Tao, Lei Feng, Hongxin Wei, JinFeng Yi, Sheng-Jun Huang, Songcan Chen
Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbation.
1 code implementation • 22 Jan 2022 • Haobo Wang, Ruixuan Xiao, Yixuan Li, Lei Feng, Gang Niu, Gang Chen, Junbo Zhao
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity.
3 code implementations • 16 Jan 2022 • Renchunzi Xie, Hongxin Wei, Lei Feng, Bo An
Although there have been a few studies on this problem, most of them only exploit unidirectional relationships from the source domain to the target domain.
no code implementations • NeurIPS 2021 • Deng-Bao Wang, Lei Feng, Min-Ling Zhang
Capturing accurate uncertainty quantification of the prediction from deep neural networks is important in many real-world decision-making applications.
no code implementations • ICLR 2022 • Changchun Li, Ximing Li, Lei Feng, Jihong Ouyang
In this paper, we propose a novel PU learning method, namely Positive and unlabeled learning with Partially Positive Mixup (P3Mix), which simultaneously benefits from data augmentation and supervision correction with a heuristic mixup technique.
1 code implementation • ICLR 2022 • Haobo Wang, Ruixuan Xiao, Sharon Li, Lei Feng, Gang Niu, Gang Chen, Junbo Zhao
Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity.
3 code implementations • ICLR 2022 • Fei Zhang, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Tao Qin, Masashi Sugiyama
As the first contribution, we empirically show that the class activation map (CAM), a simple technique for discriminating the learning patterns of each class in images, is surprisingly better at making accurate predictions than the model itself on selecting the true label from candidate labels.
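CAM itself is simple to state: the activation map for a class is the classifier-weighted sum of the final feature maps (shapes and values below are illustrative).

```python
def class_activation_map(feature_maps, class_weights):
    # feature_maps: [channels][h][w]; class_weights: one weight per channel
    # from the linear classifier row of the class of interest.
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    return [[sum(wc * feature_maps[c][i][j]
                 for c, wc in enumerate(class_weights))
             for j in range(w)] for i in range(h)]

fmaps = [[[1.0, 0.0], [0.0, 0.0]],   # channel 0 fires on the top-left
         [[0.0, 0.0], [0.0, 1.0]]]   # channel 1 fires on the bottom-right
cam = class_activation_map(fmaps, [0.9, 0.1])
print(cam)  # [[0.9, 0.0], [0.0, 0.1]]
```

High-valued regions of the map indicate where the evidence for that class lies, which is what makes CAM useful for picking the true label out of a candidate set.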
no code implementations • 29 Sep 2021 • Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An
Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.
no code implementations • 16 Jun 2021 • Yuzhou Cao, Lei Feng, Senlin Shu, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama
We show that without any assumptions on the loss functions, models, and optimizers, we can successfully learn a multi-class classifier from only data of a single class with a rigorous consistency guarantee when confidences (i.e., the class-posterior probabilities for all the classes) are available.
no code implementations • 11 Jun 2021 • Jiaqi Lv, Biao Liu, Lei Feng, Ning Xu, Miao Xu, Bo An, Gang Niu, Xin Geng, Masashi Sugiyama
Partial-label learning (PLL) utilizes instances with partial labels (PLs), where a PL includes several candidate labels but only one is the true label (TL).
no code implementations • 13 Feb 2021 • Yuzhou Cao, Lei Feng, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama
Weakly supervised learning has drawn considerable attention recently as a way to reduce the expensive time and labor required to label massive data.
2 code implementations • NeurIPS 2021 • Lue Tao, Lei Feng, JinFeng Yi, Sheng-Jun Huang, Songcan Chen
Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by slightly perturbing the features of correctly labeled training examples.
no code implementations • ICCV 2021 • Tao Liang, Guosheng Lin, Lei Feng, Yan Zhang, Fengmao Lv
To this end, both the marginal distribution and the elements with high-confidence correlations are aligned over the common space of the query and key vectors which are computed from different modalities.
1 code implementation • 29 Dec 2020 • Lue Tao, Lei Feng, JinFeng Yi, Songcan Chen
In this paper, we unveil the threat of hypocritical examples -- inputs that are originally misclassified yet perturbed by a false friend to force correct predictions.
no code implementations • 9 Dec 2020 • Hongxin Wei, Lei Feng, Rundong Wang, Bo An
Deep neural networks have been shown to easily overfit to biased training data with label noise or class imbalance.
no code implementations • 2 Dec 2020 • Zhuowei Wang, Jing Jiang, Bo Han, Lei Feng, Bo An, Gang Niu, Guodong Long
We also instantiate our framework with different combinations, which set the new state of the art on benchmark-simulated and real-world datasets with noisy labels.
no code implementations • 5 Oct 2020 • Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama
To alleviate the data requirements for training effective binary classifiers, many weakly supervised learning settings have been proposed.
no code implementations • 28 Jul 2020 • Zhuoyi Lin, Lei Feng, Xingzhi Guo, Yu Zhang, Rui Yin, Chee Keong Kwoh, Chi Xu
In this paper, we propose a novel representation learning-based model called COMET (COnvolutional diMEnsion inTeraction), which simultaneously models the high-order interaction patterns among historical interactions and embedding dimensions.
no code implementations • 28 Jul 2020 • Zhuoyi Lin, Lei Feng, Rui Yin, Chi Xu, Chee-Keong Kwoh
We argue that recommendation on global and local graphs outperforms that on a single global graph or multiple local graphs.
no code implementations • NeurIPS 2020 • Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama
Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.
no code implementations • 31 Mar 2020 • Fengmao Lv, Jianyang Zhang, Guowu Yang, Lei Feng, YuFeng Yu, Lixin Duan
Zero-Shot Learning (ZSL) learns models for recognizing new classes.
2 code implementations • CVPR 2020 • Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An
The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels.
Ranked #10 on Learning with noisy labels on CIFAR-10N-Random3
1 code implementation • ICML 2020 • Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, Masashi Sugiyama
Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.
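The progressive-identification flavor of PLL training can be sketched as follows (values are illustrative; the actual method folds this update into the training loss): the weight on each candidate label is the model's current probability renormalized over the candidate set, so the likely true label gradually dominates across epochs.

```python
def update_candidate_weights(probs, candidates):
    # Renormalize the model's class probabilities over the candidate
    # label set; non-candidates get zero weight by construction.
    total = sum(probs[c] for c in candidates)
    return {c: probs[c] / total for c in candidates}

probs = [0.5, 0.3, 0.2]   # model's current class probabilities
candidates = [0, 2]       # candidate label set for this instance
weights = update_candidate_weights(probs, candidates)
print(weights)
```

These weights then serve as soft targets for the next round of training, which is what "progressively identifying" the true label means in practice.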
no code implementations • ICML 2020 • Lei Feng, Takuo Kaneko, Bo Han, Gang Niu, Bo An, Masashi Sugiyama
In this paper, we propose a novel problem setting that allows multiple complementary labels (MCLs) for each example, along with two ways of learning with MCLs.
no code implementations • 8 Feb 2019 • Lei Feng, Bo An
We show that optimizing this convex-concave problem is equivalent to solving a set of quadratic programming (QP) problems.
no code implementations • 8 Feb 2019 • Lei Feng, Bo An, Shuo He
It is well-known that exploiting label correlations is crucially important to multi-label learning.