Search Results for author: Lei Feng

Found 69 papers, 31 papers with code

Defending Multimodal Backdoored Models by Repulsive Visual Prompt Tuning

no code implementations 29 Dec 2024 Zhifang Zhang, Shuo He, Bingquan Shen, Lei Feng

Multimodal contrastive learning models (e.g., CLIP) can learn high-quality representations from large-scale image-text datasets, yet they exhibit significant vulnerabilities to backdoor attacks, raising serious safety concerns.

backdoor defense Contrastive Learning +1

Rethinking Chain-of-Thought from the Perspective of Self-Training

1 code implementation 14 Dec 2024 Zongqian Wu, Baoduo Xu, Ruochen Cui, Mengmeng Zhan, Xiaofeng Zhu, Lei Feng

Chain-of-thought (CoT) reasoning has emerged as an effective approach for activating latent capabilities in large language models (LLMs).

Dual-Head Knowledge Distillation: Enhancing Logits Utilization with an Auxiliary Head

no code implementations 13 Nov 2024 Penghui Yang, Chen-Chen Zong, Sheng-Jun Huang, Lei Feng, Bo An

Drawing from the theoretical analysis, we propose a novel method called dual-head knowledge distillation, which partitions the linear classifier into two classification heads responsible for different losses, thereby preserving the beneficial effects of both losses on the backbone while eliminating adverse influences on the classification head.

Attribute Knowledge Distillation

ELU-GCN: Effectively Label-Utilizing Graph Convolutional Network

no code implementations 4 Nov 2024 Jincheng Huang, Yujie Mo, Xiaoshuang Shi, Lei Feng, Xiaofeng Zhu

In the first stage, ELU-GCN conducts graph learning to learn a new graph structure (i.e., the ELU-graph), which enables GCNs to effectively utilize label information.

Contrastive Learning Graph Learning +1

Bayesian-guided Label Mapping for Visual Reprogramming

1 code implementation 31 Oct 2024 Chengyi Cai, Zesheng Ye, Lei Feng, Jianzhong Qi, Feng Liu

When adapting the output interface, label mapping methods transform the pretrained labels to downstream labels by establishing a gradient-free one-to-one correspondence between the two sets of labels.
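A gradient-free one-to-one correspondence of this kind is often built from prediction statistics. The sketch below is an illustrative frequency-based baseline (all names are hypothetical), not the paper's Bayesian-guided method: each pretrained label is greedily paired with the downstream label it is most often predicted for.

```python
from collections import Counter

def frequency_label_mapping(pretrained_preds, downstream_labels):
    """Greedy one-to-one mapping from pretrained labels to downstream labels.

    pretrained_preds: predicted pretrained-label index per sample
    downstream_labels: ground-truth downstream label per sample
    Returns {pretrained_label: downstream_label}.
    """
    # Count how often each (pretrained, downstream) pair co-occurs.
    counts = Counter(zip(pretrained_preds, downstream_labels))
    # Greedily keep the most frequent pairs, enforcing one-to-one use.
    mapping, used_pre, used_down = {}, set(), set()
    for (pre, down), _ in counts.most_common():
        if pre not in used_pre and down not in used_down:
            mapping[pre] = down
            used_pre.add(pre)
            used_down.add(down)
    return mapping

preds = [3, 3, 7, 7, 7, 1]    # pretrained-label predictions
labels = [0, 0, 1, 1, 0, 1]   # downstream labels
print(frequency_label_mapping(preds, labels))  # → {3: 0, 7: 1}
```

The greedy pass is only one way to enforce the one-to-one constraint; an optimal assignment (e.g., Hungarian matching) could be substituted.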

Generative AI Enabled Matching for 6G Multiple Access

no code implementations 29 Oct 2024 Xudong Wang, Hongyang Du, Dusit Niyato, Lijie Zhou, Lei Feng, Zhixiang Yang, Fanqin Zhou, Wenjing Li

Then, we propose a framework based on generative diffusion models (GDMs) that iteratively denoises toward reward maximization to generate a matching strategy that meets specific requirements.

Prototype-based Optimal Transport for Out-of-Distribution Detection

no code implementations 10 Oct 2024 Ao Ke, Wenlong Chen, Chuanwen Feng, Yukun Cao, Xike Xie, S. Kevin Zhou, Lei Feng

In this paper, inspired by the inherent distribution shift between ID and OOD data, we propose a novel method that leverages optimal transport to measure the distribution discrepancy between test inputs and ID prototypes.

Out-of-Distribution Detection
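Measuring the discrepancy between test inputs and ID prototypes with optimal transport can be illustrated with a small entropic-OT (Sinkhorn) routine. This is a generic sketch with assumed hyperparameters (reg, iters), not the paper's exact formulation; a larger transport cost to the prototypes suggests a more OOD-like input.

```python
import math

def sinkhorn_cost(cost, reg=0.1, iters=200):
    """Entropic OT cost between uniform marginals via Sinkhorn iterations.

    cost: 2-D list, cost[i][j] = distance(test sample i, ID prototype j)
    """
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-c / reg) for c in row] for row in cost]
    u, v = [1.0] * n, [1.0] * m
    a, b = 1.0 / n, 1.0 / m  # uniform marginals over samples and prototypes
    for _ in range(iters):
        u = [a / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # Transport plan P = diag(u) K diag(v); OT cost = <P, cost>
    return sum(u[i] * K[i][j] * v[j] * cost[i][j]
               for i in range(n) for j in range(m))

# A test point close to the prototypes transports cheaply;
# a distant (OOD-like) point is expensive to transport.
near = sinkhorn_cost([[0.1, 0.9]])
far = sinkhorn_cost([[2.0, 2.5]])
```

In practice the cost matrix would come from distances in the model's feature space, and the score would be thresholded for detection.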

Artificial intelligence inspired freeform optics design: a review

no code implementations 18 Sep 2024 Lei Feng, Jingxing Liao, Jingna Yang

Integrating artificial intelligence (AI) techniques such as machine learning and deep learning into freeform optics design has significantly enhanced design efficiency, expanded the design space, and led to innovative solutions.

AsyCo: An Asymmetric Dual-task Co-training Model for Partial-label Learning

1 code implementation 21 Jul 2024 Beibei Li, Yiyuan Zheng, Beihong Jin, Tao Xiang, Haobo Wang, Lei Feng

Specifically, the disambiguation network is trained with self-training PLL task to learn label confidence, while the auxiliary network is trained in a supervised learning paradigm to learn from the noisy pairwise similarity labels that are constructed according to the learned label confidence.

Partial Label Learning Partially Labeled Datasets +1
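Constructing pairwise similarity labels from learned label confidence can be sketched as follows. This is a plausible instantiation (the pairing rule and names are assumptions, not taken from the paper): two samples are marked similar when their most confident labels agree.

```python
def pairwise_similarity_labels(confidences):
    """Binary similarity label for every sample pair, from label confidence.

    confidences: per-sample confidence vector over the label space.
    Two samples get similarity 1 when their most confident labels agree;
    these (noisy) pairwise labels can then supervise an auxiliary network.
    """
    top = [max(range(len(c)), key=c.__getitem__) for c in confidences]
    n = len(top)
    return {(i, j): int(top[i] == top[j])
            for i in range(n) for j in range(i + 1, n)}

conf = [[0.7, 0.3], [0.6, 0.4], [0.2, 0.8]]
print(pairwise_similarity_labels(conf))  # → {(0, 1): 1, (0, 2): 0, (1, 2): 0}
```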

MacroHFT: Memory Augmented Context-aware Reinforcement Learning On High Frequency Trading

1 code implementation 20 Jun 2024 Chuqiao Zong, Chaojie Wang, Molei Qin, Lei Feng, Xinrun Wang, Bo An

To tackle these problems, we propose a novel Memory Augmented Context-aware Reinforcement learning method On HFT, a.k.a. MacroHFT.

Algorithmic Trading Decision Making +5

Candidate Pseudolabel Learning: Enhancing Vision-Language Models by Prompt Tuning with Unlabeled Data

1 code implementation 15 Jun 2024 Jiahan Zhang, Qi Wei, Feng Liu, Lei Feng

To alleviate this issue, we propose a Candidate Pseudolabel Learning method, termed CPL, to fine-tune VLMs with suitable candidate pseudolabels of unlabeled data in downstream tasks.

Revolutionizing Wireless Networks with Self-Supervised Learning: A Pathway to Intelligent Communications

no code implementations 11 Jun 2024 Zhixiang Yang, Hongyang Du, Dusit Niyato, Xudong Wang, Yu Zhou, Lei Feng, Fanqin Zhou, Wenjing Li, Xuesong Qiu

With the rapid proliferation of mobile devices and data, next-generation wireless communication systems face stringent requirements for ultra-low latency, ultra-high reliability, and massive connectivity.

Self-Supervised Learning Semantic Communication

Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models

1 code implementation 5 Jun 2024 Jinhao Li, Haopeng Li, Sarah Erfani, Lei Feng, James Bailey, Feng Liu

The local visual areas are then cross-aligned with the finer descriptions by creating a similarity matrix using the pre-trained VLM.

Few-Shot Learning Language Modeling +5

Sample-specific Masks for Visual Reprogramming-based Prompting

1 code implementation 5 Jun 2024 Chengyi Cai, Zesheng Ye, Lei Feng, Jianzhong Qi, Feng Liu

Since we generate different masks for individual samples, SMM is theoretically shown to reduce approximation error for the target tasks compared with existing state-of-the-art VR methods.

BDetCLIP: Multimodal Prompting Contrastive Test-Time Backdoor Detection

no code implementations 24 May 2024 Yuwei Niu, Shuo He, Qi Wei, Zongyu Wu, Feng Liu, Lei Feng

In this paper, we provide the first attempt at a computationally efficient backdoor detection method to defend against backdoored CLIP in the inference stage.

Contrastive Learning Language Modelling +2

Improving Generalization of Deep Neural Networks by Optimum Shifting

no code implementations 23 May 2024 Yuyan Zhou, Ye Li, Lei Feng, Sheng-Jun Huang

Recent studies showed that the generalization of neural networks is correlated with the sharpness of the loss landscape, and flat minima suggests a better generalization ability than sharp minima.

Mitigating Privacy Risk in Membership Inference by Convex-Concave Loss

1 code implementation 8 Feb 2024 Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei

In this work, we propose a novel method -- Convex-Concave Loss, which enables a high variance of training loss distribution by gradient descent.

Does confidence calibration improve conformal prediction?

1 code implementation 6 Feb 2024 Huajun Xi, Jianguo Huang, Kangdao Liu, Lei Feng, Hongxin Wei

To address this issue, we propose Conformal Temperature Scaling (ConfTS), a variant of temperature scaling with a novel loss function designed to enhance the efficiency of prediction sets.

Conformal Prediction Uncertainty Quantification
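The interplay of temperature scaling and conformal prediction sets can be sketched with standard split conformal prediction over temperature-scaled softmax scores. This is an illustrative baseline (function names, alpha, and the score definition are assumptions), not ConfTS itself, which additionally trains the temperature with a dedicated loss.

```python
import math

def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    mx = max(scaled)
    exps = [math.exp(z - mx) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

def conformal_threshold(cal_logits, cal_labels, alpha=0.1, temperature=1.0):
    """Split conformal: nonconformity score = 1 - softmax prob of true class."""
    scores = sorted(1.0 - softmax(z, temperature)[y]
                    for z, y in zip(cal_logits, cal_labels))
    n = len(scores)
    # Conformal quantile index, clipped for tiny calibration sets.
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    return scores[k]

def prediction_set(logits, qhat, temperature=1.0):
    """All classes whose nonconformity score falls below the threshold."""
    probs = softmax(logits, temperature)
    return [c for c, p in enumerate(probs) if 1.0 - p <= qhat]

cal_logits = [[5.0, 0, 0], [0, 5.0, 0], [0, 0, 5.0], [5.0, 0, 0]]
cal_labels = [0, 1, 2, 0]
qhat = conformal_threshold(cal_logits, cal_labels)
print(prediction_set([6.0, 0.0, 0.0], qhat))  # → [0]
```

Raising the temperature flattens the softmax, which changes both the threshold and the resulting set sizes; the paper's point is that this interaction matters for set efficiency.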

A General Framework for Learning from Weak Supervision

1 code implementation 2 Feb 2024 Hao Chen, Jindong Wang, Lei Feng, Xiang Li, Yidong Wang, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj

Weakly supervised learning generally faces challenges in applicability to scenarios with diverse forms of weak supervision and in scalability due to the complexity of existing algorithms, which hinders practical deployment.

Weakly-supervised Learning

Debiased Sample Selection for Combating Noisy Labels

1 code implementation 24 Jan 2024 Qi Wei, Lei Feng, Haobo Wang, Bo An

To address this limitation, we propose a noIse-Tolerant Expert Model (ITEM) for debiased learning in sample selection.

Learning with noisy labels

Positive-Unlabeled Learning by Latent Group-Aware Meta Disambiguation

1 code implementation CVPR 2024 Lin Long, Haobo Wang, Zhijie Jiang, Lei Feng, Chang Yao, Gang Chen, Junbo Zhao

To cope with this problem, we propose a novel PU learning framework, namely Latent Group-Aware Meta Disambiguation (LaGAM), which incorporates a hierarchical contrastive learning module to extract the underlying grouping semantics within PU data and produce compact representations.

Binary Classification Contrastive Learning +1

Targeted Representation Alignment for Open-World Semi-Supervised Learning

1 code implementation CVPR 2024 Ruixuan Xiao, Lei Feng, Kai Tang, Junbo Zhao, Yixuan Li, Gang Chen, Haobo Wang

Open-world Semi-Supervised Learning aims to classify unlabeled samples by utilizing information from labeled data, while the unlabeled samples come not only from the labeled known categories but also from previously unseen novel categories.

Maximum Separation Open-World Semi-Supervised Learning

keqing: knowledge-based question answering is a nature chain-of-thought mentor of LLM

no code implementations 31 Dec 2023 Chaojie Wang, Yishi Xu, Zhong Peng, Chenxi Zhang, Bo Chen, Xinrun Wang, Lei Feng, Bo An

Large language models (LLMs) have exhibited remarkable performance on various natural language processing (NLP) tasks, especially for question answering.

Information Retrieval Question Answering +1

Late Stopping: Avoiding Confidently Learning from Mislabeled Examples

no code implementations ICCV 2023 Suqin Yuan, Lei Feng, Tongliang Liu

Sample selection is a prevalent method in learning with noisy labels, where small-loss data are typically considered as correctly labeled data.

Learning with noisy labels

Multi-Label Knowledge Distillation

1 code implementation ICCV 2023 Penghui Yang, Ming-Kun Xie, Chen-Chen Zong, Lei Feng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang

Existing knowledge distillation methods typically work by imparting the knowledge of output logits or intermediate feature maps from the teacher network to the student network, which is very successful in multi-class single-label learning.

Binary Classification Knowledge Distillation +1

Exploiting Counter-Examples for Active Learning with Partial labels

no code implementations 14 Jul 2023 Fei Zhang, Yunjie Ye, Lei Feng, Zhongwen Rao, Jieming Zhu, Marcus Kalander, Chen Gong, Jianye Hao, Bo Han

In this setting, an oracle annotates the query samples with partial labels, relaxing the oracle from the demanding accurate labeling process.

Active Learning

Partial-label Learning with Mixed Closed-set and Open-set Out-of-candidate Examples

no code implementations 2 Jul 2023 Shuo He, Lei Feng, Guowu Yang

In this paper, we term the examples whose true label is outside the candidate label set OOC (out-of-candidate) examples, and pioneer a new PLL study to learn with OOC examples.

Partial Label Learning

A Universal Unbiased Method for Classification from Aggregate Observations

no code implementations 20 Jun 2023 Zixi Wei, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Xiaofeng Zhu, Heng Tao Shen

This motivates the study on classification from aggregate observations (CFAO), where the supervision is provided to groups of instances, instead of individual instances.

Classification Multiple Instance Learning

Weakly Supervised Regression with Interval Targets

no code implementations 18 Jun 2023 Xin Cheng, Yuzhou Cao, Ximing Li, Bo An, Lei Feng

Third, we propose a statistically consistent limiting method for RIT to train the model by limiting the predictions to the interval.

regression
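"Limiting the predictions to the interval" can be illustrated with a loss that is zero whenever the prediction already lies inside the interval target and grows with the squared distance to the nearest endpoint otherwise. This is one plausible instantiation for illustration; the paper's statistically consistent estimator has a specific form not reproduced here.

```python
def limiting_loss(pred, lower, upper):
    """Squared distance from a prediction to its target interval [lower, upper].

    Zero when the prediction already lies inside the interval, so the
    model is only pushed until its output enters the interval.
    """
    if pred < lower:
        return (lower - pred) ** 2
    if pred > upper:
        return (pred - upper) ** 2
    return 0.0

# Predictions inside the interval incur no loss; outside, loss grows
# quadratically with the distance to the nearest endpoint.
losses = [limiting_loss(p, 1.0, 2.0) for p in (0.5, 1.5, 3.0)]
```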

Partial-Label Regression

1 code implementation AAAI 2023 Xin Cheng, Deng-Bao Wang, Lei Feng, Min-Ling Zhang, Bo An

Our proposed methods are theoretically grounded and can be compatible with any models, optimizers, and losses.

Partial Label Learning regression +1

A Generalized Unbiased Risk Estimator for Learning with Augmented Classes

1 code implementation 12 Jun 2023 Senlin Shu, Shuo He, Haobo Wang, Hongxin Wei, Tao Xiang, Lei Feng

In this paper, we propose a generalized URE that can be equipped with arbitrary loss functions while maintaining the theoretical guarantees, given unlabeled data for LAC.

Multi-class Classification

CroSel: Cross Selection of Confident Pseudo Labels for Partial-Label Learning

no code implementations CVPR 2024 Shiyu Tian, Hongxin Wei, Yiqun Wang, Lei Feng

In this paper, we propose a new method called CroSel, which leverages historical predictions from the model to identify true labels for most training examples.

Partial Label Learning Weakly-supervised Learning

Fine-Grained Classification with Noisy Labels

no code implementations CVPR 2023 Qi Wei, Lei Feng, Haoliang Sun, Ren Wang, Chenhui Guo, Yilong Yin

To this end, we propose a novel framework called stochastic noise-tolerated supervised contrastive learning (SNSCL) that confronts label noise by encouraging distinguishable representation.

Classification Contrastive Learning +1

xURLLC-Aware Service Provisioning in Vehicular Networks: A Semantic Communication Perspective

no code implementations 23 Feb 2023 Le Xia, Yao Sun, Dusit Niyato, Daquan Feng, Lei Feng, Muhammad Ali Imran

Semantic communication (SemCom), as an emerging paradigm focusing on meaning delivery, has recently been considered a promising solution for the inevitable crisis of scarce communication resources.

Knowledge Base Construction Semantic Communication

Mitigating Memorization of Noisy Labels by Clipping the Model Prediction

no code implementations 8 Dec 2022 Hongxin Wei, Huiping Zhuang, Renchunzi Xie, Lei Feng, Gang Niu, Bo An, Yixuan Li

In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks.

Memorization

Generalized Consistent Multi-Class Classification with Rejection to be Compatible with Arbitrary Losses

2 code implementations Conference 2022 Yuzhou Cao, Tianchi Cai, Lei Feng, Lihong Gu, Jinjie Gu, Bo An, Gang Niu, Masashi Sugiyama

Classification with rejection (CwR) refrains from making a prediction to avoid critical misclassification when encountering test samples that are difficult to classify.

Classification Multi-class Classification

SoLar: Sinkhorn Label Refinery for Imbalanced Partial-Label Learning

1 code implementation 21 Sep 2022 Haobo Wang, Mingxuan Xia, Yixuan Li, Yuren Mao, Lei Feng, Gang Chen, Junbo Zhao

Partial-label learning (PLL) is a peculiar weakly-supervised learning task where the training samples are generally associated with a set of candidate labels instead of single ground truth.

Partial Label Learning Weakly-supervised Learning

ProMix: Combating Label Noise via Maximizing Clean Sample Utility

1 code implementation 21 Jul 2022 Ruixuan Xiao, Yiwen Dong, Haobo Wang, Lei Feng, Runze Wu, Gang Chen, Junbo Zhao

To overcome the potential side effect of excessive clean set selection procedure, we further devise a novel SSL framework that is able to train balanced and unbiased classifiers on the separated clean and noisy samples.

Learning with noisy labels

Open-Sampling: Exploring Out-of-Distribution data for Re-balancing Long-tailed datasets

3 code implementations 17 Jun 2022 Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An

Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.

Mitigating Neural Network Overconfidence with Logit Normalization

2 code implementations 19 May 2022 Hongxin Wei, Renchunzi Xie, Hao Cheng, Lei Feng, Bo An, Yixuan Li

Our method is motivated by the analysis that the norm of the logit keeps increasing during training, leading to overconfident output.
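The core idea of constraining the logit norm can be sketched in a few lines: normalize the logit vector to unit L2 norm before the softmax, so confidence no longer grows with logit magnitude. This is a minimal illustrative sketch; the paper tunes a temperature hyperparameter, fixed to 1.0 here for clarity.

```python
import math

def logit_norm_probs(logits, temperature=1.0, eps=1e-7):
    """Normalize the logit vector to unit L2 norm before softmax.

    Fixing the logit norm removes the incentive to grow logits without
    bound during training, which is what drives overconfident outputs.
    """
    norm = math.sqrt(sum(z * z for z in logits)) + eps
    scaled = [z / (norm * temperature) for z in logits]
    mx = max(scaled)
    exps = [math.exp(z - mx) for z in scaled]
    s = sum(exps)
    return [e / s for e in exps]

# A plain softmax of [50, 0] is essentially 100% confident; after logit
# normalization the confidence is capped, and scaling the logits by a
# constant no longer changes the output distribution.
p = logit_norm_probs([50.0, 0.0])
q = logit_norm_probs([5.0, 0.0])
```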

Can Adversarial Training Be Manipulated By Non-Robust Features?

1 code implementation 31 Jan 2022 Lue Tao, Lei Feng, Hongxin Wei, JinFeng Yi, Sheng-Jun Huang, Songcan Chen

Under this threat, we show that adversarial training using a conventional defense budget ε provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by ε-bounded perturbation.

PiCO+: Contrastive Label Disambiguation for Robust Partial Label Learning

1 code implementation 22 Jan 2022 Haobo Wang, Ruixuan Xiao, Yixuan Li, Lei Feng, Gang Niu, Gang Chen, Junbo Zhao

Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity.

Contrastive Learning Partial Label Learning +2

GearNet: Stepwise Dual Learning for Weakly Supervised Domain Adaptation

3 code implementations 16 Jan 2022 Renchunzi Xie, Hongxin Wei, Lei Feng, Bo An

Although there have been a few studies on this problem, most of them only exploit unidirectional relationships from the source domain to the target domain.

Domain Adaptation

Rethinking Calibration of Deep Neural Networks: Do Not Be Afraid of Overconfidence

no code implementations NeurIPS 2021 Deng-Bao Wang, Lei Feng, Min-Ling Zhang

Capturing accurate uncertainty quantification of the prediction from deep neural networks is important in many real-world decision-making applications.

Decision Making Uncertainty Quantification

Who Is Your Right Mixup Partner in Positive and Unlabeled Learning

no code implementations ICLR 2022 Changchun Li, Ximing Li, Lei Feng, Jihong Ouyang

In this paper, we propose a novel PU learning method, namely Positive and unlabeled learning with Partially Positive Mixup (P3Mix), which simultaneously benefits from data augmentation and supervision correction with a heuristic mixup technique.

Data Augmentation

Contrastive Label Disambiguation for Partial Label Learning

1 code implementation ICLR 2022 Haobo Wang, Ruixuan Xiao, Sharon Li, Lei Feng, Gang Niu, Gang Chen, Junbo Zhao

Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity.

Contrastive Learning Partial Label Learning +2

Exploiting Class Activation Value for Partial-Label Learning

3 code implementations ICLR 2022 Fei Zhang, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Tao Qin, Masashi Sugiyama

As the first contribution, we empirically show that the class activation map (CAM), a simple technique for discriminating the learning patterns of each class in images, is surprisingly better at making accurate predictions than the model itself on selecting the true label from candidate labels.

Multi-class Classification Partial Label Learning

Open-sampling: Re-balancing Long-tailed Datasets with Out-of-Distribution Data

no code implementations 29 Sep 2021 Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An

Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.

Multi-Class Classification from Single-Class Data with Confidences

no code implementations 16 Jun 2021 Yuzhou Cao, Lei Feng, Senlin Shu, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama

We show that without any assumptions on the loss functions, models, and optimizers, we can successfully learn a multi-class classifier from only data of a single class with a rigorous consistency guarantee when confidences (i.e., the class-posterior probabilities for all the classes) are available.

Classification Multi-class Classification

On the Robustness of Average Losses for Partial-Label Learning

no code implementations 11 Jun 2021 Jiaqi Lv, Biao Liu, Lei Feng, Ning Xu, Miao Xu, Bo An, Gang Niu, Xin Geng, Masashi Sugiyama

Partial-label learning (PLL) utilizes instances with partial labels (PLs), where a PL includes several candidate labels but only one is the true label (TL).

Partial Label Learning Weakly Supervised Classification

Learning from Similarity-Confidence Data

no code implementations 13 Feb 2021 Yuzhou Cao, Lei Feng, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama

Weakly supervised learning has drawn considerable attention recently to reduce the expensive time and labor consumption of labeling massive data.

Weakly-supervised Learning

Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training

2 code implementations NeurIPS 2021 Lue Tao, Lei Feng, JinFeng Yi, Sheng-Jun Huang, Songcan Chen

Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by slightly perturbing the features of correctly labeled training examples.

Attention Is Not Enough: Mitigating the Distribution Discrepancy in Asynchronous Multimodal Sequence Fusion

no code implementations ICCV 2021 Tao Liang, Guosheng Lin, Lei Feng, Yan Zhang, Fengmao Lv

To this end, both the marginal distribution and the elements with high-confidence correlations are aligned over the common space of the query and key vectors which are computed from different modalities.

Time Series Time Series Analysis +1

With False Friends Like These, Who Can Notice Mistakes?

1 code implementation 29 Dec 2020 Lue Tao, Lei Feng, JinFeng Yi, Songcan Chen

In this paper, we unveil the threat of hypocritical examples -- inputs that are originally misclassified yet perturbed by a false friend to force correct predictions.

MetaInfoNet: Learning Task-Guided Information for Sample Reweighting

no code implementations 9 Dec 2020 Hongxin Wei, Lei Feng, Rundong Wang, Bo An

Deep neural networks have been shown to easily overfit to biased training data with label noise or class imbalance.

Meta-Learning

SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning

no code implementations 2 Dec 2020 Zhuowei Wang, Jing Jiang, Bo Han, Lei Feng, Bo An, Gang Niu, Guodong Long

We also instantiate our framework with different combinations, which set the new state of the art on benchmark-simulated and real-world datasets with noisy labels.

Learning with noisy labels

Pointwise Binary Classification with Pairwise Confidence Comparisons

no code implementations 5 Oct 2020 Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama

To alleviate the data requirement for training effective binary classifiers in binary classification, many weakly supervised learning settings have been proposed.

Binary Classification Classification +2

COMET: Convolutional Dimension Interaction for Collaborative Filtering

no code implementations 28 Jul 2020 Zhuoyi Lin, Lei Feng, Xingzhi Guo, Yu Zhang, Rui Yin, Chee Keong Kwoh, Chi Xu

In this paper, we propose a novel representation learning-based model called COMET (COnvolutional diMEnsion inTeraction), which simultaneously models the high-order interaction patterns among historical interactions and embedding dimensions.

Collaborative Filtering Representation Learning

GLIMG: Global and Local Item Graphs for Top-N Recommender Systems

no code implementations 28 Jul 2020 Zhuoyi Lin, Lei Feng, Rui Yin, Chi Xu, Chee-Keong Kwoh

We argue that recommendation on global and local graphs outperforms that on a single global graph or multiple local graphs.

Recommendation Systems

Provably Consistent Partial-Label Learning

no code implementations NeurIPS 2020 Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama

Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.

Multi-class Classification Partial Label Learning

Combating noisy labels by agreement: A joint training method with co-regularization

2 code implementations CVPR 2020 Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An

The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels.

Diversity Learning with noisy labels +1

Progressive Identification of True Labels for Partial-Label Learning

1 code implementation ICML 2020 Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, Masashi Sugiyama

Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.

Partial Label Learning Stochastic Optimization +1

Learning with Multiple Complementary Labels

no code implementations ICML 2020 Lei Feng, Takuo Kaneko, Bo Han, Gang Niu, Bo An, Masashi Sugiyama

In this paper, we propose a novel problem setting to allow MCLs for each example and two ways for learning with MCLs.

Partial Label Learning with Self-Guided Retraining

no code implementations 8 Feb 2019 Lei Feng, Bo An

We show that optimizing this convex-concave problem is equivalent to solving a set of quadratic programming (QP) problems.

Partial Label Learning

Collaboration based Multi-Label Learning

no code implementations 8 Feb 2019 Lei Feng, Bo An, Shuo He

It is well-known that exploiting label correlations is crucially important to multi-label learning.

Multi-Label Learning
