Search Results for author: Lei Feng

Found 50 papers, 20 papers with code

Mitigating Privacy Risk in Membership Inference by Convex-Concave Loss

no code implementations 8 Feb 2024 Zhenlong Liu, Lei Feng, Huiping Zhuang, Xiaofeng Cao, Hongxin Wei

In this work, we propose a novel method -- Convex-Concave Loss, which enables a high variance of training loss distribution by gradient descent.
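
For intuition only, here is a toy convex-concave composition in PyTorch: a convex base loss plus a concave transform of it. This is a hypothetical illustration of the general recipe, not the loss proposed in the paper, and `alpha` is a made-up weighting parameter.

```python
import torch
import torch.nn.functional as F

def convex_concave_loss(logits, targets, alpha=0.5):
    # Convex component: the usual per-sample cross-entropy.
    ce = F.cross_entropy(logits, targets, reduction="none")
    # Concave component: tanh is concave on [0, inf), so adding a concave
    # transform of the per-sample loss yields a convex-concave objective.
    return (ce + alpha * torch.tanh(ce)).mean()
```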

Does Confidence Calibration Help Conformal Prediction?

no code implementations 6 Feb 2024 Huajun Xi, Jianguo Huang, Lei Feng, Hongxin Wei

Conformal prediction, as an emerging uncertainty quantification technique, constructs prediction sets that are guaranteed to contain the true label with high probability.

Conformal Prediction
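
For background, the kind of prediction set the excerpt describes can be built with a few lines of split conformal prediction; the sketch below uses the simple 1 − p(true class) score and illustrative names, while the paper's question is how calibrating the probabilities first affects such sets.

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """cal_probs: (n, K) softmax outputs on a held-out calibration set;
    cal_labels: (n,) true labels; test_probs: (m, K) test softmax outputs."""
    n = len(cal_labels)
    # Nonconformity score: one minus the probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    # Prediction set: all classes whose probability clears the threshold.
    return test_probs >= 1.0 - q
```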

A General Framework for Learning from Weak Supervision

1 code implementation 2 Feb 2024 Hao Chen, Jindong Wang, Lei Feng, Xiang Li, Yidong Wang, Xing Xie, Masashi Sugiyama, Rita Singh, Bhiksha Raj

Weakly supervised learning generally faces challenges in applicability to various scenarios with diverse weak supervision, and in scalability due to the complexity of existing algorithms, thereby hindering practical deployment.

Weakly-supervised Learning

Debiased Sample Selection for Combating Noisy Labels

1 code implementation 24 Jan 2024 Qi Wei, Lei Feng, Haobo Wang, Bo An

To address this limitation, we propose a noIse-Tolerant Expert Model (ITEM) for debiased learning in sample selection.

Learning with noisy labels

keqing: knowledge-based question answering is a natural chain-of-thought mentor of LLM

no code implementations 31 Dec 2023 Chaojie Wang, Yishi Xu, Zhong Peng, Chenxi Zhang, Bo Chen, Xinrun Wang, Lei Feng, Bo An

Large language models (LLMs) have exhibited remarkable performance on various natural language processing (NLP) tasks, especially for question answering.

Information Retrieval Question Answering +1

Late Stopping: Avoiding Confidently Learning from Mislabeled Examples

no code implementations ICCV 2023 Suqin Yuan, Lei Feng, Tongliang Liu

Sample selection is a prevalent method in learning with noisy labels, where small-loss data are typically considered as correctly labeled data.

Learning with noisy labels
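
For reference, the small-loss criterion mentioned in the excerpt is commonly implemented as below; this is the generic baseline, not Late Stopping's own criterion, and `keep_ratio` is an assumed hyperparameter.

```python
import torch
import torch.nn.functional as F

def small_loss_indices(logits, labels, keep_ratio=0.7):
    # Per-sample losses; samples with small loss are treated as likely clean.
    losses = F.cross_entropy(logits, labels, reduction="none")
    k = max(1, int(keep_ratio * losses.numel()))
    # Indices of the k smallest-loss samples.
    return torch.topk(losses, k, largest=False).indices
```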

Multi-Label Knowledge Distillation

1 code implementation ICCV 2023 Penghui Yang, Ming-Kun Xie, Chen-Chen Zong, Lei Feng, Gang Niu, Masashi Sugiyama, Sheng-Jun Huang

Existing knowledge distillation methods typically work by imparting the knowledge of output logits or intermediate feature maps from the teacher network to the student network, which is very successful in multi-class single-label learning.

Binary Classification Knowledge Distillation +1
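
For context, the multi-class logit distillation that the excerpt says existing methods rely on can be sketched as follows (Hinton-style KD; the paper's multi-label decomposition is not reproduced here).

```python
import torch.nn.functional as F

def logit_distillation_loss(student_logits, teacher_logits, T=4.0):
    # Temperature-softened teacher distribution.
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    # KL divergence, rescaled by T^2 as in Hinton et al. (2015).
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
```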

Exploiting Counter-Examples for Active Learning with Partial Labels

no code implementations 14 Jul 2023 Fei Zhang, Yunjie Ye, Lei Feng, Zhongwen Rao, Jieming Zhu, Marcus Kalander, Chen Gong, Jianye Hao, Bo Han

In this setting, an oracle annotates the query samples with partial labels, freeing the oracle from the demanding accurate-labeling process.

Active Learning

Partial-label Learning with Mixed Closed-set and Open-set Out-of-candidate Examples

no code implementations 2 Jul 2023 Shuo He, Lei Feng, Guowu Yang

In this paper, we term the examples whose true label is outside the candidate label set OOC (out-of-candidate) examples, and pioneer a new PLL study to learn with OOC examples.

Partial Label Learning

A Universal Unbiased Method for Classification from Aggregate Observations

no code implementations 20 Jun 2023 Zixi Wei, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Xiaofeng Zhu, Heng Tao Shen

This motivates the study on classification from aggregate observations (CFAO), where the supervision is provided to groups of instances, instead of individual instances.

Classification Multiple Instance Learning

Weakly Supervised Regression with Interval Targets

no code implementations 18 Jun 2023 Xin Cheng, Yuzhou Cao, Ximing Li, Bo An, Lei Feng

Third, we propose a statistically consistent limiting method for regression with interval targets (RIT), which trains the model by limiting its predictions to the interval.

regression
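
A minimal sketch of the "limiting" idea, under the assumption that predictions are penalized only when they leave the given interval; this is an illustration, not the paper's statistically consistent estimator.

```python
import torch

def interval_limiting_loss(pred, lower, upper):
    # Zero loss inside [lower, upper]; squared penalty for leaving the interval.
    below = torch.clamp(lower - pred, min=0.0)
    above = torch.clamp(pred - upper, min=0.0)
    return (below ** 2 + above ** 2).mean()
```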

Partial-Label Regression

1 code implementation AAAI 2023 Xin Cheng, Deng-Bao Wang, Lei Feng, Min-Ling Zhang, Bo An

Our proposed methods are theoretically grounded and compatible with any models, optimizers, and losses.

Partial Label Learning regression +1

A Generalized Unbiased Risk Estimator for Learning with Augmented Classes

1 code implementation 12 Jun 2023 Senlin Shu, Shuo He, Haobo Wang, Hongxin Wei, Tao Xiang, Lei Feng

In this paper, we propose a generalized URE that can be equipped with arbitrary loss functions while maintaining the theoretical guarantees, given unlabeled data for LAC.

Multi-class Classification

CroSel: Cross Selection of Confident Pseudo Labels for Partial-Label Learning

no code implementations 18 Mar 2023 Shiyu Tian, Hongxin Wei, Yiqun Wang, Lei Feng

In this paper, we propose a new method called CroSel, which leverages historical predictions from the model to identify true labels for most training examples.

Partial Label Learning Weakly-supervised Learning
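
A toy criterion in that spirit: trust a training example's identified label only if the model's recent predictions for it agree. This is a hypothetical simplification; CroSel's actual cross-selection between two models is richer.

```python
import torch

def select_by_history(pred_history, threshold=0.9):
    """pred_history: (epochs, n) argmax predictions from past epochs.
    Returns a boolean mask of samples whose predictions are stable,
    plus the most recent prediction to use as the identified label."""
    last = pred_history[-1]
    agreement = (pred_history == last.unsqueeze(0)).float().mean(dim=0)
    return agreement >= threshold, last
```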

Fine-Grained Classification with Noisy Labels

no code implementations CVPR 2023 Qi Wei, Lei Feng, Haoliang Sun, Ren Wang, Chenhui Guo, Yilong Yin

To this end, we propose a novel framework called stochastic noise-tolerated supervised contrastive learning (SNSCL) that confronts label noise by encouraging distinguishable representation.

Classification Contrastive Learning +1

xURLLC-Aware Service Provisioning in Vehicular Networks: A Semantic Communication Perspective

no code implementations 23 Feb 2023 Le Xia, Yao Sun, Dusit Niyato, Daquan Feng, Lei Feng, Muhammad Ali Imran

Semantic communication (SemCom), as an emerging paradigm focusing on meaning delivery, has recently been considered a promising solution for the inevitable crisis of scarce communication resources.

Mitigating Memorization of Noisy Labels by Clipping the Model Prediction

no code implementations 8 Dec 2022 Hongxin Wei, Huiping Zhuang, Renchunzi Xie, Lei Feng, Gang Niu, Bo An, Yixuan Li

In the presence of noisy labels, designing robust loss functions is critical for securing the generalization performance of deep neural networks.

Memorization

Generalized Consistent Multi-Class Classification with Rejection to be Compatible with Arbitrary Losses

2 code implementations Conference 2022 Yuzhou Cao, Tianchi Cai, Lei Feng, Lihong Gu, Jinjie Gu, Bo An, Gang Niu, Masashi Sugiyama

Classification with rejection (CwR) refrains from making a prediction to avoid critical misclassification when encountering test samples that are difficult to classify.

Classification Multi-class Classification

SoLar: Sinkhorn Label Refinery for Imbalanced Partial-Label Learning

1 code implementation 21 Sep 2022 Haobo Wang, Mingxuan Xia, Yixuan Li, Yuren Mao, Lei Feng, Gang Chen, Junbo Zhao

Partial-label learning (PLL) is a peculiar weakly-supervised learning task where the training samples are generally associated with a set of candidate labels instead of a single ground-truth label.

Partial Label Learning Weakly-supervised Learning

ProMix: Combating Label Noise via Maximizing Clean Sample Utility

1 code implementation 21 Jul 2022 Ruixuan Xiao, Yiwen Dong, Haobo Wang, Lei Feng, Runze Wu, Gang Chen, Junbo Zhao

To overcome the potential side effects of an overly aggressive clean-set selection procedure, we further devise a novel SSL framework that is able to train balanced and unbiased classifiers on the separated clean and noisy samples.

Learning with noisy labels

Open-Sampling: Exploring Out-of-Distribution data for Re-balancing Long-tailed datasets

3 code implementations 17 Jun 2022 Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An

Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.

Mitigating Neural Network Overconfidence with Logit Normalization

2 code implementations 19 May 2022 Hongxin Wei, Renchunzi Xie, Hao Cheng, Lei Feng, Bo An, Yixuan Li

Our method is motivated by the analysis that the norm of the logit keeps increasing during training, leading to overconfident output.
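
The remedy the paper proposes, logit normalization, fixes the logit magnitude before applying cross-entropy. A minimal rendering (the temperature value shown is an assumption):

```python
import torch
import torch.nn.functional as F

def logitnorm_loss(logits, targets, tau=0.04):
    # Normalize each logit vector to unit L2 norm so its magnitude cannot
    # grow during training, then apply temperature-scaled cross-entropy.
    norms = logits.norm(p=2, dim=1, keepdim=True) + 1e-7
    return F.cross_entropy(logits / (norms * tau), targets)
```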

Can Adversarial Training Be Manipulated By Non-Robust Features?

1 code implementation 31 Jan 2022 Lue Tao, Lei Feng, Hongxin Wei, JinFeng Yi, Sheng-Jun Huang, Songcan Chen

Under this threat, we show that adversarial training using a conventional defense budget $\epsilon$ provably fails to provide test robustness in a simple statistical setting, where the non-robust features of the training data can be reinforced by $\epsilon$-bounded perturbation.

PiCO+: Contrastive Label Disambiguation for Robust Partial Label Learning

1 code implementation 22 Jan 2022 Haobo Wang, Ruixuan Xiao, Yixuan Li, Lei Feng, Gang Niu, Gang Chen, Junbo Zhao

Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity.

Contrastive Learning Partial Label Learning +2

GearNet: Stepwise Dual Learning for Weakly Supervised Domain Adaptation

3 code implementations 16 Jan 2022 Renchunzi Xie, Hongxin Wei, Lei Feng, Bo An

Although there have been a few studies on this problem, most of them only exploit unidirectional relationships from the source domain to the target domain.

Domain Adaptation

Rethinking Calibration of Deep Neural Networks: Do Not Be Afraid of Overconfidence

no code implementations NeurIPS 2021 Deng-Bao Wang, Lei Feng, Min-Ling Zhang

Capturing accurate uncertainty quantification of the prediction from deep neural networks is important in many real-world decision-making applications.

Decision Making Uncertainty Quantification

Contrastive Label Disambiguation for Partial Label Learning

1 code implementation ICLR 2022 Haobo Wang, Ruixuan Xiao, Sharon Li, Lei Feng, Gang Niu, Gang Chen, Junbo Zhao

Partial label learning (PLL) is an important problem that allows each training example to be labeled with a coarse candidate set, which well suits many real-world data annotation scenarios with label ambiguity.

Contrastive Learning Partial Label Learning +2

Open-sampling: Re-balancing Long-tailed Datasets with Out-of-Distribution Data

no code implementations 29 Sep 2021 Hongxin Wei, Lue Tao, Renchunzi Xie, Lei Feng, Bo An

Deep neural networks usually perform poorly when the training dataset suffers from extreme class imbalance.

Who Is Your Right Mixup Partner in Positive and Unlabeled Learning

no code implementations ICLR 2022 Changchun Li, Ximing Li, Lei Feng, Jihong Ouyang

In this paper, we propose a novel PU learning method, namely Positive and unlabeled learning with Partially Positive Mixup (P3Mix), which simultaneously benefits from data augmentation and supervision correction with a heuristic mixup technique.

Data Augmentation
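
For context, the vanilla mixup operation that P3Mix builds on is shown below; the paper's contribution, the heuristic for choosing partially positive mixup partners, is not reproduced.

```python
import torch

def mixup(x, y_soft, lam=0.7):
    # Convex combination of each sample with a randomly chosen partner.
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[idx]
    y_mix = lam * y_soft + (1 - lam) * y_soft[idx]
    return x_mix, y_mix
```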

Exploiting Class Activation Value for Partial-Label Learning

3 code implementations ICLR 2022 Fei Zhang, Lei Feng, Bo Han, Tongliang Liu, Gang Niu, Tao Qin, Masashi Sugiyama

As the first contribution, we empirically show that the class activation map (CAM), a simple technique for discriminating the learning patterns of each class in images, is surprisingly better at making accurate predictions than the model itself on selecting the true label from candidate labels.

Multi-class Classification Partial Label Learning
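
For background, the plain CAM computation the excerpt refers to (Zhou et al., 2016) is sketched below; how the paper turns CAM scores into candidate-label selections is not shown.

```python
import torch

def class_activation_map(features, fc_weight, cls):
    """features: (C, H, W) last-conv feature maps; fc_weight: (num_classes, C)
    weights of the final linear layer; cls: class index."""
    # Project the feature maps onto the chosen class's classifier weights.
    cam = torch.einsum("c,chw->hw", fc_weight[cls], features)
    cam = torch.relu(cam)
    return cam / (cam.max() + 1e-7)  # rescale to [0, 1]
```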

Multi-Class Classification from Single-Class Data with Confidences

no code implementations 16 Jun 2021 Yuzhou Cao, Lei Feng, Senlin Shu, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama

We show that without any assumptions on the loss functions, models, and optimizers, we can successfully learn a multi-class classifier from only data of a single class with a rigorous consistency guarantee when confidences (i.e., the class-posterior probabilities for all the classes) are available.

Classification Multi-class Classification

On the Robustness of Average Losses for Partial-Label Learning

no code implementations 11 Jun 2021 Jiaqi Lv, Biao Liu, Lei Feng, Ning Xu, Miao Xu, Bo An, Gang Niu, Xin Geng, Masashi Sugiyama

Partial-label learning (PLL) utilizes instances with partial labels (PLs), where a PL includes several candidate labels but only one is the true label (TL).

Partial Label Learning Weakly Supervised Classification

Learning from Similarity-Confidence Data

no code implementations 13 Feb 2021 Yuzhou Cao, Lei Feng, Yitian Xu, Bo An, Gang Niu, Masashi Sugiyama

Weakly supervised learning has drawn considerable attention recently to reduce the expensive time and labor consumption of labeling massive data.

Weakly-supervised Learning

Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training

2 code implementations NeurIPS 2021 Lue Tao, Lei Feng, JinFeng Yi, Sheng-Jun Huang, Songcan Chen

Delusive attacks aim to substantially deteriorate the test accuracy of the learning model by slightly perturbing the features of correctly labeled training examples.

Attention Is Not Enough: Mitigating the Distribution Discrepancy in Asynchronous Multimodal Sequence Fusion

no code implementations ICCV 2021 Tao Liang, Guosheng Lin, Lei Feng, Yan Zhang, Fengmao Lv

To this end, both the marginal distribution and the elements with high-confidence correlations are aligned over the common space of the query and key vectors which are computed from different modalities.

Time Series Time Series Analysis +1

With False Friends Like These, Who Can Notice Mistakes?

1 code implementation 29 Dec 2020 Lue Tao, Lei Feng, JinFeng Yi, Songcan Chen

In this paper, we unveil the threat of hypocritical examples -- inputs that are originally misclassified yet perturbed by a false friend to force correct predictions.

MetaInfoNet: Learning Task-Guided Information for Sample Reweighting

no code implementations 9 Dec 2020 Hongxin Wei, Lei Feng, Rundong Wang, Bo An

Deep neural networks have been shown to easily overfit to biased training data with label noise or class imbalance.

Meta-Learning

SemiNLL: A Framework of Noisy-Label Learning by Semi-Supervised Learning

no code implementations 2 Dec 2020 Zhuowei Wang, Jing Jiang, Bo Han, Lei Feng, Bo An, Gang Niu, Guodong Long

We also instantiate our framework with different combinations, which set the new state of the art on benchmark-simulated and real-world datasets with noisy labels.

Learning with noisy labels

Pointwise Binary Classification with Pairwise Confidence Comparisons

no code implementations 5 Oct 2020 Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama

To alleviate the data requirements for training effective binary classifiers, many weakly supervised learning settings have been proposed.

Binary Classification Classification +2

GLIMG: Global and Local Item Graphs for Top-N Recommender Systems

no code implementations 28 Jul 2020 Zhuoyi Lin, Lei Feng, Rui Yin, Chi Xu, Chee-Keong Kwoh

We argue that recommendation on global and local graphs outperforms that on a single global graph or multiple local graphs.

Recommendation Systems

COMET: Convolutional Dimension Interaction for Collaborative Filtering

no code implementations 28 Jul 2020 Zhuoyi Lin, Lei Feng, Xingzhi Guo, Yu Zhang, Rui Yin, Chee Keong Kwoh, Chi Xu

In this paper, we propose a novel representation learning-based model called COMET (COnvolutional diMEnsion inTeraction), which simultaneously models the high-order interaction patterns among historical interactions and embedding dimensions.

Collaborative Filtering Representation Learning

Provably Consistent Partial-Label Learning

no code implementations NeurIPS 2020 Lei Feng, Jiaqi Lv, Bo Han, Miao Xu, Gang Niu, Xin Geng, Bo An, Masashi Sugiyama

Partial-label learning (PLL) is a multi-class classification problem, where each training example is associated with a set of candidate labels.

Multi-class Classification Partial Label Learning

Combating noisy labels by agreement: A joint training method with co-regularization

2 code implementations CVPR 2020 Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An

The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels.

Learning with noisy labels Weakly-supervised Learning
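
A sketch of a joint training objective with co-regularization in the spirit of the paper: supervised loss on two networks plus a symmetric KL term that encourages agreement. The weighting `lam` is an assumption, and the small-loss selection step over the combined loss is omitted.

```python
import torch.nn.functional as F

def agreement_loss(logits1, logits2, labels, lam=0.9):
    # Supervised cross-entropy on both networks.
    ce = F.cross_entropy(logits1, labels) + F.cross_entropy(logits2, labels)
    # Symmetric KL co-regularization pulling the two predictions together.
    log_p1 = F.log_softmax(logits1, dim=1)
    log_p2 = F.log_softmax(logits2, dim=1)
    kl = (F.kl_div(log_p1, log_p2.exp(), reduction="batchmean")
          + F.kl_div(log_p2, log_p1.exp(), reduction="batchmean"))
    return (1 - lam) * ce + lam * kl
```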

Progressive Identification of True Labels for Partial-Label Learning

1 code implementation ICML 2020 Jiaqi Lv, Miao Xu, Lei Feng, Gang Niu, Xin Geng, Masashi Sugiyama

Partial-label learning (PLL) is a typical weakly supervised learning problem, where each training instance is equipped with a set of candidate labels among which only one is the true label.

Partial Label Learning Stochastic Optimization +1
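
A minimal rendering of the progressive identification step: renormalize the model's own predictions over each candidate set and use the result as soft targets for the next update. Names and shapes are illustrative.

```python
import torch

def candidate_reweight(probs, candidate_mask):
    """probs: (N, K) current softmax outputs; candidate_mask: (N, K) binary
    matrix of candidate labels. Returns soft targets over the candidates."""
    w = probs * candidate_mask
    return w / w.sum(dim=1, keepdim=True).clamp(min=1e-12)
```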

Learning with Multiple Complementary Labels

no code implementations ICML 2020 Lei Feng, Takuo Kaneko, Bo Han, Gang Niu, Bo An, Masashi Sugiyama

In this paper, we propose a novel problem setting that allows multiple complementary labels (MCLs) for each example, along with two ways of learning with MCLs.

Partial Label Learning with Self-Guided Retraining

no code implementations 8 Feb 2019 Lei Feng, Bo An

We show that optimizing this convex-concave problem is equivalent to solving a set of quadratic programming (QP) problems.

Partial Label Learning

Collaboration based Multi-Label Learning

no code implementations 8 Feb 2019 Lei Feng, Bo An, Shuo He

It is well known that exploiting label correlations is crucial to multi-label learning.

Multi-Label Learning
