Search Results for author: Yuyuan Liu

Found 17 papers, 13 papers with code

Partial Label Supervision for Agnostic Generative Noisy Label Learning

1 code implementation · 2 Aug 2023 · Fengbei Liu, Chong Wang, Yuanhong Chen, Yuyuan Liu, Gustavo Carneiro

Second, we introduce a new Partial Label Supervision (PLS) for noisy label learning that accounts for both clean label coverage and uncertainty.

Image Generation · Learning with noisy labels +1

BRAIxDet: Learning to Detect Malignant Breast Lesion with Incomplete Annotations

no code implementations · 31 Jan 2023 · Yuanhong Chen, Yuyuan Liu, Chong Wang, Michael Elliott, Chun Fung Kwok, Carlos Pena-Solorzano, Yu Tian, Fengbei Liu, Helen Frazer, Davis J. McCarthy, Gustavo Carneiro

Given the large size of such datasets, researchers usually face a dilemma with the weakly annotated subset: to not use it or to fully annotate it.

Lesion Detection

Learning Support and Trivial Prototypes for Interpretable Image Classification

1 code implementation · ICCV 2023 · Chong Wang, Yuyuan Liu, Yuanhong Chen, Fengbei Liu, Yu Tian, Davis J. McCarthy, Helen Frazer, Gustavo Carneiro

Prototypical part network (ProtoPNet) methods have been designed to achieve interpretable classification by associating predictions with a set of training prototypes, which we refer to as trivial prototypes because they are trained to lie far from the classification boundary in the feature space.

Explainable Artificial Intelligence (XAI) · Image Classification +1

Knowledge Distillation to Ensemble Global and Interpretable Prototype-Based Mammogram Classification Models

no code implementations · 26 Sep 2022 · Chong Wang, Yuanhong Chen, Yuyuan Liu, Yu Tian, Fengbei Liu, Davis J. McCarthy, Michael Elliott, Helen Frazer, Gustavo Carneiro

On the other hand, prototype-based models improve interpretability by associating predictions with training image prototypes, but they are less accurate than global models and their prototypes tend to have poor diversity.

Knowledge Distillation

Translation Consistent Semi-supervised Segmentation for 3D Medical Images

1 code implementation · 28 Mar 2022 · Yuyuan Liu, Yu Tian, Chong Wang, Yuanhong Chen, Fengbei Liu, Vasileios Belagiannis, Gustavo Carneiro

The most successful SSL approaches are based on consistency learning that minimises the distance between model responses obtained from perturbed views of the unlabelled data.

Brain Tumor Segmentation · Image Segmentation +5
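The consistency-learning idea described above, minimising the distance between model responses on perturbed views of unlabelled data, can be sketched as follows. This is a generic illustration, not the paper's actual loss; the function names and the choice of a mean-squared-error distance are assumptions for the example.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_a, logits_b):
    """Mean squared error between class distributions predicted
    for two perturbed views of the same unlabelled input."""
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    return float(np.mean((p_a - p_b) ** 2))

# Identical views incur zero loss; diverging predictions are penalised.
z = np.array([[2.0, 0.5, -1.0]])
assert consistency_loss(z, z) == 0.0
assert consistency_loss(z, z + np.array([[0.0, 1.0, 0.0]])) > 0.0
```

In practice the two logit tensors would come from differently augmented (e.g. translated) crops of the same volume, and the loss would be added to a supervised term on the labelled subset.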

Contrastive Transformer-based Multiple Instance Learning for Weakly Supervised Polyp Frame Detection

1 code implementation · 23 Mar 2022 · Yu Tian, Guansong Pang, Fengbei Liu, Yuyuan Liu, Chong Wang, Yuanhong Chen, Johan W. Verjans, Gustavo Carneiro

Current polyp detection methods from colonoscopy videos use exclusively normal (i.e., healthy) training images, which i) ignore the importance of temporal information in consecutive video frames, and ii) lack knowledge about the polyps.

Multiple Instance Learning · Supervised Anomaly Detection +1

Unsupervised Anomaly Detection in Medical Images with a Memory-augmented Multi-level Cross-attentional Masked Autoencoder

1 code implementation · 22 Mar 2022 · Yu Tian, Guansong Pang, Yuyuan Liu, Chong Wang, Yuanhong Chen, Fengbei Liu, Rajvinder Singh, Johan W. Verjans, Mengyu Wang, Gustavo Carneiro

Our UAD approach, the memory-augmented multi-level cross-attentional masked autoencoder (MemMC-MAE), is a transformer-based model consisting of a novel memory-augmented self-attention operator for the encoder and a new multi-level cross-attention operator for the decoder.

Image Reconstruction · Unsupervised Anomaly Detection

BoMD: Bag of Multi-label Descriptors for Noisy Chest X-ray Classification

2 code implementations · ICCV 2023 · Yuanhong Chen, Fengbei Liu, Hu Wang, Chong Wang, Yu Tian, Yuyuan Liu, Gustavo Carneiro

Deep learning methods have shown outstanding classification accuracy in medical imaging problems, which is largely attributed to the availability of large-scale datasets manually annotated with clean labels.

Multi-Label Classification

ACPL: Anti-curriculum Pseudo-labelling for Semi-supervised Medical Image Classification

1 code implementation · CVPR 2022 · Fengbei Liu, Yu Tian, Yuanhong Chen, Yuyuan Liu, Vasileios Belagiannis, Gustavo Carneiro

Effective semi-supervised learning (SSL) in medical image analysis (MIA) must address two challenges: 1) work effectively on both multi-class (e.g., lesion classification) and multi-label (e.g., multiple-disease diagnosis) problems, and 2) handle imbalanced learning (because of the high variance in disease prevalence).

Image Classification · Multi-Label Classification +1

Perturbed and Strict Mean Teachers for Semi-supervised Semantic Segmentation

1 code implementation · CVPR 2022 · Yuyuan Liu, Yu Tian, Yuanhong Chen, Fengbei Liu, Vasileios Belagiannis, Gustavo Carneiro

The accurate prediction by this model allows us to use a challenging combination of network, input data and feature perturbations to improve the consistency learning generalisation, where the feature perturbations consist of a new adversarial perturbation.

Semi-Supervised Semantic Segmentation
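Mean-teacher methods like the one above pair a student network with a teacher whose weights are an exponential moving average (EMA) of the student's. A minimal sketch of that update rule, with illustrative names and a numpy stand-in for network parameters (the paper's actual perturbation scheme is not shown):

```python
import numpy as np

def ema_update(teacher_params, student_params, alpha=0.99):
    """Teacher weights track the student via an exponential moving average:
    teacher <- alpha * teacher + (1 - alpha) * student."""
    return [alpha * t + (1 - alpha) * s
            for t, s in zip(teacher_params, student_params)]

# Toy example: the teacher smoothly drifts toward a fixed student.
teacher = [np.zeros(3)]
student = [np.ones(3)]
for _ in range(100):
    teacher = ema_update(teacher, student)
assert 0.0 < teacher[0][0] < 1.0  # lags behind, never overshoots
```

The slowly moving teacher produces more stable pseudo-labels than the raw student, which is what makes strict (confidence-weighted) supervision of the perturbed student feasible.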

Pixel-wise Energy-biased Abstention Learning for Anomaly Segmentation on Complex Urban Driving Scenes

3 code implementations · 24 Nov 2021 · Yu Tian, Yuyuan Liu, Guansong Pang, Fengbei Liu, Yuanhong Chen, Gustavo Carneiro

However, previous uncertainty approaches that directly associate high uncertainty with anomalies can lead to incorrect anomaly predictions, and external reconstruction models tend to be too inefficient for real-time self-driving embedded systems.

Ranked #2 on Anomaly Detection on Lost and Found (using extra training data)

Anomaly Detection · Segmentation +1

Self-supervised Pseudo Multi-class Pre-training for Unsupervised Anomaly Detection and Segmentation in Medical Images

2 code implementations · 3 Sep 2021 · Yu Tian, Fengbei Liu, Guansong Pang, Yuanhong Chen, Yuyuan Liu, Johan W. Verjans, Rajvinder Singh, Gustavo Carneiro

Pre-training UAD methods with self-supervised learning, based on computer vision techniques, can mitigate this challenge, but they are sub-optimal because they do not explore domain knowledge for designing the pretext tasks, and their contrastive learning losses do not try to cluster the normal training images, which may result in a sparse distribution of normal images that is ineffective for anomaly detection.

Contrastive Learning · Data Augmentation +2

NVUM: Non-Volatile Unbiased Memory for Robust Medical Image Classification

1 code implementation · 6 Mar 2021 · Fengbei Liu, Yuanhong Chen, Yu Tian, Yuyuan Liu, Chong Wang, Vasileios Belagiannis, Gustavo Carneiro

In this paper, we propose a new training module called Non-Volatile Unbiased Memory (NVUM), which non-volatilely stores a running average of the model's logits to form a new regularisation loss for noisy multi-label problems.

Image Classification with Label Noise · Learning with noisy labels +1
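The core mechanism described in the NVUM snippet, a per-sample running average of logits used in a regularisation term, can be sketched as below. This is an illustrative simplification, not the paper's exact formulation; the class name, the momentum value, and the squared-error penalty are all assumptions.

```python
import numpy as np

class LogitMemory:
    """Keeps a per-sample exponential running average of logits across
    epochs, so early (less noise-fitted) predictions persist in memory."""
    def __init__(self, num_samples, num_classes, beta=0.9):
        self.memory = np.zeros((num_samples, num_classes))
        self.beta = beta

    def update(self, indices, logits):
        # Smoothly blend the new logits into the stored average.
        self.memory[indices] = (self.beta * self.memory[indices]
                                + (1 - self.beta) * logits)

    def regulariser(self, indices, logits):
        # Penalise disagreement between current logits and the smoothed history.
        return float(np.mean((logits - self.memory[indices]) ** 2))

mem = LogitMemory(num_samples=4, num_classes=2)
mem.update([0], np.array([[1.0, -1.0]]))
# The memory still lags the fresh logits, so the penalty is positive.
assert mem.regulariser([0], np.array([[1.0, -1.0]])) > 0.0
```

The slow-moving memory acts as an "unbiased" anchor: once the model starts memorising noisy labels, its fresh logits diverge from the stored average and the regulariser pushes back.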
