Search Results for author: Yuanhong Chen

Found 23 papers, 16 papers with code

Mixture of Gaussian-distributed Prototypes with Generative Modelling for Interpretable Image Classification

no code implementations • 30 Nov 2023 • Chong Wang, Yuanhong Chen, Fengbei Liu, Davis James McCarthy, Helen Frazer, Gustavo Carneiro

Such an approach enables the learning of more powerful prototype representations, since each learned prototype owns a measure of variability that naturally reduces sparsity given the spread of the distribution around each prototype. We also integrate a prototype diversity objective function into the GMM optimisation to reduce redundancy.
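A prototype diversity objective of this kind is often implemented as a penalty on pairwise prototype similarity. A minimal numpy sketch of that general idea (the hinge-on-cosine form, function name, and margin are illustrative assumptions, not the paper's exact loss):

```python
import numpy as np

def prototype_diversity_loss(prototypes, margin=0.5):
    """Penalise pairs of prototype means that are too similar.

    prototypes: (K, D) array of prototype mean vectors.
    Returns the mean hinge penalty over distinct pairs whose
    cosine similarity exceeds `margin`.
    """
    # Normalise rows to unit length so dot products are cosine similarities.
    normed = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = normed @ normed.T                # (K, K) cosine similarities
    K = sim.shape[0]
    mask = ~np.eye(K, dtype=bool)          # ignore self-similarity
    return np.maximum(sim[mask] - margin, 0.0).mean()

# Orthogonal prototypes incur no penalty; duplicated ones do.
ortho = np.eye(3)
dup = np.array([[1.0, 0.0], [1.0, 0.0]])
print(prototype_diversity_loss(ortho))  # 0.0
print(prototype_diversity_loss(dup))    # 0.5
```

Minimising this term pushes prototype means apart, which is the redundancy-reduction effect the abstract describes.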

Decision Making · Image Classification

Learnable Cross-modal Knowledge Distillation for Multi-modal Learning with Missing Modality

no code implementations • 2 Oct 2023 • Hu Wang, Yuanhong Chen, Congbo Ma, Jodie Avery, Louise Hull, Gustavo Carneiro

Then, cross-modal knowledge distillation is performed between teacher and student modalities for each task to push the model parameters to a point that is beneficial for all tasks.
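Cross-modal knowledge distillation between a teacher and a student modality is typically driven by a KL divergence on temperature-softened predictions. A self-contained sketch of that standard objective (the function names and temperature value are assumptions, not the paper's exact formulation):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a 1-D logit vector."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on softened distributions — the standard
    knowledge-distillation objective pulling the student toward the teacher."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

t = [2.0, 0.5, -1.0]
print(distillation_loss(t, t))                     # 0.0 — identical predictions
print(distillation_loss(t, [0.0, 0.0, 0.0]) > 0)   # True — disagreement is penalised
```

Performing this per task, in both modality directions, is what lets the gradient push shared parameters toward a point beneficial for all tasks.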

Knowledge Distillation

Partial Label Supervision for Agnostic Generative Noisy Label Learning

1 code implementation • 2 Aug 2023 • Fengbei Liu, Chong Wang, Yuanhong Chen, Yuyuan Liu, Gustavo Carneiro

Second, we introduce a new Partial Label Supervision (PLS) for noisy label learning that accounts for both clean label coverage and uncertainty.

Image Generation · Learning with noisy labels · +1

Multi-modal Learning with Missing Modality via Shared-Specific Feature Modelling

no code implementations • CVPR 2023 • Hu Wang, Yuanhong Chen, Congbo Ma, Jodie Avery, Louise Hull, Gustavo Carneiro

This is achieved from a strategy that relies on auxiliary tasks based on distribution alignment and domain classification, in addition to a residual feature fusion procedure.

Classification · domain classification · +4

BRAIxDet: Learning to Detect Malignant Breast Lesion with Incomplete Annotations

no code implementations • 31 Jan 2023 • Yuanhong Chen, Yuyuan Liu, Chong Wang, Michael Elliott, Chun Fung Kwok, Carlos Pena-Solorzano, Yu Tian, Fengbei Liu, Helen Frazer, Davis J. McCarthy, Gustavo Carneiro

Given the large size of such datasets, researchers usually face a dilemma with the weakly annotated subset: to not use it or to fully annotate it.

Lesion Detection

Learning Support and Trivial Prototypes for Interpretable Image Classification

1 code implementation • ICCV 2023 • Chong Wang, Yuyuan Liu, Yuanhong Chen, Fengbei Liu, Yu Tian, Davis J. McCarthy, Helen Frazer, Gustavo Carneiro

Prototypical part network (ProtoPNet) methods have been designed to achieve interpretable classification by associating predictions with a set of training prototypes, which we refer to as trivial prototypes because they are trained to lie far from the classification boundary in the feature space.

Explainable Artificial Intelligence (XAI) · Image Classification · +1

Asymmetric Co-teaching with Multi-view Consensus for Noisy Label Learning

no code implementations • 1 Jan 2023 • Fengbei Liu, Yuanhong Chen, Chong Wang, Yu Tian, Gustavo Carneiro

Also, the new sample selection is based on multi-view consensus, which uses the label views from training labels and model predictions to divide the training set into clean and noisy for training the multi-class model and to re-label the training samples with multiple top-ranked labels for training the multi-label model.
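The two ingredients described above — splitting the training set by agreement between the label views, and re-labelling samples with multiple top-ranked labels — can be sketched as follows (a toy illustration with assumed names, not the authors' implementation):

```python
import numpy as np

def consensus_split(train_labels, pred_labels):
    """Split sample indices into clean/noisy by agreement between the
    training-label view and the model-prediction view."""
    train_labels = np.asarray(train_labels)
    pred_labels = np.asarray(pred_labels)
    agree = train_labels == pred_labels
    return np.flatnonzero(agree), np.flatnonzero(~agree)

def relabel_topk(probs, k=2):
    """Re-label each sample with its k top-ranked classes, for use as
    targets by a multi-label branch."""
    return np.argsort(-np.asarray(probs), axis=1)[:, :k]

labels = [0, 1, 2, 1]
preds  = [0, 2, 2, 1]
clean, noisy = consensus_split(labels, preds)
print(clean.tolist())  # [0, 2, 3] — views agree, treated as clean
print(noisy.tolist())  # [1]       — views disagree, treated as noisy
```

The clean subset would then train the multi-class model, while the top-k re-labelled samples train the multi-label model, as the abstract describes.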

Learning with noisy labels · Multi-Label Learning

Knowledge Distillation to Ensemble Global and Interpretable Prototype-Based Mammogram Classification Models

no code implementations • 26 Sep 2022 • Chong Wang, Yuanhong Chen, Yuyuan Liu, Yu Tian, Fengbei Liu, Davis J. McCarthy, Michael Elliott, Helen Frazer, Gustavo Carneiro

On the other hand, prototype-based models improve interpretability by associating predictions with training image prototypes, but they are less accurate than global models and their prototypes tend to have poor diversity.

Knowledge Distillation

Multi-view Local Co-occurrence and Global Consistency Learning Improve Mammogram Classification Generalisation

1 code implementation • 21 Sep 2022 • Yuanhong Chen, Hu Wang, Chong Wang, Yu Tian, Fengbei Liu, Michael Elliott, Davis J. McCarthy, Helen Frazer, Gustavo Carneiro

When analysing screening mammograms, radiologists can naturally process information across two ipsilateral views of each breast, namely the cranio-caudal (CC) and mediolateral-oblique (MLO) views.

Translation Consistent Semi-supervised Segmentation for 3D Medical Images

1 code implementation • 28 Mar 2022 • Yuyuan Liu, Yu Tian, Chong Wang, Yuanhong Chen, Fengbei Liu, Vasileios Belagiannis, Gustavo Carneiro

The most successful SSL approaches are based on consistency learning that minimises the distance between model responses obtained from perturbed views of the unlabelled data.
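A minimal form of such a consistency term is the mean squared distance between the model's responses to two perturbed views of the same unlabelled input (the MSE choice and names here are illustrative, not the paper's exact objective):

```python
import numpy as np

def mse_consistency(pred_a, pred_b):
    """Consistency term: mean squared distance between model responses
    obtained from two perturbed views of the same unlabelled input."""
    pred_a, pred_b = np.asarray(pred_a), np.asarray(pred_b)
    return float(np.mean((pred_a - pred_b) ** 2))

# Class probabilities from two augmented views of one unlabelled volume.
view_a = np.array([0.9, 0.1, 0.0])
view_b = np.array([0.7, 0.2, 0.1])
print(mse_consistency(view_a, view_a))            # 0.0 — identical responses
print(round(mse_consistency(view_a, view_b), 4))  # 0.02
```

Minimising this distance over unlabelled data encourages predictions that are stable under perturbation, which is the core of consistency learning.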

Brain Tumor Segmentation · Image Segmentation · +5

Contrastive Transformer-based Multiple Instance Learning for Weakly Supervised Polyp Frame Detection

1 code implementation • 23 Mar 2022 • Yu Tian, Guansong Pang, Fengbei Liu, Yuyuan Liu, Chong Wang, Yuanhong Chen, Johan W Verjans, Gustavo Carneiro

Current polyp detection methods from colonoscopy videos use exclusively normal (i.e., healthy) training images, which i) ignore the importance of temporal information in consecutive video frames, and ii) lack knowledge about the polyps.

Multiple Instance Learning · Supervised Anomaly Detection · +1

Unsupervised Anomaly Detection in Medical Images with a Memory-augmented Multi-level Cross-attentional Masked Autoencoder

1 code implementation • 22 Mar 2022 • Yu Tian, Guansong Pang, Yuyuan Liu, Chong Wang, Yuanhong Chen, Fengbei Liu, Rajvinder Singh, Johan W Verjans, Mengyu Wang, Gustavo Carneiro

Our UAD approach, the memory-augmented multi-level cross-attentional masked autoencoder (MemMC-MAE), is a transformer-based approach, consisting of a novel memory-augmented self-attention operator for the encoder and a new multi-level cross-attention operator for the decoder.

Image Reconstruction · Unsupervised Anomaly Detection

BoMD: Bag of Multi-label Descriptors for Noisy Chest X-ray Classification

2 code implementations • ICCV 2023 • Yuanhong Chen, Fengbei Liu, Hu Wang, Chong Wang, Yu Tian, Yuyuan Liu, Gustavo Carneiro

Deep learning methods have shown outstanding classification accuracy in medical imaging problems, which is largely attributed to the availability of large-scale datasets manually annotated with clean labels.

Multi-Label Classification

Perturbed and Strict Mean Teachers for Semi-supervised Semantic Segmentation

1 code implementation • CVPR 2022 • Yuyuan Liu, Yu Tian, Yuanhong Chen, Fengbei Liu, Vasileios Belagiannis, Gustavo Carneiro

The accurate prediction by this model allows us to use a challenging combination of network, input data and feature perturbations to improve the consistency learning generalisation, where the feature perturbations consist of a new adversarial perturbation.

Semi-Supervised Semantic Segmentation

ACPL: Anti-curriculum Pseudo-labelling for Semi-supervised Medical Image Classification

1 code implementation • CVPR 2022 • Fengbei Liu, Yu Tian, Yuanhong Chen, Yuyuan Liu, Vasileios Belagiannis, Gustavo Carneiro

Effective semi-supervised learning (SSL) in medical image analysis (MIA) must address two challenges: 1) work effectively on both multi-class (e.g., lesion classification) and multi-label (e.g., multiple-disease diagnosis) problems, and 2) handle imbalanced learning (because of the high variance in disease prevalence).

Image Classification · Multi-Label Classification · +1

Pixel-wise Energy-biased Abstention Learning for Anomaly Segmentation on Complex Urban Driving Scenes

3 code implementations • 24 Nov 2021 • Yu Tian, Yuyuan Liu, Guansong Pang, Fengbei Liu, Yuanhong Chen, Gustavo Carneiro

However, previous uncertainty approaches that directly associate high uncertainty to anomaly may sometimes lead to incorrect anomaly predictions, and external reconstruction models tend to be too inefficient for real-time self-driving embedded systems.

Ranked #2 on Anomaly Detection on Lost and Found (using extra training data)

Anomaly Detection · Segmentation · +1

Self-supervised Pseudo Multi-class Pre-training for Unsupervised Anomaly Detection and Segmentation in Medical Images

2 code implementations • 3 Sep 2021 • Yu Tian, Fengbei Liu, Guansong Pang, Yuanhong Chen, Yuyuan Liu, Johan W. Verjans, Rajvinder Singh, Gustavo Carneiro

Pre-training UAD methods with self-supervised learning, based on computer vision techniques, can mitigate this challenge, but they are sub-optimal because they do not explore domain knowledge for designing the pretext tasks, and their contrastive learning losses do not try to cluster the normal training images, which may result in a sparse distribution of normal images that is ineffective for anomaly detection.

Contrastive Learning · Data Augmentation · +2

NVUM: Non-Volatile Unbiased Memory for Robust Medical Image Classification

1 code implementation • 6 Mar 2021 • Fengbei Liu, Yuanhong Chen, Yu Tian, Yuyuan Liu, Chong Wang, Vasileios Belagiannis, Gustavo Carneiro

In this paper, we propose a new training module called Non-Volatile Unbiased Memory (NVUM), which non-volatilely stores a running average of model logits to drive a new regularization loss for the noisy multi-label problem.
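The core idea of a non-volatile memory of logits can be sketched as a per-sample exponential moving average kept across training (the momentum value, class/sample counts, and names below are assumptions, not the paper's settings):

```python
import numpy as np

class NonVolatileMemory:
    """Per-sample running average of model logits, kept across epochs.

    A sketch of the memory idea: the stored average changes slowly, so a
    regularisation loss built on it is less sensitive to noisy labels.
    """
    def __init__(self, num_samples, num_classes, momentum=0.9):
        self.momentum = momentum
        self.memory = np.zeros((num_samples, num_classes))

    def update(self, indices, logits):
        """EMA update of the stored logits for the given sample indices."""
        m = self.momentum
        self.memory[indices] = m * self.memory[indices] + (1 - m) * logits
        return self.memory[indices]

mem = NonVolatileMemory(num_samples=4, num_classes=2, momentum=0.5)
mem.update([0], np.array([[2.0, 0.0]]))
avg = mem.update([0], np.array([[2.0, 0.0]]))
print(avg)  # [[1.5 0. ]] — the running average drifts toward the logits
```

A regularisation loss would then compare current predictions against this slowly-moving memory rather than against the (possibly noisy) labels alone.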

Image Classification with Label Noise · Learning with noisy labels · +1

Constrained Contrastive Distribution Learning for Unsupervised Anomaly Detection and Localisation in Medical Images

1 code implementation • 5 Mar 2021 • Yu Tian, Guansong Pang, Fengbei Liu, Yuanhong Chen, Seon Ho Shin, Johan W. Verjans, Rajvinder Singh, Gustavo Carneiro

Unsupervised anomaly detection (UAD) learns one-class classifiers exclusively with normal (i.e., healthy) images to detect any abnormal (i.e., unhealthy) samples that do not conform to the expected normal patterns.

Contrastive Learning · Representation Learning · +1

Weakly-supervised Video Anomaly Detection with Robust Temporal Feature Magnitude Learning

3 code implementations • ICCV 2021 • Yu Tian, Guansong Pang, Yuanhong Chen, Rajvinder Singh, Johan W. Verjans, Gustavo Carneiro

To address this issue, we introduce a novel and theoretically sound method, named Robust Temporal Feature Magnitude learning (RTFM), which trains a feature magnitude learning function to effectively recognise the positive instances, substantially improving the robustness of the MIL approach to the negative instances from abnormal videos.
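The feature-magnitude intuition — score a video by the mean L2 norm of its k largest-magnitude snippet features, so a few strong positive instances dominate — can be sketched as follows (a toy illustration with assumed names; the real method learns the features end-to-end):

```python
import numpy as np

def topk_magnitude_score(snippet_features, k=3):
    """Score a video by the mean L2 magnitude of its k largest-magnitude
    snippet features, making the score robust to the many negative
    (normal) snippets inside an abnormal video."""
    mags = np.linalg.norm(np.asarray(snippet_features), axis=1)
    topk = np.sort(mags)[-k:]  # k largest snippet magnitudes
    return float(topk.mean())

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(16, 8))  # 16 low-magnitude snippets
abnormal = normal.copy()
abnormal[5] *= 50.0                          # one high-magnitude snippet
print(topk_magnitude_score(abnormal) > topk_magnitude_score(normal))  # True
```

Because only the top-k magnitudes enter the score, a single positive snippet is enough to separate an abnormal video from a normal one, which is the robustness property the abstract highlights.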

Anomaly Detection In Surveillance Videos · Contrastive Learning · +2

Deep One-Class Classification via Interpolated Gaussian Descriptor

2 code implementations • 25 Jan 2021 • Yuanhong Chen, Yu Tian, Guansong Pang, Gustavo Carneiro

The adversarial interpolation is enforced to consistently learn a smooth Gaussian descriptor, even when the training data is small or contaminated with anomalous samples.

Ranked #2 on Anomaly Detection on MNIST (using extra training data)

Classification · One-Class Classification · +1
