no code implementations • 30 Nov 2023 • Chong Wang, Yuanhong Chen, Fengbei Liu, Davis James McCarthy, Helen Frazer, Gustavo Carneiro
Such an approach enables the learning of more powerful prototype representations, since each learned prototype carries its own measure of variability, which naturally reduces sparsity given the spread of the distribution around each prototype. We also integrate a prototype diversity objective into the GMM optimisation to reduce redundancy.
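The idea of Gaussian prototypes with a diversity term can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes isotropic Gaussians, a max-over-prototypes likelihood, and an exponential pairwise penalty, all of which are simplifying choices made here.

```python
import math

def gaussian_log_prob(x, mean, var):
    """Log-density of a feature vector under an isotropic Gaussian prototype."""
    d = len(x)
    sq = sum((xi - mi) ** 2 for xi, mi in zip(x, mean))
    return -0.5 * (d * math.log(2 * math.pi * var) + sq / var)

def diversity_penalty(means):
    """Penalise redundant prototypes: nearby prototype means cost more."""
    pen = 0.0
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(means[i], means[j])))
            pen += math.exp(-dist)  # close pairs contribute more to the penalty
    return pen

def prototype_objective(x, means, variances, lam=0.1):
    """Likelihood under the best-matching Gaussian prototype, minus a diversity term."""
    best = max(gaussian_log_prob(x, m, v) for m, v in zip(means, variances))
    return best - lam * diversity_penalty(means)
```

Because each prototype owns a variance, a sample far from a mean can still receive non-negligible likelihood, which is the "reduced sparsity" effect the abstract refers to.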
1 code implementation • 2 Aug 2023 • Fengbei Liu, Chong Wang, Yuanhong Chen, Yuyuan Liu, Gustavo Carneiro
Second, we introduce a new Partial Label Supervision (PLS) for noisy label learning that accounts for both clean label coverage and uncertainty.
Ranked #5 on Learning with noisy labels on CIFAR-10N-Random3
1 code implementation • 6 Apr 2023 • Yuanhong Chen, Yuyuan Liu, Hu Wang, Fengbei Liu, Chong Wang, Helen Frazer, Gustavo Carneiro
We show empirical results that demonstrate the effectiveness of our benchmark.
no code implementations • 31 Jan 2023 • Yuanhong Chen, Yuyuan Liu, Chong Wang, Michael Elliott, Chun Fung Kwok, Carlos Pena-Solorzano, Yu Tian, Fengbei Liu, Helen Frazer, Davis J. McCarthy, Gustavo Carneiro
Given the large size of such datasets, researchers usually face a dilemma with the weakly annotated subset: to not use it or to fully annotate it.
1 code implementation • ICCV 2023 • Chong Wang, Yuyuan Liu, Yuanhong Chen, Fengbei Liu, Yu Tian, Davis J. McCarthy, Helen Frazer, Gustavo Carneiro
Prototypical part network (ProtoPNet) methods have been designed to achieve interpretable classification by associating predictions with a set of training prototypes, which we refer to as trivial prototypes because they are trained to lie far from the classification boundary in the feature space.
Explainable Artificial Intelligence (XAI) Image Classification +1
no code implementations • 1 Jan 2023 • Fengbei Liu, Yuanhong Chen, Chong Wang, Yu Tian, Gustavo Carneiro
Also, the new sample selection is based on multi-view consensus, which uses the label views from training labels and model predictions to divide the training set into clean and noisy subsets for training the multi-class model, and to re-label the training samples with multiple top-ranked labels for training the multi-label model.
no code implementations • 26 Sep 2022 • Chong Wang, Yuanhong Chen, Yuyuan Liu, Yu Tian, Fengbei Liu, Davis J. McCarthy, Michael Elliott, Helen Frazer, Gustavo Carneiro
On the other hand, prototype-based models improve interpretability by associating predictions with training image prototypes, but they are less accurate than global models and their prototypes tend to have poor diversity.
1 code implementation • 21 Sep 2022 • Yuanhong Chen, Hu Wang, Chong Wang, Yu Tian, Fengbei Liu, Michael Elliott, Davis J. McCarthy, Helen Frazer, Gustavo Carneiro
When analysing screening mammograms, radiologists can naturally process information across two ipsilateral views of each breast, namely the cranio-caudal (CC) and mediolateral-oblique (MLO) views.
1 code implementation • 28 Mar 2022 • Yuyuan Liu, Yu Tian, Chong Wang, Yuanhong Chen, Fengbei Liu, Vasileios Belagiannis, Gustavo Carneiro
The most successful SSL approaches are based on consistency learning that minimises the distance between model responses obtained from perturbed views of the unlabelled data.
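A toy sketch of consistency learning: the same unlabelled input is perturbed twice, and the loss is the distance between the model's responses on the two views. The linear model and uniform-noise perturbation here are stand-ins chosen for illustration, not the perturbations used in the paper.

```python
import random

def perturb(x, noise=0.1):
    """Stand-in input perturbation: add small uniform noise to each feature."""
    return [xi + random.uniform(-noise, noise) for xi in x]

def model(x, w):
    """Toy linear model standing in for the real network."""
    return [wi * xi for wi, xi in zip(w, x)]

def consistency_loss(x, w):
    """Mean squared distance between responses on two perturbed views of x."""
    p1 = model(perturb(x), w)
    p2 = model(perturb(x), w)
    return sum((a - b) ** 2 for a, b in zip(p1, p2)) / len(p1)
```

Minimising this loss over unlabelled data pushes the model to respond consistently under perturbation, which is the core of the SSL approaches the abstract describes.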
1 code implementation • 23 Mar 2022 • Yu Tian, Guansong Pang, Fengbei Liu, Yuyuan Liu, Chong Wang, Yuanhong Chen, Johan W Verjans, Gustavo Carneiro
Current polyp detection methods from colonoscopy videos use exclusively normal (i.e., healthy) training images, which i) ignore the importance of temporal information in consecutive video frames, and ii) lack knowledge about the polyps.
1 code implementation • 22 Mar 2022 • Yu Tian, Guansong Pang, Yuyuan Liu, Chong Wang, Yuanhong Chen, Fengbei Liu, Rajvinder Singh, Johan W Verjans, Mengyu Wang, Gustavo Carneiro
Our UAD approach, the memory-augmented multi-level cross-attentional masked autoencoder (MemMC-MAE), is a transformer-based approach, consisting of a novel memory-augmented self-attention operator for the encoder and a new multi-level cross-attention operator for the decoder.
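The memory-augmentation idea can be illustrated with a single-query attention step in which the key/value set is extended with learnable memory slots, so the output can draw on stored patterns as well as the input tokens. This is a rough sketch of the mechanism, not the MemMC-MAE operator itself, and all names here are illustrative.

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def memory_augmented_attention(query, keys, values, mem_keys, mem_values):
    """Single-query scaled dot-product attention whose keys/values are
    concatenated with memory slots before the softmax."""
    all_k = keys + mem_keys
    all_v = values + mem_values
    scale = math.sqrt(len(query))
    weights = softmax([sum(q * k for q, k in zip(query, kv)) / scale
                       for kv in all_k])
    dim = len(all_v[0])
    return [sum(w * v[d] for w, v in zip(weights, all_v)) for d in range(dim)]
```

In a UAD setting the memory slots are trained on normal data only, so at test time the attention output is biased toward normal patterns, inflating the reconstruction error of abnormal inputs.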
2 code implementations • ICCV 2023 • Yuanhong Chen, Fengbei Liu, Hu Wang, Chong Wang, Yu Tian, Yuyuan Liu, Gustavo Carneiro
Deep learning methods have shown outstanding classification accuracy in medical imaging problems, which is largely attributed to the availability of large-scale datasets manually annotated with clean labels.
1 code implementation • CVPR 2022 • Yuyuan Liu, Yu Tian, Yuanhong Chen, Fengbei Liu, Vasileios Belagiannis, Gustavo Carneiro
The accurate predictions of this model allow us to use a challenging combination of network, input data and feature perturbations to improve the generalisation of consistency learning, where the feature perturbations include a new adversarial perturbation.
1 code implementation • CVPR 2022 • Fengbei Liu, Yu Tian, Yuanhong Chen, Yuyuan Liu, Vasileios Belagiannis, Gustavo Carneiro
Effective semi-supervised learning (SSL) in medical image analysis (MIA) must address two challenges: 1) work effectively on both multi-class (e.g., lesion classification) and multi-label (e.g., multiple-disease diagnosis) problems, and 2) handle imbalanced learning (because of the high variance in disease prevalence).
3 code implementations • 24 Nov 2021 • Yu Tian, Yuyuan Liu, Guansong Pang, Fengbei Liu, Yuanhong Chen, Gustavo Carneiro
However, previous uncertainty approaches that directly associate high uncertainty to anomaly may sometimes lead to incorrect anomaly predictions, and external reconstruction models tend to be too inefficient for real-time self-driving embedded systems.
Ranked #2 on Anomaly Detection on Lost and Found (using extra training data)
2 code implementations • 3 Sep 2021 • Yu Tian, Fengbei Liu, Guansong Pang, Yuanhong Chen, Yuyuan Liu, Johan W. Verjans, Rajvinder Singh, Gustavo Carneiro
Pre-training UAD methods with self-supervised learning, based on computer vision techniques, can mitigate this challenge, but they are sub-optimal because they do not explore domain knowledge when designing the pretext tasks, and because their contrastive learning losses do not try to cluster the normal training images, which may result in a sparse distribution of normal images that is ineffective for anomaly detection.
no code implementations • 10 Jun 2021 • Yaqub Jonmohamadi, Shahnewaz Ali, Fengbei Liu, Jonathan Roberts, Ross Crawford, Gustavo Carneiro, Ajay K. Pandey
In this paper, we propose the first 3D semantic mapping system from knee arthroscopy that solves the three challenges above.
1 code implementation • 6 Mar 2021 • Fengbei Liu, Yuanhong Chen, Yu Tian, Yuyuan Liu, Chong Wang, Vasileios Belagiannis, Gustavo Carneiro
In this paper, we propose a new training module called Non-Volatile Unbiased Memory (NVUM), which non-volatilely stores a running average of model logits to form a new regularisation loss for noisy multi-label problems.
Image Classification with Label Noise Learning with noisy labels +1
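The running-average logit memory can be sketched as a per-sample exponential moving average. This is an illustrative skeleton of the storage mechanism only; the momentum value and class/sample counts are placeholders, and the NVUM regularisation loss built on top of the stored averages is not shown.

```python
class LogitMemory:
    """Per-sample exponential moving average of model logits; the stored
    averages can then feed a regularisation loss for noisy-label training."""

    def __init__(self, num_samples, num_classes, momentum=0.9):
        self.momentum = momentum
        self.bank = [[0.0] * num_classes for _ in range(num_samples)]

    def update(self, index, logits):
        """Blend the new logits into the stored average for one sample."""
        old = self.bank[index]
        self.bank[index] = [self.momentum * o + (1 - self.momentum) * l
                            for o, l in zip(old, logits)]
        return self.bank[index]
```

Because the average changes slowly, a single noisy update cannot drag a sample's stored logits far from their history, which is the "non-volatile" property named in the abstract.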
1 code implementation • 5 Mar 2021 • Fengbei Liu, Yu Tian, Filipe R. Cordeiro, Vasileios Belagiannis, Ian Reid, Gustavo Carneiro
In this paper, we propose Self-supervised Mean Teacher for Semi-supervised (S$^2$MTS$^2$) learning that combines self-supervised mean-teacher pre-training with semi-supervised fine-tuning.
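The mean-teacher component reduces to one line: after each student step, the teacher's weights become an exponential moving average of the student's. A minimal sketch, with the decay value chosen here for illustration:

```python
def ema_update(teacher_weights, student_weights, decay=0.99):
    """Mean-teacher update: teacher weights track an exponential moving
    average of the student weights after each training step."""
    return [decay * t + (1 - decay) * s
            for t, s in zip(teacher_weights, student_weights)]
```

The slowly-moving teacher provides stable targets for the student during semi-supervised fine-tuning, which is the role it plays in S$^2$MTS$^2$.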
1 code implementation • 5 Mar 2021 • Yu Tian, Guansong Pang, Fengbei Liu, Yuanhong Chen, Seon Ho Shin, Johan W. Verjans, Rajvinder Singh, Gustavo Carneiro
Unsupervised anomaly detection (UAD) learns one-class classifiers exclusively with normal (i.e., healthy) images to detect any abnormal (i.e., unhealthy) samples that do not conform to the expected normal patterns.
Ranked #1 on Anomaly Detection on LAG
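The one-class scheme can be illustrated with a generic reconstruction-based scorer: a model trained only on normal images reconstructs a test sample, and a large reconstruction error flags an anomaly. The reconstruction function and threshold below are placeholders for illustration, not the paper's model.

```python
def anomaly_score(x, reconstruct):
    """Mean squared reconstruction error; a model trained on normal data
    only should reconstruct abnormal inputs poorly."""
    r = reconstruct(x)
    return sum((a - b) ** 2 for a, b in zip(x, r)) / len(x)

def classify(x, reconstruct, threshold):
    """One-class decision: abnormal iff the score exceeds the threshold."""
    return "abnormal" if anomaly_score(x, reconstruct) > threshold else "normal"
```

The threshold is typically chosen on a validation set of normal images, since no abnormal samples are available during training.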
no code implementations • 5 Jul 2020 • Fengbei Liu, Yaqub Jonmohamadi, Gabriel Maicas, Ajay K. Pandey, Gustavo Carneiro
In this paper, we propose a novel self-supervised monocular depth estimation method to regularise the training of semantic segmentation in knee arthroscopy.