Search Results for author: David A. Clifton

Found 52 papers, 25 papers with code

Exploring the latent space of diffusion models directly through singular value decomposition

no code implementations4 Feb 2025 Li Wang, Boyan Gao, Yanran Li, Zhao Wang, Xiaosong Yang, David A. Clifton, Jun Xiao

Despite the groundbreaking success of diffusion models in generating high-fidelity images, their latent space remains relatively under-explored, even though it holds significant promise for enabling versatile and interpretable image editing capabilities.

Denoising
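The abstract above probes a diffusion model's latent space through singular value decomposition. A minimal sketch of the underlying linear-algebra step, assuming a latent tensor flattened to a 2-D matrix; the shapes and the singular-value edit are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical diffusion latent: 4 channels over an 8x8 spatial grid,
# flattened to a (4, 64) matrix so SVD applies directly.
latent = rng.standard_normal((4, 64))

# Thin SVD: latent = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(latent, full_matrices=False)

# A simple "edit" in the decomposed space: amplify the leading
# singular direction, then reconstruct the latent.
s_edit = s.copy()
s_edit[0] *= 1.5
edited = U @ np.diag(s_edit) @ Vt

# Reconstruction with the unmodified singular values recovers the input.
assert np.allclose(U @ np.diag(s) @ Vt, latent)
print(edited.shape)  # (4, 64)
```

Editing singular values rather than raw latent entries keeps the modification aligned with the dominant directions of variation, which is what makes the decomposition interpretable.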

Enhancing Generalization via Sharpness-Aware Trajectory Matching for Dataset Condensation

no code implementations3 Feb 2025 Boyan Gao, Bo Zhao, Shreyank N Gowda, Xingrun Xing, Yibo Yang, Timothy Hospedales, David A. Clifton

These issues worsen when the datasets are learned by matching the trajectories of networks trained on the real and synthetic datasets over a long-horizon inner loop.

Bilevel Optimization Dataset Condensation

Continuous Knowledge-Preserving Decomposition for Few-Shot Continual Learning

2 code implementations9 Jan 2025 Xiaojie Li, Yibo Yang, Jianlong Wu, David A. Clifton, Yue Yu, Bernard Ghanem, Min Zhang

To this end, we propose Continuous Knowledge-Preserving Decomposition for FSCIL (CKPD-FSCIL), a framework that decomposes a model's weights into two parts: one that compacts existing knowledge (knowledge-sensitive components) and another that carries redundant capacity to accommodate new abilities (redundant-capacity components).

class-incremental learning · Few-Shot Class-Incremental Learning +1
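The decomposition described above can be sketched with a rank split of a weight matrix via SVD: a dominant part that compacts existing knowledge and a residual part left free for new abilities. The rank threshold and names are illustrative assumptions, not the CKPD-FSCIL algorithm:

```python
import numpy as np

def decompose_weights(W: np.ndarray, rank: int):
    """Split W into a dominant (knowledge-sensitive) part spanning the
    top singular directions and a residual (redundant-capacity) part."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    knowledge = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    residual = W - knowledge  # low-energy capacity for new sessions
    return knowledge, residual

rng = np.random.default_rng(1)
W = rng.standard_normal((16, 16))
knowledge, residual = decompose_weights(W, rank=4)

# The two parts exactly recompose the original weights, so freezing one
# and adapting the other preserves existing behaviour by construction.
assert np.allclose(knowledge + residual, W)
```

Because the split is exact, training only the residual part cannot disturb the frozen knowledge component, which is the intuition behind knowledge-preserving decomposition.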

Enhancing Online Continual Learning with Plug-and-Play State Space Model and Class-Conditional Mixture of Discretization

no code implementations24 Dec 2024 Sihao Liu, Yibo Yang, Xiaojie Li, David A. Clifton, Bernard Ghanem

However, they often overlook the adaptability of the model, limiting the ability to learn generalizable and discriminative features incrementally from online training data.

Continual Learning

Efficient Task Grouping Through Samplewise Optimisation Landscape Analysis

no code implementations5 Dec 2024 Anshul Thakur, Yichen Huang, Soheila Molaei, Yujiang Wang, David A. Clifton

Shared training approaches, such as multi-task learning (MTL) and gradient-based meta-learning, are widely used in various machine learning applications, but they often suffer from negative transfer, leading to performance degradation in specific tasks.

Meta-Learning · Multi-Task Learning

FE-Adapter: Adapting Image-based Emotion Classifiers to Videos

no code implementations5 Aug 2024 Shreyank N Gowda, Boyan Gao, David A. Clifton

This breakthrough highlights the potential for cross-modality approaches in enhancing the capabilities of AI models, particularly in fields like video emotion analysis where the demand for efficiency and accuracy is constantly rising.

Dynamic Facial Expression Recognition · Transfer Learning +2

CC-SAM: SAM with Cross-feature Attention and Context for Ultrasound Image Segmentation

no code implementations31 Jul 2024 Shreyank N Gowda, David A. Clifton

The Segment Anything Model (SAM) has achieved remarkable successes in the realm of natural image segmentation, but its deployment in the medical imaging sphere has encountered challenges.

Image Segmentation · Natural Language Understanding +2

Masks and Manuscripts: Advancing Medical Pre-training with End-to-End Masking and Narrative Structuring

no code implementations23 Jul 2024 Shreyank N Gowda, David A. Clifton

Contemporary medical contrastive learning faces challenges from inconsistent semantics and sample pair morphology, leading to dispersed and converging semantic shifts.

Contrastive Learning · Medical Image Analysis +3

Rapid Biomedical Research Classification: The Pandemic PACT Advanced Categorisation Engine

no code implementations14 Jul 2024 Omid Rohanian, Mohammadmahdi Nouriborji, Olena Seminog, Rodrigo Furst, Thomas Mendy, Shanthi Levanita, Zaharat Kadri-Alabi, Nusrat Jabin, Daniela Toale, Georgina Humphreys, Emilia Antonio, Adrian Bucher, Alice Norton, David A. Clifton

The release of PPACE and its associated dataset offers valuable resources for researchers in multilabel biomedical document classification and supports advancements in aligning biomedical research with key global health priorities.

Decision Making · Document Classification +2

SpikeLLM: Scaling up Spiking Neural Network to Large Language Models via Saliency-based Spiking

1 code implementation5 Jul 2024 Xingrun Xing, Boyan Gao, Zheng Zhang, David A. Clifton, Shitao Xiao, Li Du, Guoqi Li, Jiajun Zhang

In contrast, human brains, which contain approximately 86 billion biological neurons, exhibit significantly greater energy efficiency compared to LLMs with a similar number of parameters.

Language Modelling · Large Language Model +1

Sample Selection Bias in Machine Learning for Healthcare

1 code implementation13 May 2024 Vinod Kumar Chauhan, Lei Clifton, Achille Salaün, Huiqi Yvonne Lu, Kim Branson, Patrick Schwab, Gaurav Nigam, David A. Clifton

Specifically, we propose two independent networks (T-Net) and a multitasking network (MT-Net) for addressing SSB, where one network/task identifies the target subpopulation that is representative of the study population and the second makes predictions for the identified subpopulation.

Selection bias
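The two-step idea above, identify a representative subpopulation, then predict only for it, can be sketched with a logistic selector followed by a linear predictor. The weights and architecture here are hypothetical stand-ins, not the authors' T-Net/MT-Net:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3))  # toy covariates

# Network/task 1 (hypothetical weights): score how representative each
# sample is of the study population.
w_select = np.array([0.8, -0.2, 0.1])
membership = sigmoid(X @ w_select)  # in (0, 1)
selected = membership > 0.5         # identified target subpopulation

# Network/task 2 (hypothetical weights): make predictions, but only
# report them for the identified subpopulation.
w_pred = np.array([0.5, 1.0, -0.3])
preds = (X @ w_pred)[selected]

print(selected.sum(), preds.shape)
```

Restricting predictions to the selected subpopulation is what keeps the second model from extrapolating to samples the study population does not represent.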

Exploring the Effectiveness of Instruction Tuning in Biomedical Language Processing

no code implementations31 Dec 2023 Omid Rohanian, Mohammadmahdi Nouriborji, David A. Clifton

In this context, our study investigates the potential of instruction tuning for biomedical language processing, applying this technique to two general LLMs of substantial scale.

named-entity-recognition Named Entity Recognition +3

A Survey of Large Language Models in Medicine: Progress, Application, and Challenge

1 code implementation9 Nov 2023 Hongjian Zhou, Fenglin Liu, Boyang Gu, Xinyu Zou, Jinfa Huang, Jinge Wu, Yiru Li, Sam S. Chen, Peilin Zhou, Junling Liu, Yining Hua, Chengfeng Mao, Chenyu You, Xian Wu, Yefeng Zheng, Lei Clifton, Zheng Li, Jiebo Luo, David A. Clifton

Therefore, this review aims to provide a detailed overview of the development and deployment of LLMs in medicine, including the challenges and opportunities they face.

Semi-Supervised Learning for Multi-Label Cardiovascular Diseases Prediction: A Multi-Dataset Study

no code implementations18 Jun 2023 Rushuang Zhou, Lei Lu, Zijun Liu, Ting Xiang, Zhen Liang, David A. Clifton, Yining Dong, Yuan-Ting Zhang

However, the label scarcity problem, the co-occurrence of multiple CVDs and the poor performance on unseen datasets greatly hinder the widespread application of deep learning-based models.

Data Augmentation · Electrocardiography (ECG) +2

A Brief Review of Hypernetworks in Deep Learning

1 code implementation12 Jun 2023 Vinod Kumar Chauhan, Jiandong Zhou, Ping Lu, Soheila Molaei, David A. Clifton

They offer a new way to design and train neural networks, and they have the potential to improve the performance of deep learning models on a variety of tasks.

Causal Inference · Continual Learning +5

Dynamic Inter-treatment Information Sharing for Individualized Treatment Effects Estimation

1 code implementation25 May 2023 Vinod Kumar Chauhan, Jiandong Zhou, Ghadeer Ghosheh, Soheila Molaei, David A. Clifton

To tackle this problem, we propose a deep learning framework based on 'soft weight sharing' to train ITE learners, enabling dynamic end-to-end information sharing among treatment groups.

counterfactual · Counterfactual Inference
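A minimal sketch of soft weight sharing among treatment groups, assuming each treatment head is a softmax-weighted mixture of a shared basis of weight vectors; the basis size and mixing scheme are illustrative assumptions, not the paper's exact design:

```python
import numpy as np

rng = np.random.default_rng(3)
n_features, n_basis, n_treatments = 5, 3, 2

# Shared basis of weight vectors, learned jointly by all groups.
basis = rng.standard_normal((n_basis, n_features))

# Per-treatment softmax mixing coefficients: each head is a soft,
# end-to-end-trainable combination of the shared basis.
logits = rng.standard_normal((n_treatments, n_basis))
alpha = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Effective head weights: W[t] = sum_k alpha[t, k] * basis[k]
W = alpha @ basis  # (n_treatments, n_features)

x = rng.standard_normal(n_features)
potential_outcomes = W @ x  # one prediction per treatment arm
print(potential_outcomes.shape)  # (2,)
```

Because every head draws on the same basis, gradient updates from one treatment group refine parameters used by all of them, which is the sense in which information is shared dynamically rather than hard-partitioned.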

Data Encoding For Healthcare Data Democratisation and Information Leakage Prevention

1 code implementation5 May 2023 Anshul Thakur, Tingting Zhu, Vinayak Abrol, Jacob Armstrong, Yujiang Wang, David A. Clifton

Experimental evaluation highlights that models trained on encoded time-series data effectively uphold the information bottleneck principle and hence exhibit less information leakage from trained models.

Deep Learning · Time Series

Medical records condensation: a roadmap towards healthcare data democratisation

no code implementations5 May 2023 Yujiang Wang, Anshul Thakur, Mingzhi Dong, Pingchuan Ma, Stavros Petridis, Li Shang, Tingting Zhu, David A. Clifton

The prevalence of artificial intelligence (AI) heralds an era of healthcare democratisation that promises every stakeholder a new and better way of life.

Clinical Knowledge · Dataset Condensation +2

ZeroNLG: Aligning and Autoencoding Domains for Zero-Shot Multimodal and Multilingual Natural Language Generation

2 code implementations11 Mar 2023 Bang Yang, Fenglin Liu, Yuexian Zou, Xian Wu, YaoWei Wang, David A. Clifton

We present the results of extensive experiments on twelve NLG tasks, showing that, without using any labeled downstream pairs for training, ZeroNLG generates high-quality and believable outputs and significantly outperforms existing zero-shot methods.

Image Captioning · Image to text +6

Rethinking Semi-Supervised Medical Image Segmentation: A Variance-Reduction Perspective

1 code implementation NeurIPS 2023 Chenyu You, Weicheng Dai, Yifei Min, Fenglin Liu, David A. Clifton, S Kevin Zhou, Lawrence Hamilton Staib, James S Duncan

For medical image segmentation, contrastive learning is the dominant practice to improve the quality of visual representations by contrasting semantically similar and dissimilar pairs of samples.

Contrastive Learning · Image Segmentation +3

Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations

4 code implementations21 Nov 2022 Peng Jin, Jinfa Huang, Fenglin Liu, Xian Wu, Shen Ge, Guoli Song, David A. Clifton, Jie Chen

Most video-and-language representation learning approaches employ contrastive learning, e. g., CLIP, to project the video and text features into a common latent space according to the semantic similarities of text-video pairs.

Ranked #2 on Video Retrieval on LSMDC (text-to-video Mean Rank metric)

Contrastive Learning · Representation Learning +5
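The CLIP-style projection described above contrasts matched against mismatched text-video pairs in a common latent space. A compact numpy sketch of the symmetric InfoNCE objective; the batch size, dimensionality, and temperature are assumptions for illustration:

```python
import numpy as np

def log_softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

rng = np.random.default_rng(4)
batch, dim, temperature = 8, 16, 0.07

# L2-normalised video and text embeddings; row i of each is a matched pair.
v = rng.standard_normal((batch, dim))
t = rng.standard_normal((batch, dim))
v /= np.linalg.norm(v, axis=1, keepdims=True)
t /= np.linalg.norm(t, axis=1, keepdims=True)

# Cosine-similarity logits; diagonal entries are the positive pairs.
logits = (v @ t.T) / temperature

# Symmetric cross-entropy: video-to-text and text-to-video directions.
labels = np.arange(batch)
loss_v2t = -log_softmax(logits)[labels, labels].mean()
loss_t2v = -log_softmax(logits.T)[labels, labels].mean()
loss = 0.5 * (loss_v2t + loss_t2v)
print(round(float(loss), 3))
```

Minimising this loss pulls each matched text-video pair together on the diagonal while pushing apart the off-diagonal mismatches, which is the projection-by-semantic-similarity behaviour the abstract describes.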

Adversarial De-confounding in Individualised Treatment Effects Estimation

no code implementations19 Oct 2022 Vinod Kumar Chauhan, Soheila Molaei, Marzia Hoque Tania, Anshul Thakur, Tingting Zhu, David A. Clifton

Observational studies have recently received significant attention from the machine learning community due to the increasingly available non-experimental observational data and the limitations of experimental studies, such as considerable cost, impracticality, and small, less representative sample sizes.

counterfactual · Counterfactual Inference

MiniALBERT: Model Distillation via Parameter-Efficient Recursive Transformers

1 code implementation12 Oct 2022 Mohammadmahdi Nouriborji, Omid Rohanian, Samaneh Kouchaki, David A. Clifton

Different strategies have been proposed in the literature to alleviate these problems, with the aim of creating effective compact models that match the performance of their bloated counterparts with negligible loss.

model

Mine yOur owN Anatomy: Revisiting Medical Image Segmentation with Extremely Limited Labels

1 code implementation27 Sep 2022 Chenyu You, Weicheng Dai, Fenglin Liu, Yifei Min, Nicha C. Dvornek, Xiaoxiao Li, David A. Clifton, Lawrence Staib, James S. Duncan

Blindly leveraging all pixels in training can hence lead to data imbalance issues and degraded performance; (2) consistency: it remains unclear whether a segmentation model has learned meaningful yet consistent anatomical features, given the intra-class variations between different anatomical features; and (3) diversity: the intra-slice correlations within the entire dataset have received significantly less attention.

Anatomy · Contrastive Learning +4

On the Effectiveness of Compact Biomedical Transformers

1 code implementation7 Sep 2022 Omid Rohanian, Mohammadmahdi Nouriborji, Samaneh Kouchaki, David A. Clifton

Language models pre-trained on biomedical corpora, such as BioBERT, have recently shown promising results on downstream biomedical tasks.

Continual Learning · Knowledge Distillation +1

COPER: Continuous Patient State Perceiver

1 code implementation5 Aug 2022 Vinod Kumar Chauhan, Anshul Thakur, Odhran O'Donoghue, David A. Clifton

COPER uses the Perceiver model and the concept of neural ordinary differential equations (ODEs) to learn the continuous-time dynamics of patient state, i.e., continuity of both the input and output spaces.

Irregular Time Series · Mortality Prediction +2
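The ODE side of the idea above can be sketched by integrating a simple dynamics function with fixed-step Euler so the latent patient state can be evaluated at arbitrary, irregularly spaced observation times. The dynamics matrix and step count are illustrative assumptions, not COPER's learned model:

```python
import numpy as np

def dynamics(h):
    """Stand-in for a learned ODE function dh/dt = f(h)."""
    A = np.array([[-0.1, 0.5], [-0.5, -0.1]])  # hypothetical weights
    return A @ h

def odeint_euler(h0, t0, t1, n_steps=100):
    """Advance the hidden state from t0 to t1 with fixed-step Euler."""
    h = h0.astype(float)
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        h = h + dt * dynamics(h)
    return h

# Irregularly spaced observation times, as in clinical records.
times = [0.0, 0.3, 1.1, 1.15, 2.7]
h = np.array([1.0, 0.0])
states = [h]
for t0, t1 in zip(times, times[1:]):
    h = odeint_euler(h, t0, t1)
    states.append(h)

print(len(states))  # one latent state per observation time
```

Because the state is defined by an ODE rather than discrete recurrence steps, the gaps between observations can be arbitrary, which is what makes this formulation suit irregular time series.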

Mixture of Input-Output Hidden Markov Models for Heterogeneous Disease Progression Modeling

1 code implementation24 Jul 2022 Taha Ceritli, Andrew P. Creagh, David A. Clifton

A particular challenge for disease progression modeling is the heterogeneity of a disease and its manifestations in the patients.

Time Series · Time Series Analysis

Multimodal Learning with Transformers: A Survey

no code implementations13 Jun 2022 Peng Xu, Xiatian Zhu, David A. Clifton

The Transformer is a promising neural network learner and has achieved great success in various machine learning tasks.

Survey

How to Understand Masked Autoencoders

no code implementations8 Feb 2022 Shuhao Cao, Peng Xu, David A. Clifton

"Masked Autoencoders (MAE) Are Scalable Vision Learners" revolutionizes self-supervised learning: it not only achieves the state of the art for image pre-training, but is also a milestone that bridges the gap between visual and linguistic masked autoencoding (BERT-style) pre-training.

Self-Supervised Learning

Towards Scheduling Federated Deep Learning using Meta-Gradients for Inter-Hospital Learning

no code implementations4 Jul 2021 Rasheed el-Bouri, Tingting Zhu, David A. Clifton

In this work, we aim to utilise patient data extracted from multiple hospital data centres to train a machine learning model without sacrificing patient privacy.

Federated Learning · Scheduling +1

PCPs: Patient Cardiac Prototypes

no code implementations28 Nov 2020 Dani Kiyasseh, Tingting Zhu, David A. Clifton

Many clinical deep learning algorithms are population-based and difficult to interpret.

Contrastive Learning · Dataset Distillation

CROCS: Clustering and Retrieval of Cardiac Signals Based on Patient Disease Class, Sex, and Age

no code implementations NeurIPS 2021 Dani Kiyasseh, Tingting Zhu, David A. Clifton

The process of manually searching for relevant instances in, and extracting information from, clinical databases underpins a multitude of clinical tasks.

Clustering · Contrastive Learning +2

DROPS: Deep Retrieval of Physiological Signals via Attribute-specific Clinical Prototypes

no code implementations28 Sep 2020 Dani Kiyasseh, Tingting Zhu, David A. Clifton

The ongoing digitization of health records within the healthcare industry results in large-scale datasets.

Attribute · Clustering +3

CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and Patients

2 code implementations27 May 2020 Dani Kiyasseh, Tingting Zhu, David A. Clifton

This data can be exploited via contrastive learning, a self-supervised pre-training method that encourages representations of instances to be similar to one another.

Contrastive Learning · Linear evaluation

ProSelfLC: Progressive Self Label Correction for Training Robust Deep Neural Networks

5 code implementations CVPR 2021 Xinshao Wang, Yang Hua, Elyor Kodirov, David A. Clifton, Neil M. Robertson

Keywords: entropy minimisation, maximum entropy, confidence penalty, self knowledge distillation, label correction, label noise, semi-supervised learning, output regularisation

Self-Knowledge Distillation
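The keyword list above centres on label correction. A small sketch of the progressive idea, where the training target gradually shifts from the possibly noisy annotation towards the model's own prediction as trust grows; the linear trust ramp is an assumption for illustration, not the paper's exact schedule:

```python
import numpy as np

def corrected_target(one_hot, model_prob, trust):
    """Convex combination of annotation and model belief.
    trust in [0, 1] grows over training (progressive self correction)."""
    return (1.0 - trust) * one_hot + trust * model_prob

one_hot = np.array([0.0, 1.0, 0.0])     # possibly noisy label
model_prob = np.array([0.7, 0.2, 0.1])  # confident disagreement

early = corrected_target(one_hot, model_prob, trust=0.1)
late = corrected_target(one_hot, model_prob, trust=0.9)

# Targets stay valid distributions, and late training leans on the model.
assert np.isclose(early.sum(), 1.0) and np.isclose(late.sum(), 1.0)
print(early.round(2), late.round(2))
```

Early in training the annotation dominates; once the model is trusted, a confidently predicted class can override a noisy label, which is how label correction emerges from the same formula.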

SoQal: Selective Oracle Questioning in Active Learning

no code implementations22 Apr 2020 Dani Kiyasseh, Tingting Zhu, David A. Clifton

Large sets of unlabelled data within the healthcare domain remain underutilized.

Active Learning

CLOPS: Continual Learning of Physiological Signals

no code implementations20 Apr 2020 Dani Kiyasseh, Tingting Zhu, David A. Clifton

Deep learning algorithms are known to experience destructive interference when instances violate the assumption of being independent and identically distributed (i.i.d.).

Continual Learning · Multi-Task Learning

SoQal: Selective Oracle Questioning for Consistency Based Active Learning of Cardiac Signals

2 code implementations20 Apr 2020 Dani Kiyasseh, Tingting Zhu, David A. Clifton

One way to mitigate this burden is via active learning (AL) which involves the (a) acquisition and (b) annotation of informative unlabelled instances.

Active Learning · Pseudo Label +1

Preserving Patient Privacy while Training a Predictive Model of In-hospital Mortality

no code implementations1 Dec 2019 Pulkit Sharma, Farah E. Shamout, David A. Clifton

Machine learning models can be used for pattern recognition in medical data in order to improve patient outcomes, such as the prediction of in-hospital mortality.

Federated Learning

Fusing Continuous-valued Medical Labels using a Bayesian Model

no code implementations23 Mar 2015 Tingting Zhu, Nic Dunkley, Joachim Behar, David A. Clifton, Gari D. Clifford

To address these problems, a Bayesian Continuous-valued Label Aggregator (BCLA) is proposed to provide a reliable estimate of the aggregated label while accurately inferring the precision and bias of each algorithm.

Time Series · Time Series Analysis
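A toy sketch of the general fuse-then-reestimate pattern behind continuous-valued label aggregation: alternate between a precision-weighted fused label and per-algorithm bias and precision estimates. This illustrates the pattern only, not the BCLA model itself; the data, variance floor, and iteration count are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
n_items, n_annotators = 200, 3

# Simulated ground truth and three labelling algorithms, each with its
# own additive bias and noise level.
truth = rng.standard_normal(n_items)
true_bias = np.array([0.5, -0.3, 0.0])
noise_sd = np.array([0.1, 0.3, 0.2])
labels = truth[:, None] + true_bias \
    + rng.standard_normal((n_items, n_annotators)) * noise_sd

# Initialise: trust everyone equally, assume no bias.
b = np.zeros(n_annotators)
prec = np.ones(n_annotators)
for _ in range(10):
    # Fuse: precision-weighted mean of bias-corrected labels.
    fused = ((labels - b) * prec).sum(axis=1) / prec.sum()
    # Re-estimate each algorithm's bias and precision from residuals
    # (small variance floor added for numerical stability).
    resid = labels - fused[:, None]
    b = resid.mean(axis=0)
    prec = 1.0 / (resid.var(axis=0) + 1e-6)

print(np.round(b, 2))  # recovered offsets, up to a shared shift
```

Note that absolute biases are only identified up to a shared shift of the fused label; the pairwise differences between algorithms' biases are what the procedure pins down.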
