no code implementations • 4 Feb 2025 • Li Wang, Boyan Gao, Yanran Li, Zhao Wang, Xiaosong Yang, David A. Clifton, Jun Xiao
Despite the groundbreaking success of diffusion models in generating high-fidelity images, their latent space remains relatively under-explored, even though it holds significant promise for enabling versatile and interpretable image editing capabilities.
no code implementations • 3 Feb 2025 • Boyan Gao, Bo Zhao, Shreyank N Gowda, Xingrun Xing, Yibo Yang, Timothy Hospedales, David A. Clifton
These issues worsen when the datasets are learned by matching the trajectories of networks trained on the real and synthetic datasets over a long-horizon inner loop.
2 code implementations • 9 Jan 2025 • Xiaojie Li, Yibo Yang, Jianlong Wu, David A. Clifton, Yue Yu, Bernard Ghanem, Min Zhang
To this end, we propose Continuous Knowledge-Preserving Decomposition for FSCIL (CKPD-FSCIL), a framework that decomposes a model's weights into two parts: one that compacts existing knowledge (knowledge-sensitive components) and another that carries redundant capacity to accommodate new abilities (redundant-capacity components).
class-incremental learning • Few-Shot Class-Incremental Learning • +1
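The weight decomposition described above can be sketched with a plain truncated SVD. This is a minimal illustration under the assumption that the top singular directions play the role of the knowledge-sensitive components; the paper's actual selection criterion may differ:

```python
import numpy as np

def decompose_weights(W: np.ndarray, k: int):
    """Split W into a low-rank 'knowledge-sensitive' part (top-k singular
    directions) and a residual 'redundant-capacity' part."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    W_knowledge = (U[:, :k] * S[:k]) @ Vt[:k]  # compacts existing knowledge
    W_capacity = W - W_knowledge               # spare capacity for new abilities
    return W_knowledge, W_capacity

W = np.random.default_rng(0).normal(size=(64, 32))
W_k, W_c = decompose_weights(W, k=8)
# The two parts sum back to the original weights exactly.
assert np.allclose(W_k + W_c, W)
```

Because the split is exact, freezing the low-rank part while adapting only the residual preserves the compacted knowledge by construction.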
no code implementations • 24 Dec 2024 • Sihao Liu, Yibo Yang, Xiaojie Li, David A. Clifton, Bernard Ghanem
However, they often overlook the adaptability of the model, limiting the ability to learn generalizable and discriminative features incrementally from online training data.
no code implementations • 5 Dec 2024 • Anshul Thakur, Yichen Huang, Soheila Molaei, Yujiang Wang, David A. Clifton
Shared training approaches, such as multi-task learning (MTL) and gradient-based meta-learning, are widely used in various machine learning applications, but they often suffer from negative transfer, leading to performance degradation in specific tasks.
no code implementations • 5 Aug 2024 • Shreyank N Gowda, Boyan Gao, David A. Clifton
This breakthrough highlights the potential for cross-modality approaches in enhancing the capabilities of AI models, particularly in fields like video emotion analysis where the demand for efficiency and accuracy is constantly rising.
Ranked #7 on Dynamic Facial Expression Recognition on FERV39k
no code implementations • 31 Jul 2024 • Shreyank N Gowda, David A. Clifton
The Segment Anything Model (SAM) has achieved remarkable successes in the realm of natural image segmentation, but its deployment in the medical imaging sphere has encountered challenges.
no code implementations • 23 Jul 2024 • Shreyank N Gowda, David A. Clifton
Contemporary medical contrastive learning faces challenges from inconsistent semantics and sample pair morphology, leading to dispersed and converging semantic shifts.
Ranked #89 on Multi-Label Classification on CheXpert
no code implementations • 14 Jul 2024 • Omid Rohanian, Mohammadmahdi Nouriborji, Olena Seminog, Rodrigo Furst, Thomas Mendy, Shanthi Levanita, Zaharat Kadri-Alabi, Nusrat Jabin, Daniela Toale, Georgina Humphreys, Emilia Antonio, Adrian Bucher, Alice Norton, David A. Clifton
The release of PPACE and its associated dataset offers valuable resources for researchers in multilabel biomedical document classification and supports advancements in aligning biomedical research with key global health priorities.
1 code implementation • 5 Jul 2024 • Xingrun Xing, Boyan Gao, Zheng Zhang, David A. Clifton, Shitao Xiao, Li Du, Guoqi Li, Jiajun Zhang
In contrast, human brains, which contain approximately 86 billion biological neurons, exhibit significantly greater energy efficiency than LLMs with a similar number of parameters.
2 code implementations • 20 Jun 2024 • Rushuang Zhou, Lei Clifton, Zijun Liu, Kannie W. Y. Chan, David A. Clifton, Yuan-Ting Zhang, Yining Dong
It enables a robust adaptation of pre-trained models on downstream datasets with limited supervision and high computational efficiency.
1 code implementation • 13 May 2024 • Vinod Kumar Chauhan, Lei Clifton, Achille Salaün, Huiqi Yvonne Lu, Kim Branson, Patrick Schwab, Gaurav Nigam, David A. Clifton
Specifically, we propose two independent networks (T-Net) and a multitasking network (MT-Net) for addressing SSB, where one network/task identifies the target subpopulation that is representative of the study population and the second makes predictions for the identified subpopulation.
1 code implementation • 25 Apr 2024 • Fenglin Liu, Zheng Li, Hongjian Zhou, Qingyu Yin, Jingfeng Yang, Xianfeng Tang, Chen Luo, Ming Zeng, Haoming Jiang, Yifan Gao, Priyanka Nigam, Sreyashi Nag, Bing Yin, Yining Hua, Xuan Zhou, Omid Rohanian, Anshul Thakur, Lei Clifton, David A. Clifton
The adoption of large language models (LLMs) to assist clinicians has attracted remarkable attention.
no code implementations • 31 Dec 2023 • Omid Rohanian, Mohammadmahdi Nouriborji, David A. Clifton
In this context, our study investigates the potential of instruction tuning for biomedical language processing, applying this technique to two general LLMs of substantial scale.
1 code implementation • 9 Nov 2023 • Hongjian Zhou, Fenglin Liu, Boyang Gu, Xinyu Zou, Jinfa Huang, Jinge Wu, Yiru Li, Sam S. Chen, Peilin Zhou, Junling Liu, Yining Hua, Chengfeng Mao, Chenyu You, Xian Wu, Yefeng Zheng, Lei Clifton, Zheng Li, Jiebo Luo, David A. Clifton
Therefore, this review aims to provide a detailed overview of the development and deployment of LLMs in medicine, including the challenges and opportunities they face.
no code implementations • 2 Sep 2023 • Fengxiang Bie, Yibo Yang, Zhongzhu Zhou, Adam Ghanem, Minjia Zhang, Zhewei Yao, Xiaoxia Wu, Connor Holmes, Pareesa Golnari, David A. Clifton, Yuxiong He, Dacheng Tao, Shuaiwen Leon Song
Text-to-image generation (TTI) refers to the use of models that process text input and generate high-fidelity images based on text descriptions.
no code implementations • 18 Jun 2023 • Rushuang Zhou, Lei Lu, Zijun Liu, Ting Xiang, Zhen Liang, David A. Clifton, Yining Dong, Yuan-Ting Zhang
However, the label scarcity problem, the co-occurrence of multiple CVDs and the poor performance on unseen datasets greatly hinder the widespread application of deep learning-based models.
1 code implementation • 12 Jun 2023 • Vinod Kumar Chauhan, Jiandong Zhou, Ping Lu, Soheila Molaei, David A. Clifton
They offer a new way to design and train neural networks, and they have the potential to improve the performance of deep learning models on a variety of tasks.
1 code implementation • 25 May 2023 • Vinod Kumar Chauhan, Jiandong Zhou, Ghadeer Ghosheh, Soheila Molaei, David A. Clifton
To tackle this problem, we propose a deep learning framework based on 'soft weight sharing' to train ITE learners, enabling dynamic end-to-end information sharing among treatment groups.
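A minimal sketch of the soft-weight-sharing idea, assuming a sigmoid-gated blend of shared and group-specific parameters. The gate mechanism here is an illustrative assumption, not the paper's exact architecture:

```python
import numpy as np

class SoftSharedLayer:
    """One layer whose effective weights are a gated blend of shared and
    group-specific parameters (the gate logits would be learned end-to-end)."""
    def __init__(self, d_in, d_out, n_groups, seed=0):
        rng = np.random.default_rng(seed)
        self.W_shared = rng.normal(size=(d_in, d_out))
        self.W_group = rng.normal(size=(n_groups, d_in, d_out))
        self.alpha = np.zeros(n_groups)  # gate logits, one per treatment group

    def forward(self, x, group):
        gate = 1.0 / (1.0 + np.exp(-self.alpha[group]))  # sigmoid in (0, 1)
        W = gate * self.W_shared + (1.0 - gate) * self.W_group[group]
        return x @ W

layer = SoftSharedLayer(d_in=5, d_out=3, n_groups=2)
x = np.ones((4, 5))
y_treated = layer.forward(x, group=0)   # shape (4, 3)
y_control = layer.forward(x, group=1)   # shape (4, 3)
```

Because the blend is differentiable, how much each treatment group borrows from the shared parameters is itself learned, rather than fixed in advance as in hard weight sharing.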
1 code implementation • 5 May 2023 • Anshul Thakur, Tingting Zhu, Vinayak Abrol, Jacob Armstrong, Yujiang Wang, David A. Clifton
Experimental evaluation highlights that models trained on encoded time-series data effectively uphold the information bottleneck principle and hence exhibit less information leakage.
no code implementations • 5 May 2023 • Yujiang Wang, Anshul Thakur, Mingzhi Dong, Pingchuan Ma, Stavros Petridis, Li Shang, Tingting Zhu, David A. Clifton
The prevalence of artificial intelligence (AI) has ushered in a vision of healthcare democratisation that promises every stakeholder a new and better way of life.
2 code implementations • 11 Mar 2023 • Bang Yang, Fenglin Liu, Yuexian Zou, Xian Wu, YaoWei Wang, David A. Clifton
We present the results of extensive experiments on twelve NLG tasks, showing that, without using any labeled downstream pairs for training, ZeroNLG generates high-quality and believable outputs and significantly outperforms existing zero-shot methods.
no code implementations • 28 Feb 2023 • Taha Ceritli, Ghadeer O. Ghosheh, Vinod Kumar Chauhan, Tingting Zhu, Andrew P. Creagh, David A. Clifton
Electronic Health Records (EHRs) contain sensitive patient information, which presents privacy concerns when sharing such data.
1 code implementation • 9 Feb 2023 • Omid Rohanian, Mohammadmahdi Nouriborji, Hannah Jauncey, Samaneh Kouchaki, ISARIC Clinical Characterisation Group, Lei Clifton, Laura Merson, David A. Clifton
To our knowledge, this is the first comprehensive study specifically focused on creating efficient and compact transformers for clinical NLP tasks.
1 code implementation • NeurIPS 2023 • Chenyu You, Weicheng Dai, Yifei Min, Fenglin Liu, David A. Clifton, S Kevin Zhou, Lawrence Hamilton Staib, James S Duncan
For medical image segmentation, contrastive learning is the dominant practice to improve the quality of visual representations by contrasting semantically similar and dissimilar pairs of samples.
4 code implementations • 21 Nov 2022 • Peng Jin, Jinfa Huang, Fenglin Liu, Xian Wu, Shen Ge, Guoli Song, David A. Clifton, Jie Chen
Most video-and-language representation learning approaches employ contrastive learning, e.g., CLIP, to project the video and text features into a common latent space according to the semantic similarities of text-video pairs.
Ranked #2 on Video Retrieval on LSMDC (text-to-video Mean Rank metric)
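The CLIP-style objective referred to above, projecting both modalities into one space and contrasting matched against mismatched text-video pairs, can be sketched as a symmetric InfoNCE loss. Batch size, embedding dimension, and temperature below are illustrative:

```python
import numpy as np

def info_nce(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch: matched text-video pairs sit on the
    diagonal of the similarity matrix and are pushed above mismatched pairs."""
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = (v @ t.T) / temperature                      # (batch, batch)
    idx = np.arange(len(logits))

    def xent(l):  # cross-entropy with the diagonal as the target class
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
loss = info_nce(rng.normal(size=(8, 128)), rng.normal(size=(8, 128)))
```

When the two embeddings are perfectly aligned the loss approaches zero; random embeddings yield a loss near log of the batch size.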
2 code implementations • 23 Oct 2022 • Fenglin Liu, Bang Yang, Chenyu You, Xian Wu, Shen Ge, Zhangdaihong Liu, Xu Sun, Yang Yang, David A. Clifton
We demonstrate the effectiveness of our method in generating patient discharge instructions.
no code implementations • 19 Oct 2022 • Vinod Kumar Chauhan, Soheila Molaei, Marzia Hoque Tania, Anshul Thakur, Tingting Zhu, David A. Clifton
Observational studies have recently received significant attention from the machine learning community due to the increasingly available non-experimental observational data and the limitations of experimental studies, such as considerable cost, impracticality, and small, less representative sample sizes.
1 code implementation • 12 Oct 2022 • Mohammadmahdi Nouriborji, Omid Rohanian, Samaneh Kouchaki, David A. Clifton
Different strategies have been proposed in the literature to alleviate these problems, with the aim of creating effective compact models that nearly match the performance of their larger counterparts with negligible losses.
1 code implementation • 27 Sep 2022 • Chenyu You, Weicheng Dai, Fenglin Liu, Yifei Min, Nicha C. Dvornek, Xiaoxiao Li, David A. Clifton, Lawrence Staib, James S. Duncan
Blindly leveraging all pixels in training can hence lead to data imbalance issues and cause deteriorated performance; (2) consistency: it remains unclear whether a segmentation model has learned meaningful and yet consistent anatomical features due to the intra-class variations between different anatomical features; and (3) diversity: the intra-slice correlations within the entire dataset have received significantly less attention.
1 code implementation • 7 Sep 2022 • Omid Rohanian, Mohammadmahdi Nouriborji, Samaneh Kouchaki, David A. Clifton
Language models pre-trained on biomedical corpora, such as BioBERT, have recently shown promising results on downstream biomedical tasks.
Ranked #2 on
Named Entity Recognition (NER)
on BC2GM
1 code implementation • 5 Aug 2022 • Vinod Kumar Chauhan, Anshul Thakur, Odhran O'Donoghue, David A. Clifton
COPER uses the Perceiver model and neural ordinary differential equations (ODEs) to learn the continuous-time dynamics of patient state, i.e., continuity of both the input and output spaces.
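The neural-ODE idea can be illustrated with a fixed-step Euler integrator: a small network parameterises the time derivative of the patient state, and integrating it yields the state at any continuous time. The dynamics function and solver below are illustrative assumptions, not COPER's actual components:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(4, 16)), rng.normal(size=(16, 4))

def dynamics(h, t):
    """dh/dt parameterised by a tiny MLP of the current patient state
    (time-invariant in this sketch)."""
    return np.tanh(h @ W1) @ W2

def odeint_euler(h0, t0, t1, steps=100):
    """Fixed-step Euler integration of the learned dynamics."""
    h, t = h0.copy(), t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * dynamics(h, t)
        t += dt
    return h

h0 = np.ones(4)                       # initial patient state (illustrative)
h_later = odeint_euler(h0, 0.0, 2.5)  # state at an arbitrary continuous time
```

Because the solver accepts any real-valued end time, irregularly sampled clinical observations need no imputation onto a fixed grid.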
1 code implementation • 24 Jul 2022 • Taha Ceritli, Andrew P. Creagh, David A. Clifton
A particular challenge for disease progression modeling is the heterogeneity of a disease and its manifestations in the patients.
1 code implementation • 30 Jun 2022 • Xinshao Wang, Yang Hua, Elyor Kodirov, Sankha Subhra Mukherjee, David A. Clifton, Neil M. Robertson
Regarding issue (2), the effectiveness of ProSelfLC defends entropy minimisation.
no code implementations • 13 Jun 2022 • Peng Xu, Xiatian Zhu, David A. Clifton
The Transformer is a promising neural network learner and has achieved great success in various machine learning tasks.
1 code implementation • 6 Jun 2022 • Hang Yuan, Shing Chan, Andrew P. Creagh, Catherine Tong, Aidan Acquah, David A. Clifton, Aiden Doherty
Advances in deep learning for human activity recognition have been relatively limited due to the lack of large labelled datasets.
1 code implementation • 24 May 2022 • Jenny Yang, Rasheed el-Bouri, Odhran O'Donoghue, Alexander S. Lachapelle, Andrew A. S. Soltan, David A. Clifton
With the rapid growth of memory and computing power, datasets are becoming increasingly complex and imbalanced.
no code implementations • 8 Feb 2022 • Shuhao Cao, Peng Xu, David A. Clifton
"Masked Autoencoders (MAE) Are Scalable Vision Learners" revolutionizes the self-supervised learning method in that it not only achieves the state-of-the-art for image pre-training, but is also a milestone that bridges the gap between visual and linguistic masked autoencoding (BERT-style) pre-trainings.
no code implementations • 4 Jul 2021 • Rasheed el-Bouri, Tingting Zhu, David A. Clifton
In this work, we aim to utilise patient data extracted from multiple hospital data centres to train a machine learning model without sacrificing patient privacy.
no code implementations • 2 Jun 2021 • Ziyun Li, Xinshao Wang, Di Hu, Neil M. Robertson, David A. Clifton, Christoph Meinel, Haojin Yang
Additionally, CMD covers two special cases: zero-knowledge and all knowledge, leading to a unified MKD framework.
no code implementations • 1 Jan 2021 • Dani Kiyasseh, Tingting Zhu, David A. Clifton
The ubiquity and rate of collection of physiological signals produce large, unlabelled datasets.
no code implementations • 28 Nov 2020 • Dani Kiyasseh, Tingting Zhu, David A. Clifton
Many clinical deep learning algorithms are population-based and difficult to interpret.
no code implementations • NeurIPS 2021 • Dani Kiyasseh, Tingting Zhu, David A. Clifton
The process of manually searching for relevant instances in, and extracting information from, clinical databases underpins a multitude of clinical tasks.
no code implementations • 28 Sep 2020 • Dani Kiyasseh, Tingting Zhu, David A. Clifton
The ongoing digitization of health records within the healthcare industry results in large-scale datasets.
2 code implementations • 27 May 2020 • Dani Kiyasseh, Tingting Zhu, David A. Clifton
This data can be exploited via contrastive learning, a self-supervised pre-training method that encourages representations of instances to be similar to one another.
5 code implementations • CVPR 2021 • Xinshao Wang, Yang Hua, Elyor Kodirov, David A. Clifton, Neil M. Robertson
Keywords: entropy minimisation, maximum entropy, confidence penalty, self knowledge distillation, label correction, label noise, semi-supervised learning, output regularisation
no code implementations • 22 Apr 2020 • Dani Kiyasseh, Tingting Zhu, David A. Clifton
Large sets of unlabelled data within the healthcare domain remain underutilized.
no code implementations • 20 Apr 2020 • Dani Kiyasseh, Tingting Zhu, David A. Clifton
Deep learning algorithms are known to experience destructive interference when instances violate the assumption of being independent and identically distributed (i.i.d.).
2 code implementations • 20 Apr 2020 • Dani Kiyasseh, Tingting Zhu, David A. Clifton
One way to mitigate this burden is via active learning (AL) which involves the (a) acquisition and (b) annotation of informative unlabelled instances.
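Step (a), acquisition, is often driven by predictive uncertainty. Below is a minimal sketch that scores unlabelled instances by predictive entropy and picks the top-k; the scoring rule is a generic illustration, not necessarily the paper's acquisition function:

```python
import numpy as np

def acquire_by_entropy(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k instances with highest predictive entropy."""
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-k:][::-1]   # most uncertain first

probs = np.array([[0.98, 0.02],    # confident  -> low entropy
                  [0.55, 0.45],    # uncertain  -> high entropy
                  [0.80, 0.20]])   # in between
print(acquire_by_entropy(probs, k=2))  # → [1 2]
```

The selected indices would then be sent for annotation, step (b), and folded back into the labelled training pool.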
no code implementations • 10 Dec 2019 • Girmaw Abebe Tadesse, Tingting Zhu, Nhan Le Nguyen Thanh, Nguyen Thanh Hung, Ha Thi Hai Duong, Truong Huu Khanh, Pham Van Quang, Duc Duong Tran, Lam Minh Yen, H Rogier Van Doorn, Nguyen Van Hao, John Prince, Hamza Javed, Dani Kiyasseh, Le Van Tan, Louise Thwaites, David A. Clifton
A support vector machine is employed to classify the ANSD levels.
no code implementations • 1 Dec 2019 • Pulkit Sharma, Farah E. Shamout, David A. Clifton
Machine learning models can be used for pattern recognition in medical data in order to improve patient outcomes, such as the prediction of in-hospital mortality.
no code implementations • 23 Mar 2015 • Tingting Zhu, Nic Dunkley, Joachim Behar, David A. Clifton, Gari D. Clifford
To address these problems, a Bayesian Continuous-valued Label Aggregator (BCLA) is proposed to provide a reliable estimation of label aggregation while accurately inferring the precision and bias of each algorithm.
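A simplified, non-Bayesian sketch of the underlying idea: alternately estimate each algorithm's bias and precision against a consensus, then re-aggregate with precision weights and bias correction. The full BCLA model is Bayesian and more involved than this illustration:

```python
import numpy as np

def aggregate(labels, n_iter=20):
    """labels: (n_algorithms, n_samples) continuous-valued annotations.
    Iteratively re-estimates per-algorithm bias and residual precision
    against the consensus, then re-aggregates with precision weighting."""
    consensus = labels.mean(axis=0)
    for _ in range(n_iter):
        bias = (labels - consensus).mean(axis=1, keepdims=True)
        resid_var = (labels - bias - consensus).var(axis=1, keepdims=True)
        precision = 1.0 / (resid_var + 1e-12)
        consensus = (precision * (labels - bias)).sum(axis=0) / precision.sum()
    return consensus

rng = np.random.default_rng(0)
truth = np.linspace(0.0, 1.0, 50)
annotations = np.stack([
    truth + 0.30 + 0.05 * rng.normal(size=50),  # biased, mildly noisy
    truth + 0.01 * rng.normal(size=50),         # precise, unbiased
    truth + 0.20 * rng.normal(size=50),         # noisy, unbiased
])
estimate = aggregate(annotations)  # tracks the shape of `truth` closely
```

Note that the absolute offset of the consensus is not identifiable from the annotations alone; here it is anchored by the initial mean, whereas the Bayesian treatment places priors on the bias and precision terms.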