Search Results for author: Mingxuan Liu

Found 16 papers, 8 papers with code

Fairness-Aware Interpretable Modeling (FAIM) for Trustworthy Machine Learning in Healthcare

no code implementations • 8 Mar 2024 • Mingxuan Liu, Yilin Ning, Yuhe Ke, Yuqing Shang, Bibhas Chakraborty, Marcus Eng Hock Ong, Roger Vaughan, Nan Liu

The escalating integration of machine learning in high-stakes fields such as healthcare raises substantial concerns about model fairness.

Fairness

Spiking-PhysFormer: Camera-Based Remote Photoplethysmography with Parallel Spike-driven Transformer

no code implementations • 7 Feb 2024 • Mingxuan Liu, Jiankai Tang, Haoxiang Li, Jiahao Qi, Siwei Li, Kegang Wang, Yuntao Wang, Hong Chen

Additionally, the power consumption of the transformer block is reduced by a factor of 12.2, while maintaining performance comparable to PhysFormer and other ANN-based models.

Democratizing Fine-grained Visual Recognition with Large Language Models

no code implementations • 24 Jan 2024 • Mingxuan Liu, Subhankar Roy, Wenjing Li, Zhun Zhong, Nicu Sebe, Elisa Ricci

Identifying subordinate-level categories from images is a longstanding task in computer vision and is referred to as fine-grained visual recognition (FGVR).

Fine-Grained Visual Recognition • World Knowledge

Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist

2 code implementations • 2 Nov 2023 • Yilin Ning, Salinelat Teixayavong, Yuqing Shang, Julian Savulescu, Vaishaanth Nagaraj, Di Miao, Mayli Mertens, Daniel Shu Wei Ting, Jasmine Chiat Ling Ong, Mingxuan Liu, Jiuwen Cao, Michael Dunn, Roger Vaughan, Marcus Eng Hock Ong, Joseph Jao-Yiu Sung, Eric J Topol, Nan Liu

The widespread use of ChatGPT and other emerging technology powered by generative artificial intelligence (GenAI) has drawn much attention to potential ethical issues, especially in high-stakes applications such as healthcare, but ethical discussions are yet to translate into operationalisable solutions.

Ethics

SAM-Deblur: Let Segment Anything Boost Image Deblurring

1 code implementation • 5 Sep 2023 • Siwei Li, Mingxuan Liu, Yating Zhang, Shu Chen, Haoxiang Li, Zifei Dou, Hong Chen

Image deblurring is a critical task in the field of image restoration, aiming to eliminate blurring artifacts.

Deblurring • Image Deblurring +1

Spiking-Diffusion: Vector Quantized Discrete Diffusion Model with Spiking Neural Networks

1 code implementation • 20 Aug 2023 • Mingxuan Liu, Jie Gan, Rui Wen, Tao Li, Yongli Chen, Hong Chen

To fill the gap, we propose a Spiking-Diffusion model, which is based on the vector quantized discrete diffusion model.

Image Generation
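
The abstract above leans on vector quantization as the backbone of the discrete diffusion model. As background only, here is a minimal PyTorch sketch of the generic vector-quantization step (nearest-codebook lookup with a straight-through gradient); it is not the paper's spiking implementation, and the codebook size, latent shape, and class name are illustrative assumptions.

```python
# Generic vector-quantization (codebook lookup) step used by VQ-based
# discrete diffusion models. Sizes and shapes below are illustrative only.
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes: int = 512, code_dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)  # learnable discrete codes

    def forward(self, z):
        # z: (batch, n_tokens, code_dim) continuous encoder output
        flat = z.reshape(-1, z.size(-1))
        dists = torch.cdist(flat, self.codebook.weight)       # distance to every code
        indices = dists.argmin(dim=-1).reshape(z.shape[:-1])  # discrete token ids
        z_q = self.codebook(indices)                          # quantized latents
        z_q = z + (z_q - z).detach()                          # straight-through estimator
        return z_q, indices

vq = VectorQuantizer()
z_q, ids = vq(torch.randn(2, 16, 64))
print(z_q.shape, ids.shape)  # torch.Size([2, 16, 64]) torch.Size([2, 16])
```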

Towards clinical AI fairness: A translational perspective

no code implementations • 26 Apr 2023 • Mingxuan Liu, Yilin Ning, Salinelat Teixayavong, Mayli Mertens, Jie Xu, Daniel Shu Wei Ting, Lionel Tim-Ee Cheng, Jasmine Chiat Ling Ong, Zhen Ling Teo, Ting Fang Tan, Ravi Chandran Narrendar, Fei Wang, Leo Anthony Celi, Marcus Eng Hock Ong, Nan Liu

In this paper, we discuss the misalignment between technical and clinical perspectives of AI fairness, highlight the barriers to AI fairness' translation to healthcare, advocate multidisciplinary collaboration to bridge the knowledge gap, and provide possible solutions to address the clinical concerns pertaining to AI fairness.

Fairness • Translation

Large-scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery

1 code implementation • 28 Mar 2023 • Mingxuan Liu, Subhankar Roy, Zhun Zhong, Nicu Sebe, Elisa Ricci

Discovering novel concepts from unlabelled data and in a continuous manner is an important desideratum of lifelong learners.

Novel Class Discovery • Novel Concepts

FedScore: A privacy-preserving framework for federated scoring system development

1 code implementation • 1 Mar 2023 • Siqi Li, Yilin Ning, Marcus Eng Hock Ong, Bibhas Chakraborty, Chuan Hong, Feng Xie, Han Yuan, Mingxuan Liu, Daniel M. Buckland, Yong Chen, Nan Liu

We also calculated the average AUC values and standard deviations (SDs) for each local model; the FedScore model showed promising accuracy and stability, with an average AUC closest to that of the pooled model and an SD lower than that of most local models.

Federated Learning • Model Selection +2
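
As a rough, assumption-laden illustration of the comparison described in the abstract (simulated sites, a plain logistic regression, toy data; the actual FedScore procedure lives in the linked code), the sketch below computes AUC for several local models on a common test set and summarizes them by mean and SD alongside a pooled model.

```python
# Sketch: summarize per-site ("local") model discrimination by mean AUC and SD,
# for comparison against a pooled model. Data and model choice are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Simulate K local sites by partitioning the training data.
K = 5
site_idx = np.array_split(rng.permutation(len(X_train)), K)

local_aucs = []
for idx in site_idx:
    local_model = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
    local_aucs.append(roc_auc_score(y_test, local_model.predict_proba(X_test)[:, 1]))

pooled_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pooled_auc = roc_auc_score(y_test, pooled_model.predict_proba(X_test)[:, 1])

print(f"local AUCs: mean={np.mean(local_aucs):.3f}, SD={np.std(local_aucs):.3f}")
print(f"pooled AUC: {pooled_auc:.3f}")
```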

Shapley variable importance cloud for machine learning models

no code implementations • 16 Dec 2022 • Yilin Ning, Mingxuan Liu, Nan Liu

Current practice in interpretable machine learning often focuses on explaining the final model trained from data, e.g., by using the Shapley additive explanations (SHAP) method.

Interpretable Machine Learning • regression
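
For readers who have not used the SHAP method the abstract refers to, the snippet below is a minimal, generic usage sketch: explain one trained ("final") model with shap.TreeExplainer on toy data. It illustrates the starting point the paper critiques, not the Shapley variable importance cloud itself; the model and data are placeholders.

```python
# Minimal SHAP usage sketch: explain a single trained ("final") model.
# Toy data and model are placeholders; only the SHAP workflow is the point.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # per-feature attributions for each sample
print(shap_values[1].shape if isinstance(shap_values, list) else shap_values.shape)
```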

Class-incremental Novel Class Discovery

1 code implementation • 18 Jul 2022 • Subhankar Roy, Mingxuan Liu, Zhun Zhong, Nicu Sebe, Elisa Ricci

We study the new task of class-incremental Novel Class Discovery (class-iNCD), which refers to the problem of discovering novel categories in an unlabelled data set by leveraging a pre-trained model that has been trained on a labelled data set containing disjoint yet related categories.

Incremental Learning • Knowledge Distillation +1

Balanced background and explanation data are needed in explaining deep learning models with SHAP: An empirical study on clinical decision making

1 code implementation • 8 Jun 2022 • Mingxuan Liu, Yilin Ning, Han Yuan, Marcus Eng Hock Ong, Nan Liu

This study sought to investigate the effects of data imbalance on SHAP explanations for deep learning models, and to propose a strategy to mitigate these effects.

Decision Making
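
The study above concerns how imbalance in the background and explanation data distorts SHAP attributions. As one hedged illustration (not the paper's protocol), the sketch below draws a class-balanced background set before calling a SHAP KernelExplainer; the subsampling scheme, sample sizes, and model are assumptions chosen for brevity.

```python
# Sketch: build a class-balanced background set for SHAP instead of sampling
# at random from an imbalanced dataset. Sizes and model are placeholders.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Imbalanced toy data (roughly 9:1).
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9, 0.1], random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def balanced_background(X, y, n_per_class=25, seed=0):
    """Draw the same number of rows from each class for the SHAP background set."""
    rng = np.random.RandomState(seed)
    rows = [rng.choice(np.where(y == c)[0], n_per_class, replace=False) for c in np.unique(y)]
    return X[np.concatenate(rows)]

background = balanced_background(X, y)
explainer = shap.KernelExplainer(lambda x: model.predict_proba(x)[:, 1], background)
shap_values = explainer.shap_values(X[:20], nsamples=100)  # explain a small evaluation set
print(np.abs(shap_values).mean(axis=0))  # mean |SHAP| per feature
```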

An empirical study of the effect of background data size on the stability of SHapley Additive exPlanations (SHAP) for deep learning models

no code implementations • 24 Apr 2022 • Han Yuan, Mingxuan Liu, Lican Kang, Chenkui Miao, Ying Wu

In our empirical study on the MIMIC-III dataset, we show that the two core explanation outputs, SHAP values and variable rankings, fluctuate when different background datasets are acquired from random sampling, indicating that users cannot unquestioningly trust the one-shot interpretation from SHAP.
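
To make the described fluctuation concrete, here is a small, self-contained sketch (not the paper's MIMIC-III experiment) that recomputes SHAP values under several randomly drawn background sets and compares the resulting variable-importance rankings with a Spearman rank correlation; all sizes, the model, and the number of repeats are assumptions.

```python
# Sketch: quantify how SHAP-based variable rankings fluctuate across
# randomly sampled background datasets. Toy data; sizes are illustrative.
import numpy as np
import shap
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
X_explain = X[:20]  # fixed set of instances to explain

importances = []
for seed in range(3):  # three independent background draws
    rng = np.random.RandomState(seed)
    background = X[rng.choice(len(X), 50, replace=False)]
    explainer = shap.KernelExplainer(lambda x: model.predict_proba(x)[:, 1], background)
    sv = explainer.shap_values(X_explain, nsamples=100)
    importances.append(np.abs(sv).mean(axis=0))  # global importance per feature

# Rank agreement between the first background draw and the others.
for imp in importances[1:]:
    rho, _ = spearmanr(importances[0], imp)
    print(f"Spearman rank correlation vs. first draw: {rho:.3f}")
```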
