Search Results for author: Yilin Ning

Found 18 papers, 10 papers with code

Fairness-Aware Interpretable Modeling (FAIM) for Trustworthy Machine Learning in Healthcare

no code implementations • 8 Mar 2024 • Mingxuan Liu, Yilin Ning, Yuhe Ke, Yuqing Shang, Bibhas Chakraborty, Marcus Eng Hock Ong, Roger Vaughan, Nan Liu

The escalating integration of machine learning in high-stakes fields such as healthcare raises substantial concerns about model fairness.

Fairness

Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist

2 code implementations • 2 Nov 2023 • Yilin Ning, Salinelat Teixayavong, Yuqing Shang, Julian Savulescu, Vaishaanth Nagaraj, Di Miao, Mayli Mertens, Daniel Shu Wei Ting, Jasmine Chiat Ling Ong, Mingxuan Liu, Jiuwen Cao, Michael Dunn, Roger Vaughan, Marcus Eng Hock Ong, Joseph Jao-Yiu Sung, Eric J Topol, Nan Liu

The widespread use of ChatGPT and other emerging technology powered by generative artificial intelligence (GenAI) has drawn much attention to potential ethical issues, especially in high-stakes applications such as healthcare, but ethical discussions are yet to translate into operationalisable solutions.

Ethics

Towards clinical AI fairness: A translational perspective

no code implementations • 26 Apr 2023 • Mingxuan Liu, Yilin Ning, Salinelat Teixayavong, Mayli Mertens, Jie Xu, Daniel Shu Wei Ting, Lionel Tim-Ee Cheng, Jasmine Chiat Ling Ong, Zhen Ling Teo, Ting Fang Tan, Ravi Chandran Narrendar, Fei Wang, Leo Anthony Celi, Marcus Eng Hock Ong, Nan Liu

In this paper, we discuss the misalignment between technical and clinical perspectives of AI fairness, highlight the barriers to AI fairness' translation to healthcare, advocate multidisciplinary collaboration to bridge the knowledge gap, and provide possible solutions to address the clinical concerns pertaining to AI fairness.

Fairness • Translation

A roadmap to fair and trustworthy prediction model validation in healthcare

no code implementations • 7 Apr 2023 • Yilin Ning, Victor Volovici, Marcus Eng Hock Ong, Benjamin Alan Goldstein, Nan Liu

A prediction model is most useful if it generalizes beyond the development data through external validation, but to what extent it should generalize remains unclear.
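To make the internal/external distinction concrete, here is a minimal sketch of external validation, using scikit-learn and synthetic placeholder data rather than any cohort from the paper: a model fitted on development data is scored on a separate external dataset, and the gap between the two AUCs is the generalization question the roadmap addresses.

```python
# Minimal external-validation sketch (synthetic placeholder data, not
# the paper's cohorts): fit on development data, then check whether
# discrimination (AUC) holds up on an external dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X_dev, y_dev = rng.normal(size=(500, 5)), rng.integers(0, 2, size=500)
X_ext, y_ext = rng.normal(size=(200, 5)), rng.integers(0, 2, size=200)

model = LogisticRegression().fit(X_dev, y_dev)
auc_dev = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
auc_ext = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
print(f"development AUC: {auc_dev:.3f}, external AUC: {auc_ext:.3f}")
```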

FedScore: A privacy-preserving framework for federated scoring system development

1 code implementation • 1 Mar 2023 • Siqi Li, Yilin Ning, Marcus Eng Hock Ong, Bibhas Chakraborty, Chuan Hong, Feng Xie, Han Yuan, Mingxuan Liu, Daniel M. Buckland, Yong Chen, Nan Liu

We also calculated the average AUC and SD for each local model; the FedScore model showed promising accuracy and stability, with an average AUC closest to that of the pooled model and an SD lower than that of most local models.

Federated Learning • Model Selection • +2
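The evaluation described above reduces to comparing mean AUC and SD across models; a small sketch with illustrative placeholder numbers (not results from the paper):

```python
# Each model is evaluated on every site's test set; the values below
# are hypothetical AUCs per site, for illustration only.
import numpy as np

aucs = {
    "local_site_1": [0.78, 0.70, 0.69],
    "local_site_2": [0.71, 0.80, 0.72],
    "fedscore":     [0.78, 0.79, 0.77],
    "pooled":       [0.80, 0.79, 0.78],
}

for name, values in aucs.items():
    mean, sd = np.mean(values), np.std(values, ddof=1)
    print(f"{name:>12}: mean AUC = {mean:.3f}, SD = {sd:.3f}")
```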

Shapley variable importance cloud for machine learning models

no code implementations • 16 Dec 2022 • Yilin Ning, Mingxuan Liu, Nan Liu

Current practice in interpretable machine learning often focuses on explaining the final model trained from data, e.g., by using the Shapley additive explanations (SHAP) method.

Interpretable Machine Learning • regression
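For context, the "final model" explanation the abstract refers to looks like the following minimal SHAP usage (synthetic data; the paper's contribution, a cloud of explanations over many near-optimal models, is not reproduced here):

```python
# Standard SHAP explanation of a single trained model, the practice
# the paper argues is incomplete. Synthetic data for illustration.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # per-feature attributions
```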

Balanced background and explanation data are needed in explaining deep learning models with SHAP: An empirical study on clinical decision making

1 code implementation • 8 Jun 2022 • Mingxuan Liu, Yilin Ning, Han Yuan, Marcus Eng Hock Ong, Nan Liu

This study sought to investigate the effects of data imbalance on SHAP explanations for deep learning models, and to propose a strategy to mitigate these effects.

Decision Making
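A sketch of the kind of mitigation studied, assuming scikit-learn and the shap library with synthetic data: draw a class-balanced background set before computing SHAP values, rather than sampling the (imbalanced) data at random.

```python
# SHAP values depend on the background (reference) data. With a rare
# positive class, a class-balanced background is one mitigation.
# Synthetic data; illustrative only.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (rng.random(1000) < 0.05).astype(int)  # ~5% positives

model = LogisticRegression().fit(X, y)

# Balanced background: equal numbers of positive and negative examples.
pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
n = min(len(pos), 50)
background = X[np.concatenate([pos[:n], rng.choice(neg, n, replace=False)])]

explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = explainer.shap_values(X[:10])  # explain a few samples
```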

AutoScore-Imbalance: An interpretable machine learning tool for development of clinical scores with rare events data

1 code implementation • 13 Jul 2021 • Han Yuan, Feng Xie, Marcus Eng Hock Ong, Yilin Ning, Marcel Lucas Chee, Seyed Ehsan Saffari, Hairil Rizal Abdullah, Benjamin Alan Goldstein, Bibhas Chakraborty, Nan Liu

All scoring models were evaluated on the basis of their area under the receiver operating characteristic curve (AUC) and balanced accuracy (i.e., the mean of sensitivity and specificity).

Decision Making • Interpretable Machine Learning • +1
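Balanced accuracy as defined above is just the mean of sensitivity and specificity; a small worked check with made-up labels (scikit-learn's balanced_accuracy_score computes the same quantity for binary labels):

```python
from sklearn.metrics import balanced_accuracy_score

# Made-up labels: 8 negatives, 2 positives (a rare-event setting).
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 1, 1, 1, 0]

sensitivity = 1 / 2  # TP / (TP + FN): 1 of 2 positives detected
specificity = 6 / 8  # TN / (TN + FP): 6 of 8 negatives correct
assert balanced_accuracy_score(y_true, y_pred) == (sensitivity + specificity) / 2
print((sensitivity + specificity) / 2)  # 0.625
```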

AutoScore-Survival: Developing interpretable machine learning-based time-to-event scores with right-censored survival data

1 code implementation • 13 Jun 2021 • Feng Xie, Yilin Ning, Han Yuan, Benjamin Alan Goldstein, Marcus Eng Hock Ong, Nan Liu, Bibhas Chakraborty

We illustrated our method in a real-life study of 90-day mortality of patients in intensive care units and compared its performance with survival models (i.e., Cox) and the random survival forest.

BIG-bench Machine Learning • Interpretable Machine Learning • +1
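As a baseline of the kind named above, a Cox proportional hazards model on right-censored data can be fitted in a few lines; this sketch uses the lifelines library and its bundled Rossi dataset, not the ICU data from the paper:

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

# Bundled example data: 'week' is follow-up time, 'arrest' the event flag.
df = load_rossi()
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
print(cph.concordance_index_)  # discrimination on right-censored data
```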
