no code implementations • NAACL (BioNLP) 2021 • Madhumita Sushil, Simon Šuster, Walter Daelemans
We explore whether state-of-the-art BERT models encode sufficient domain knowledge to correctly perform domain-specific inference.
no code implementations • NAACL (BioNLP) 2021 • Madhumita Sushil, Simon Šuster, Walter Daelemans
For evaluation of explanations, we create a synthetic sepsis-identification dataset, as well as apply our technique on additional clinical and sentiment analysis datasets.
1 code implementation • 5 Mar 2024 • Brenda Y. Miao, Irene Y. Chen, Christopher Y. K. Williams, Jaysón Davidson, Augusto Garcia-Agundez, Harry Sun, Travis Zack, Atul J. Butte, Madhumita Sushil
In response to gaps in standards and best practices for the development of clinical AI tools identified by US Executive Order 14110 and several emerging national networks for clinical AI evaluation, we begin to formalize some of these guidelines by building on the "Minimum information about clinical artificial intelligence modeling" (MI-CLAIM) checklist.
no code implementations • 25 Jan 2024 • Madhumita Sushil, Travis Zack, Divneet Mandair, Zhiwei Zheng, Ahmed Wali, Yan-Ning Yu, Yuwei Quan, Atul J. Butte
In this study, we explored whether recent LLMs can reduce the need for large-scale data annotations.
1 code implementation • 7 Aug 2023 • Madhumita Sushil, Vanessa E. Kennedy, Divneet Mandair, Brenda Y. Miao, Travis Zack, Atul J. Butte
Both medical care and observational studies in oncology require a thorough understanding of a patient's disease progression and treatment history, often elaborately documented in clinical notes.
no code implementations • 16 Jun 2023 • Shenghuan Sun, Travis Zack, Christopher Y. K. Williams, Atul J. Butte, Madhumita Sushil
Our findings indicate that significant disparities exist among breast cancer patients receiving different types of therapies based on social determinants of health.
1 code implementation • 16 Jan 2023 • Madhumita Sushil, Atul J. Butte, Ewoud Schuit, Maarten van Smeden, Artuur M. Leeuwenberg
Confirmation is needed with better text mining models, ideally on a larger manually labeled dataset.
no code implementations • 2 Dec 2022 • Shenghuan Sun, Travis Zack, Madhumita Sushil, Atul J. Butte
We used word frequency analysis and Latent Dirichlet Allocation (LDA) topic modeling analysis to characterize this corpus and identify potential topics of discussion.
no code implementations • 12 Oct 2022 • Madhumita Sushil, Dana Ludwig, Atul J. Butte, Vivek A. Rudrapatna
We sought to evaluate the impact of using a domain-specific vocabulary and a large clinical training corpus on the performance of these language models in clinical language inference.
2 code implementations • 14 May 2020 • Madhumita Sushil, Simon Šuster, Walter Daelemans
For evaluation of explanations, we create a synthetic sepsis-identification dataset, as well as apply our technique on additional clinical and sentiment analysis datasets.
no code implementations • 16 Oct 2019 • Simon Šuster, Madhumita Sushil, Walter Daelemans
Memory networks have been a popular choice among neural architectures for machine reading comprehension and question answering.
1 code implementation • WS 2018 • Simon Šuster, Madhumita Sushil, Walter Daelemans
Recently, segment convolutional neural networks have been proposed for end-to-end relation extraction in the clinical domain, achieving results comparable to or better than approaches relying on heavy manual feature engineering.
1 code implementation • WS 2018 • Madhumita Sushil, Simon Šuster, Walter Daelemans
We find that the output rule-sets can explain the predictions of a neural network trained for 4-class text classification on the 20 newsgroups dataset with a macro-averaged F-score of 0.80.
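The macro-averaged F-score reported in this entry is the unweighted mean of per-class F1 scores, so each of the 4 classes counts equally regardless of its frequency. A minimal illustration with made-up labels (not the paper's data):

```python
# Hedged sketch: macro-averaged F-score over 4 classes.
# y_true / y_pred are illustrative labels, not the study's predictions.
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 3, 0, 1, 2, 3]
y_pred = [0, 1, 2, 0, 0, 1, 3, 3]

# average="macro": compute F1 per class, then take the unweighted mean
macro_f1 = f1_score(y_true, y_pred, average="macro")
print(round(macro_f1, 2))
```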
no code implementations • 3 Jul 2018 • Madhumita Sushil, Simon Šuster, Kim Luyckx, Walter Daelemans
We compare the model performance of the feature set constructed from a bag of words to that obtained from medical concepts.
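The two feature sets being compared can be sketched as follows: a bag of words treats every token as a feature, while a concept-based representation keeps only spans matching a medical vocabulary. The concept list below is a toy stand-in; the paper derived concepts with clinical NLP tooling, not a hand-written lexicon.

```python
# Hedged sketch: bag-of-words features vs. medical-concept features.
from sklearn.feature_extraction.text import CountVectorizer

note = "patient admitted with sepsis and acute kidney injury; started antibiotics"

# Bag of words: every token becomes a feature
bow = CountVectorizer().fit([note])
print(sorted(bow.vocabulary_))

# Concept features: only spans found in a (toy) medical vocabulary
concepts = ["sepsis", "acute kidney injury", "antibiotics"]
concept_features = {c: int(c in note) for c in concepts}
print(concept_features)
```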
no code implementations • 14 Nov 2017 • Madhumita Sushil, Simon Šuster, Kim Luyckx, Walter Daelemans
To understand and interpret the representations, we explore the best encoded features within the patient representations obtained from the autoencoder model.
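The idea of probing which input features are best encoded in an autoencoder's patient representations can be sketched as below. The architecture, training loop, and data here are purely illustrative (a single-hidden-layer autoencoder on random binary vectors), not the model from the paper.

```python
# Hedged sketch: train a tiny autoencoder on binary "patient" vectors,
# then inspect which input feature each latent unit weights most strongly.
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(100, 20)).astype(float)  # 100 patients, 20 features

d_in, d_hid = 20, 5
W_enc = rng.normal(0, 0.1, (d_in, d_hid))
W_dec = rng.normal(0, 0.1, (d_hid, d_in))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(200):
    H = sigmoid(X @ W_enc)       # encode to the patient representation
    X_hat = sigmoid(H @ W_dec)   # decode / reconstruct the input
    delta = (X_hat - X) * X_hat * (1 - X_hat)  # MSE gradient through sigmoid
    dW_dec = H.T @ delta
    dW_enc = X.T @ ((delta @ W_dec.T) * H * (1 - H))
    W_dec -= lr * dW_dec / len(X)
    W_enc -= lr * dW_enc / len(X)

# "Best encoded" proxy: the input feature with the largest absolute
# encoder weight for each latent dimension
top_features = np.abs(W_enc).argmax(axis=0)
print(top_features)
```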