1 code implementation • 21 May 2024 • Mohammad Azizmalayeri, Ameen Abu-Hanna, Giovanni Cinà
Detecting out-of-distribution (OOD) instances is crucial for the reliable deployment of machine learning models in real-world scenarios.
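As a rough illustration of the kind of OOD check this line of work concerns, below is a minimal sketch of a maximum-softmax-probability score, a common baseline rather than the paper's own method; the threshold value is a placeholder assumption.

```python
import numpy as np

def max_softmax_score(logits: np.ndarray) -> np.ndarray:
    """Return the maximum softmax probability per sample; low values suggest OOD."""
    z = logits - logits.max(axis=1, keepdims=True)  # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def flag_ood(logits: np.ndarray, threshold: float = 0.7) -> np.ndarray:
    """Flag samples whose confidence falls below a (tunable) threshold."""
    return max_softmax_score(logits) < threshold
```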
1 code implementation • 15 Nov 2023 • Izak Yasrebi-de Kom, Joanna Klopotowska, Dave Dongelmans, Nicolette De Keizer, Kitty Jager, Ameen Abu-Hanna, Giovanni Cinà
Here, we pioneer a causal modeling approach using observational data to estimate a lower bound of the probability of causation (PC$_{low}$).
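For orientation, a minimal sketch of the textbook lower bound on the probability of causation under exogeneity (the excess risk ratio) is shown below; the paper's PC$_{low}$ estimator works from observational data and will differ in detail, so this is only the standard special case, not the proposed method.

```python
def pc_lower_bound(p_y_given_exposed: float, p_y_given_unexposed: float) -> float:
    """Textbook lower bound on the probability of causation under exogeneity:
    PC_low = max(0, (P(y|x) - P(y|x')) / P(y|x))."""
    return max(0.0, (p_y_given_exposed - p_y_given_unexposed) / p_y_given_exposed)

# Example: a 30% outcome rate among exposed vs. 10% among unexposed
print(pc_lower_bound(0.30, 0.10))  # ~0.667
```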
no code implementations • 3 Jul 2023 • Giovanni Cinà, Daniel Fernandez-Llaneza, Ludovico Deponte, Nishant Mishra, Tabea E. Röber, Sandro Pezzelle, Iacer Calixto, Rob Goedhart, Ş. İlker Birbil
Feature attribution methods have become a staple approach to disentangling the complex behavior of black-box models.
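To make the class of methods concrete, here is a generic example using permutation importance from scikit-learn; it illustrates what a feature attribution looks like in practice and is not the approach studied or proposed in the paper. The dataset and model are illustrative placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy data and model standing in for a "black box"
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Attribute importance to each feature by measuring the drop in score when it is permuted
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for idx in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {idx}: importance {result.importances_mean[idx]:.3f}")
```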
no code implementations • 5 Jan 2023 • Giovanni Cinà, Tabea E. Röber, Rob Goedhart, Ş. İlker Birbil
Despite valid concerns, we argue that existing criticism of the viability of post-hoc local explainability methods throws the baby out with the bathwater by generalizing a problem that is specific to image data.
Explainable Artificial Intelligence (XAI) • Feature Importance • +1
no code implementations • 30 Jun 2022 • Giovanni Cinà, Tabea Röber, Rob Goedhart, Ilker Birbil
The recent spike in certified Artificial Intelligence (AI) tools for healthcare has renewed the debate around the adoption of this technology.
1 code implementation • 30 Sep 2021 • Karina Zadorozhny, Patrick Thoral, Paul Elbers, Giovanni Cinà
Detection of Out-of-Distribution (OOD) samples in real time is a crucial safety check for deployment of machine learning models in the medical field.
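One family of detectors commonly evaluated in this setting is density-based: fit a density model on in-distribution features and flag inputs with unusually low likelihood. The sketch below, using a Gaussian mixture and a quantile-based threshold, is only an illustrative assumption of that family, not necessarily the detector the paper recommends.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fit a density model on in-distribution (training) features
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 5))                         # placeholder in-distribution data
density = GaussianMixture(n_components=3, random_state=0).fit(X_train)

# Score incoming samples; unusually low log-likelihood suggests OOD
threshold = np.quantile(density.score_samples(X_train), 0.01)  # cut off lowest 1% of train mass
X_new = rng.normal(loc=4.0, size=(10, 5))                     # shifted data, likely OOD
is_ood = density.score_samples(X_new) < threshold
print(is_ood)
```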
1 code implementation • 14 Sep 2021 • Adam Izdebski, Patrick J. Thoral, Robbert C. A. Lalisang, Dean M. McHugh, Diederik Gommers, Olaf L. Cremer, Rob J. Bosman, Sander Rigter, Evert-Jan Wils, Tim Frenzel, Dave A. Dongelmans, Remko de Jong, Marco A. A. Peters, Marlijn J. A. Kamps, Dharmanand Ramnarain, Ralph Nowitzky, Fleur G. C. A. Nooteboom, Wouter de Ruijter, Louise C. Urlings-Strop, Ellen G. M. Smit, D. Jannet Mehagnoul-Schipper, Tom Dormans, Cornelis P. C. de Jager, Stefaan H. A. Hendriks, Sefanja Achterberg, Evelien Oostdijk, Auke C. Reidinga, Barbara Festen-Spanjer, Gert B. Brunnekreef, Alexander D. Cornet, Walter van den Tempel, Age D. Boelens, Peter Koetsier, Judith Lens, Harald J. Faber, A. Karakus, Robert Entjes, Paul de Jong, Thijs C. D. Rettig, Sesmu Arbous, Lucas M. Fleuren, Tariq A. Dam, Michele Tonutti, Daan P. de Bruin, Paul W. G. Elbers, Giovanni Cinà
Despite recent progress in the field of causal inference, to date there is no agreed-upon methodology for gleaning treatment effect estimates from observational data.
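To fix ideas, here is a minimal T-learner sketch, one common (and by no means uncontested) way to estimate an average treatment effect from observational data; the variable names and the choice of gradient boosting are illustrative assumptions, not the setup used in the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def t_learner_ate(X: np.ndarray, treatment: np.ndarray, outcome: np.ndarray) -> float:
    """Fit separate outcome models for treated and control patients, then average
    the difference of their predictions over the whole cohort (a naive ATE)."""
    model_treated = GradientBoostingRegressor().fit(X[treatment == 1], outcome[treatment == 1])
    model_control = GradientBoostingRegressor().fit(X[treatment == 0], outcome[treatment == 0])
    return float(np.mean(model_treated.predict(X) - model_control.predict(X)))
```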
1 code implementation • 9 Dec 2020 • Dennis Ulmer, Giovanni Cinà
A crucial requirement for reliable deployment of deep learning models in safety-critical applications is the ability to identify out-of-distribution (OOD) data points: samples that differ from the training data and on which a model might underperform.
1 code implementation • 6 Nov 2020 • Dennis Ulmer, Lotta Meijerink, Giovanni Cinà
When deploying machine learning models in high-stakes real-world environments such as health care, it is crucial to accurately assess the uncertainty concerning a model's prediction on abnormal inputs.
2 code implementations • 13 Apr 2020 • Lotta Meijerink, Giovanni Cinà, Michele Tonutti
In a data-scarce field such as healthcare, where models often deliver predictions on patients with rare conditions, the ability to measure the uncertainty of a model's prediction could lead to more effective decision-support tools and increased user trust.
1 code implementation • 20 Jun 2019 • David Ruhe, Giovanni Cinà, Michele Tonutti, Daan de Bruin, Paul Elbers
In this work we show how Bayesian modelling, and the predictive uncertainty it provides, can be used to mitigate the risk of misguided predictions and to detect out-of-domain examples in a medical setting.
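A minimal Monte Carlo dropout sketch of the general idea is given below: draw several stochastic forward passes and use the spread of the approximate predictive distribution to flag inputs the model is unsure about. The architecture, dropout rate, and the use of MC dropout itself are placeholder assumptions, not the paper's exact Bayesian model.

```python
import torch
import torch.nn as nn

# Toy classifier with dropout, standing in for a clinical prediction model
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 2))

def predictive_uncertainty(x: torch.Tensor, n_samples: int = 50) -> torch.Tensor:
    """Average per-class standard deviation across stochastic forward passes;
    a high value hints at an out-of-domain input."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(n_samples)])
    return probs.std(dim=0).mean(dim=-1)

x = torch.randn(4, 20)
print(predictive_uncertainty(x))
```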