1 code implementation • 19 Jun 2024 • Matthew Wicker, Philip Sosnin, Igor Shilov, Adrianna Janik, Mark N. Müller, Yves-Alexandre de Montjoye, Adrian Weller, Calvin Tsay
Differential privacy upper-bounds the information leakage of machine learning models, yet providing meaningful privacy guarantees has proven to be challenging in practice.
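The bound the abstract refers to is typically achieved by adding calibrated noise to a query's output. A minimal sketch of one standard mechanism, the Laplace mechanism, is below; the function names and parameters are illustrative, not taken from the paper:

```python
import math
import random

def laplace_mechanism(true_value: float, sensitivity: float,
                      epsilon: float, rng: random.Random) -> float:
    """Release true_value with noise calibrated to give epsilon-DP.

    Noise is drawn from a Laplace distribution with scale
    sensitivity / epsilon, via inverse-CDF sampling.
    """
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on (-0.5, 0.5)
    return true_value - scale * math.copysign(math.log(1 - 2 * abs(u)), u)
```

Smaller `epsilon` values give stronger privacy but noisier outputs, which is one reason meaningful guarantees are hard to obtain in practice, as the abstract notes.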
no code implementations • 5 Dec 2022 • Adrianna Janik, Luca Costabello
We study the problem of explaining link predictions in Knowledge Graph Embedding (KGE) models.
no code implementations • 17 Nov 2022 • Adrianna Janik, Maria Torrente, Luca Costabello, Virginia Calvo, Brian Walsh, Carlos Camps, Sameh K. Mohamed, Ana L. Ortega, Vít Nováček, Bartomeu Massutí, Pasquale Minervini, M. Rosario Garcia Campelo, Edel del Barco, Joaquim Bosch-Barrera, Ernestina Menasalvas, Mohan Timilsina, Mariano Provencio
Conclusions: Our results show that machine learning models trained on tabular and graph data can enable objective, personalised and reproducible prediction of relapse and, therefore, disease outcome in patients with early-stage NSCLC.
no code implementations • 9 Feb 2022 • Adrianna Janik, Kris Sankaran
We have applied our method to a deep learning model for semantic segmentation, U-Net, in a remote sensing application of building detection - one of the core use cases of remote sensing in humanitarian applications.
1 code implementation • 9 Feb 2022 • Adrianna Janik, Kris Sankaran
Among current formulations, concepts are defined as directions in a learned representation space.
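A minimal sketch of the direction-based formulation the abstract mentions: a concept direction can be approximated as the (normalised) difference between mean activations of examples that contain the concept and examples that do not. The function names and the mean-difference estimator are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def concept_direction(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Unit vector pointing from non-concept to concept examples
    in a learned representation space."""
    v = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def concept_score(activation: np.ndarray, direction: np.ndarray) -> float:
    """Projection of one sample's activation onto the concept direction;
    larger values indicate stronger presence of the concept."""
    return float(activation @ direction)
```

More elaborate formulations fit a linear classifier between the two activation sets and use its weight vector as the direction; the mean-difference version above is the simplest instance of the same idea.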
no code implementations • 15 Mar 2021 • Adrianna Janik, Jonathan Dodd, Georgiana Ifrim, Kris Sankaran, Kathleen Curran
In previous studies, the base method is applied to the classification of cardiac disease and provides clinically meaningful explanations for the predictions of a black-box deep learning classifier.