no code implementations • 15 Mar 2024 • Tobias Leemann, Martin Pawelczyk, Bardh Prenkaj, Gjergji Kasneci
We subsequently investigate how different components in the objective functions, e.g., the machine learning model or cost function used to measure distance, determine whether the outcome can be considered an adversarial example.
1 code implementation • 4 Aug 2023 • Bardh Prenkaj, Mario Villaizan-Vallelado, Tobias Leemann, Gjergji Kasneci
The GAEs minimize the reconstruction error between the original graph and its learned representation during training.
1 code implementation • NeurIPS 2023 • Tobias Leemann, Martin Pawelczyk, Gjergji Kasneci
In particular, we derive a parametric family of $f$-MIP guarantees that we refer to as $\mu$-Gaussian Membership Inference Privacy ($\mu$-GMIP) by theoretically analyzing likelihood ratio-based membership inference attacks on stochastic gradient descent (SGD).
1 code implementation • 25 Oct 2022 • Tobias Leemann, Martin Pawelczyk, Christian Thomas Eberle, Gjergji Kasneci
In this work, we show that the decision not to share data can be considered as information in itself that should be protected to respect users' privacy.
1 code implementation • 20 Oct 2022 • Yao Rong, Tobias Leemann, Thai-trang Nguyen, Lisa Fiedler, Peizhu Qian, Vaibhav Unhelkar, Tina Seidel, Gjergji Kasneci, Enkelejda Kasneci
A better understanding of the needs of XAI users, as well as human-centered evaluations of explainable models, are both a necessity and a challenge.
1 code implementation • 12 Oct 2022 • Vadim Borisov, Kathrin Seßler, Tobias Leemann, Martin Pawelczyk, Gjergji Kasneci
Tabular data is among the oldest and most ubiquitous forms of data.
no code implementations • 30 Aug 2022 • Martin Pawelczyk, Tobias Leemann, Asia Biega, Gjergji Kasneci
Thus, our work raises fundamental questions about the compatibility of "the right to an actionable explanation" in the context of the "right to be forgotten", while also providing constructive insights on the determining factors of recourse robustness.
1 code implementation • 28 Jun 2022 • Tobias Leemann, Michael Kirchhof, Yao Rong, Enkelejda Kasneci, Gjergji Kasneci
Interest in understanding and factorizing learned embedding spaces through conceptual explanations is steadily growing.
1 code implementation • 1 Feb 2022 • Yao Rong, Tobias Leemann, Vadim Borisov, Gjergji Kasneci, Enkelejda Kasneci
With a variety of local feature attribution methods being proposed in recent years, follow-up work suggested several evaluation strategies.
1 code implementation • 6 Oct 2021 • Tobias Leemann, Moritz Sackmann, Jörn Thielecke, Ulrich Hofmann
Many supervised machine learning tasks, such as future state prediction in dynamical systems, require precise modeling of a forecast's uncertainty.
2 code implementations • 5 Oct 2021 • Vadim Borisov, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, Gjergji Kasneci
Moreover, we discuss deep learning approaches for generating tabular data, and we also provide an overview of strategies for explaining deep models on tabular data.