no code implementations • 3 Jun 2024 • Franz Motzkus, Christian Hellert, Ute Schmid
The counterfactuals provide increased granularity through minimal feature changes.
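As a rough illustration of counterfactuals built from minimal feature changes (a toy sketch, not the paper's method; the greedy search, step size, and linear stand-in model are all assumptions):

```python
# Hedged sketch: greedily change as few input features as possible
# until a classifier's decision flips, yielding a minimal-edit counterfactual.
import numpy as np

def greedy_counterfactual(x, score, step=0.5, max_changes=10):
    """score(x) > 0 means class 1, otherwise class 0. Each round perturbs the
    single feature whose change moves the score most toward the boundary."""
    x_cf = x.astype(float)
    target_sign = -np.sign(score(x_cf))          # we want the score to cross zero
    changed = []
    for _ in range(max_changes):
        if np.sign(score(x_cf)) == target_sign:  # decision flipped -> done
            return x_cf, changed
        candidates = []
        for i in range(x_cf.size):
            for d in (+step, -step):
                cand = x_cf.copy()
                cand[i] += d
                candidates.append((score(cand) * target_sign, i, cand))
        _, best_i, best_x = max(candidates, key=lambda t: t[0])
        x_cf, changed = best_x, changed + [best_i]
    return x_cf, changed

# toy usage: a linear score stands in for a trained model's logit
w = np.array([1.0, -2.0, 0.5])
score = lambda v: float(v @ w - 0.3)
x = np.array([0.8, 0.1, 0.2])
cf, idx = greedy_counterfactual(x, score)
print(score(x) > 0, score(cf) > 0, idx)          # prediction before/after, edited features
```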
no code implementations • 27 May 2024 • Franz Motzkus, Georgii Mikriukov, Christian Hellert, Ute Schmid
While global concept encodings generally enable a user to test a model for a specific concept, linking them to the local processing of single network inputs reveals their strengths and limitations.
Explainable Artificial Intelligence (XAI)
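A minimal sketch of the underlying idea of linking a global concept encoding to the local processing of a single input (CAV-style direction via difference of means; the layer dimensions and data are invented for illustration and are not the paper's approach):

```python
# Hedged illustration: a global concept encoding as a direction in a layer's
# activation space, linked to one input by projecting its local activation onto it.
import numpy as np

rng = np.random.default_rng(0)

# stand-in activations of some hidden layer (e.g. 64-dimensional)
concept_acts = rng.normal(loc=1.0, size=(200, 64))   # inputs showing the concept
random_acts  = rng.normal(loc=0.0, size=(200, 64))   # random / negative inputs

# global concept encoding: a unit vector pointing from "random" toward "concept"
cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
cav /= np.linalg.norm(cav)

def local_concept_score(activation, concept_vector):
    """How strongly a single input's activation expresses the global concept."""
    return float(activation @ concept_vector)

single_input_act = rng.normal(loc=0.8, size=64)      # activation of one test input
print(local_concept_score(single_input_act, cav))    # higher -> concept more present
```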
no code implementations • 25 Mar 2024 • Georgii Mikriukov, Gesina Schwalbe, Franz Motzkus, Korinna Bade
Adversarial attacks (AAs) pose a significant threat to the reliability and robustness of deep neural networks.
no code implementations • 10 May 2022 • Julian Wörmann, Daniel Bogdoll, Christian Brunner, Etienne Bührle, Han Chen, Evaristus Fuh Chuo, Kostadin Cvejoski, Ludger van Elst, Philip Gottschall, Stefan Griesche, Christian Hellert, Christian Hesels, Sebastian Houben, Tim Joseph, Niklas Keil, Johann Kelsch, Mert Keser, Hendrik Königshof, Erwin Kraft, Leonie Kreuser, Kevin Krone, Tobias Latka, Denny Mattern, Stefan Matthes, Franz Motzkus, Mohsin Munir, Moritz Nekolla, Adrian Paschke, Stefan Pilar von Pilchau, Maximilian Alexander Pintz, Tianming Qiu, Faraz Qureishi, Syed Tahseen Raza Rizvi, Jörg Reichardt, Laura von Rueden, Alexander Sagel, Diogo Sasdelli, Tobias Scholl, Gerhard Schunk, Gesina Schwalbe, Hao Shen, Youssef Shoeb, Hendrik Stapelbroek, Vera Stehr, Gurucharan Srinivas, Anh Tuan Tran, Abhishek Vivekanandan, Ya Wang, Florian Wasserrab, Tino Werner, Christian Wirth, Stefan Zwicklbauer
The availability of representative datasets is an essential prerequisite for many successful artificial intelligence and machine learning models.
1 code implementation • NeurIPS 2023 • Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M. -C. Höhne
The evaluation of explanation methods is a research topic that has not yet been explored deeply. However, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness.
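One common way to compare explanation methods is a pixel-flipping-style faithfulness check: remove features in the order an attribution ranks them and see how quickly the prediction degrades. The sketch below is an illustrative toy with an assumed linear model, not the authors' evaluation framework:

```python
# Hedged sketch: a faithful attribution should make the model's score collapse
# faster when its top-ranked features are removed first.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=20)
model = lambda x: float(x @ w)           # toy "network": a linear score
x = rng.normal(size=20)

# two candidate explanations for model(x)
attr_good = x * w                        # exact per-feature contribution
attr_rand = rng.normal(size=20)          # uninformative baseline

def faithfulness_curve(x, attribution, model, baseline=0.0):
    """Zero out features from most- to least-attributed and record the score."""
    order = np.argsort(-np.abs(attribution))
    x_perturbed, scores = x.copy(), [model(x)]
    for i in order:
        x_perturbed[i] = baseline
        scores.append(model(x_perturbed))
    return np.array(scores)

# smaller area under |score| -> faster degradation -> more faithful attribution
for name, attr in [("informed", attr_good), ("random", attr_rand)]:
    curve = faithfulness_curve(x, attr, model)
    print(name, np.trapz(np.abs(curve)))
```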
no code implementations • 14 Feb 2022 • Franz Motzkus, Leander Weber, Sebastian Lapuschkin
While rule-based attribution methods have proven useful for providing local explanations of deep neural networks, explaining modern and more varied network architectures poses new challenges for generating trustworthy explanations, since the established rule sets may not be sufficient or applicable to novel network structures.
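To make "rule-based attribution" concrete, here is a hedged sketch in the spirit of LRP's epsilon rule applied to a tiny dense ReLU network in plain numpy; the network, weights, and single rule choice are assumptions for illustration, and real architectures require per-layer rule choices, which is exactly where the new challenges arise:

```python
# Hedged sketch: redistribute the output relevance backwards through each layer
# with an epsilon-stabilized rule, yielding per-feature relevance for one input.
import numpy as np

rng = np.random.default_rng(2)

# toy 2-layer network: x -> ReLU(x W1 + b1) -> a1 W2 + b2
W1, b1 = rng.normal(size=(8, 6)), rng.normal(size=6)
W2, b2 = rng.normal(size=(6, 3)), rng.normal(size=3)

def forward(x):
    a1 = np.maximum(0.0, x @ W1 + b1)
    return a1, a1 @ W2 + b2

def lrp_epsilon(a, W, R, eps=1e-6):
    """Redistribute relevance R from a layer's output to its input activations a."""
    z = a @ W                                        # pre-activations (bias omitted in sketch)
    s = R / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilizer keeps denominator nonzero
    return a * (W @ s)                               # relevance flowing back to each input

x = rng.normal(size=8)
a1, logits = forward(x)

# start from the logit of the predicted class, then apply the rule layer by layer
R_out = np.zeros(3)
R_out[np.argmax(logits)] = logits.max()
R_hidden = lrp_epsilon(a1, W2, R_out)
R_input = lrp_epsilon(x, W1, R_hidden)
print(R_input)                                       # per-feature relevance for this input
```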