no code implementations • 25 Mar 2024 • Georgii Mikriukov, Gesina Schwalbe, Franz Motzkus, Korinna Bade
Adversarial attacks (AAs) pose a significant threat to the reliability and robustness of deep neural networks.
no code implementations • 24 Nov 2023 • Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade
The latter, though, is of particular interest for debugging, such as finding and understanding outliers, learned notions of sub-concepts, and concept confusion.
Explainable Artificial Intelligence (XAI)
no code implementations • 30 Apr 2023 • Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade
These provide insights both into the flow and likeness of semantic information within CNN layers, and into the degree of semantic similarity between different network architectures.
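As a rough illustration of how concept representations in two layers can be compared, the following sketch fits a concept activation vector (CAV) per layer from labeled activations and measures their cosine similarity. This is a generic, minimal stand-in (class-mean difference instead of a trained classifier, synthetic activations), not the authors' actual method:

```python
import numpy as np

def concept_activation_vector(acts, labels):
    """Cheap linear probe: the normalized difference of class means
    stands in for an SVM normal; its direction is the CAV."""
    pos = acts[labels == 1].mean(axis=0)
    neg = acts[labels == 0].mean(axis=0)
    cav = pos - neg
    return cav / np.linalg.norm(cav)

def layer_similarity(cav_a, cav_b):
    """Cosine similarity of two CAVs (assumes activations of both
    layers were projected to a shared dimensionality)."""
    return float(cav_a @ cav_b)

rng = np.random.default_rng(0)
labels = (rng.random(100) > 0.5).astype(int)
acts_l1 = rng.normal(size=(100, 64))
acts_l1[labels == 1] += 1.0                        # inject a synthetic concept direction
acts_l2 = acts_l1 + 0.1 * rng.normal(size=(100, 64))  # a slightly perturbed later layer

cav1 = concept_activation_vector(acts_l1, labels)
cav2 = concept_activation_vector(acts_l2, labels)
print(layer_similarity(cav1, cav2))  # close to 1.0: the concept persists across layers
```

A similarity near 1 indicates the concept is encoded along nearly the same direction in both layers; comparing such scores across layers or architectures gives the kind of flow/likeness picture the abstract describes.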
Explainable Artificial Intelligence (XAI) +1
no code implementations • 28 Apr 2023 • Georgii Mikriukov, Gesina Schwalbe, Christian Hellert, Korinna Bade
The guiding use-case is a post-hoc explainability framework for object detection (OD) CNNs, towards which existing concept analysis (CA) methods are successfully adapted.
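A common pattern in post-hoc concept analysis of convolutional backbones is to pool a spatial feature map into a vector and fit a linear probe for concept presence. The sketch below shows that pattern on toy data; the pooling choice, probe, and data are illustrative assumptions, not the framework described in the paper:

```python
import numpy as np

def pool_feature_map(fmap):
    """Global average pooling: (C, H, W) feature map -> (C,) vector,
    a typical way concept analysis flattens detector activations."""
    return fmap.mean(axis=(1, 2))

def concept_probe(vectors, labels):
    """Least-squares linear probe for concept presence; its weight
    vector plays the role of a concept activation vector."""
    X = np.hstack([vectors, np.ones((len(vectors), 1))])  # append bias column
    w, *_ = np.linalg.lstsq(X, labels, rcond=None)
    return w

rng = np.random.default_rng(1)
fmaps = rng.normal(size=(200, 32, 7, 7))         # toy backbone activations
labels = (rng.random(200) > 0.5).astype(float)   # "concept present" flags
fmaps[labels == 1, 0] += 2.0                     # concept planted in channel 0

vecs = np.stack([pool_feature_map(f) for f in fmaps])
w = concept_probe(vecs, labels)
print(int(np.argmax(np.abs(w[:-1]))))  # channel most aligned with the concept: 0
```

The probe recovering channel 0 shows the basic mechanics: once activations are pooled, concept localization reduces to inspecting linear-probe weights, which is what makes such methods adaptable to object-detection backbones.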
Dimensionality Reduction • Explainable Artificial Intelligence +4