1 code implementation • 11 Jan 2024 • Dilyara Bareeva, Marina M.-C. Höhne, Alexander Warnecke, Lukas Pirch, Klaus-Robert Müller, Konrad Rieck, Kirill Bykov
Deep Neural Networks (DNNs) are capable of learning complex and versatile representations; however, the semantic nature of the learned concepts remains unknown.
no code implementations • 13 Dec 2023 • Shanghua Liu, Anna Hedström, Deepak Hanike Basavegowda, Cornelia Weltzien, Marina M.-C. Höhne
Grasslands are known for their high biodiversity and ability to provide multiple ecosystem services.
1 code implementation • NeurIPS 2023 • Kirill Bykov, Laura Kopf, Shinichi Nakajima, Marius Kloft, Marina M.-C. Höhne
Deep Neural Networks (DNNs) demonstrate remarkable capabilities in learning complex hierarchical data representations, but the nature of these representations remains largely unknown.
1 code implementation • 1 Aug 2023 • Pia Hanfeld, Khaled Wahba, Marina M.-C. Höhne, Michael Bussmann, Wolfgang Hönig
We introduce flying adversarial patches, where multiple images are mounted on at least one other flying robot and therefore can be placed anywhere in the field of view of a victim multirotor.
1 code implementation • 22 May 2023 • Pia Hanfeld, Marina M.-C. Höhne, Michael Bussmann, Wolfgang Hönig
We introduce flying adversarial patches, where an image is mounted on another flying robot and therefore can be placed anywhere in the field of view of a victim multirotor.
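The core mechanics can be sketched in a few lines of PyTorch. This is a generic adversarial-patch optimization loop, not the authors' exact method: the victim classifier, patch size, fixed paste position, and attacker-chosen target class are all illustrative assumptions.

    import torch

    def optimize_patch(victim, images, target_cls, size=32, steps=200, lr=0.05):
        """Optimize patch pixels so the victim predicts the attacker's target."""
        patch = torch.rand(1, 3, size, size, requires_grad=True)
        opt = torch.optim.Adam([patch], lr=lr)
        labels = torch.full((images.shape[0],), target_cls, dtype=torch.long)
        for _ in range(steps):
            x = images.clone()
            x[:, :, :size, :size] = patch.clamp(0, 1)  # paste at a fixed corner
            loss = torch.nn.functional.cross_entropy(victim(x), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return patch.detach().clamp(0, 1)

In the papers the placement itself is also part of the attack surface; the fixed corner above merely keeps the sketch short.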
no code implementations • 9 Mar 2023 • Kirill Bykov, Klaus-Robert Müller, Marina M.-C. Höhne
The utilization of pre-trained networks, especially those trained on ImageNet, has become a common practice in Computer Vision.
1 code implementation • 1 Mar 2023 • Philine Bommer, Marlene Kretschmer, Anna Hedström, Dilyara Bareeva, Marina M.-C. Höhne
We find architecture-dependent performance differences regarding the robustness, complexity and localization skills of different XAI methods, highlighting the necessity of research-task-specific evaluation.
1 code implementation • 14 Feb 2023 • Anna Hedström, Philine Bommer, Kristoffer K. Wickstrøm, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
We address this problem through a meta-evaluation of different quality estimators in XAI, which we define as "the process of evaluating the evaluation method".
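The idea admits a minimal sketch (an illustration, not the paper's exact protocol): a sound quality estimator should stay stable under tiny, meaning-preserving input perturbations, yet react clearly when the explanation is replaced by pure noise. The estimator signature below is a hypothetical assumption.

    import torch

    def meta_evaluate(estimator, model, x, y, explanation, trials=10, sigma=0.01):
        """Resilience: scores barely move under tiny input noise.
        Reactivity: scores change clearly for an uninformative explanation."""
        base = estimator(model, x, y, explanation)
        drift = []
        for _ in range(trials):
            x_noisy = x + sigma * torch.randn_like(x)
            drift.append(abs(estimator(model, x_noisy, y, explanation) - base))
        noise_expl = torch.randn_like(explanation)
        reactivity = abs(estimator(model, x, y, noise_expl) - base)
        return sum(drift) / trials, reactivity  # want: small drift, large reactivity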
1 code implementation • 9 Jun 2022 • Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Müller, Marina M.-C. Höhne
Deep Neural Networks (DNNs) excel at learning complex abstractions within their internal representations.
1 code implementation • NeurIPS 2023 • Anna Hedström, Leander Weber, Dilyara Bareeva, Daniel Krakowczyk, Franz Motzkus, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
The evaluation of explanation methods is a research topic that has not yet been explored deeply. However, since explainability is supposed to strengthen trust in artificial intelligence, it is necessary to systematically review and compare explanation methods in order to confirm their correctness.
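As a library-free illustration of what such a comparison measures, one classic faithfulness check ("pixel flipping") removes the most relevant pixels first and records how quickly the target score drops. The model, attribution map, and zero baseline below are illustrative assumptions, not the toolkit's API.

    import torch

    def pixel_flipping(model, x, target, attribution, steps=20):
        """Zero out the most-relevant pixels first and record how fast the
        target score drops; faithful attributions should give a steep drop."""
        model.eval()
        order = attribution.abs().flatten().argsort(descending=True)
        x_pert = x.clone().flatten()
        chunk = max(1, len(order) // steps)
        scores = []
        with torch.no_grad():
            for i in range(0, len(order), chunk):
                x_pert[order[i:i + chunk]] = 0.0  # zero is the assumed baseline
                out = model(x_pert.view_as(x).unsqueeze(0))
                scores.append(out[0, target].item())
        return scores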
no code implementations • 26 Jan 2022 • Dennis Grinwald, Kirill Bykov, Shinichi Nakajima, Marina M.-C. Höhne
Explainable Artificial Intelligence (XAI) aims to make learning machines less opaque, and offers researchers and practitioners various tools to reveal the decision-making strategies of neural networks.
no code implementations • 10 Jan 2022 • Srishti Gautam, Marina M.-C. Höhne, Stine Hansen, Robert Jenssen, Michael Kampffmeyer
The recent trend of integrating multi-source Chest X-Ray datasets to improve automated diagnostics raises concerns that models learn to exploit source-specific correlations to improve performance by recognizing the source domain of an image rather than the medical pathology.
no code implementations • 29 Sep 2021 • Yamen Ali, Aiham Taleb, Marina M.-C. Höhne, Christoph Lippert
Self-supervised learning methods can be used to learn meaningful representations from unlabeled data that can be transferred to supervised downstream tasks to reduce the need for labeled data.
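A common way to exploit such representations is a linear probe: freeze the pretrained encoder and train only a small head on the labeled downstream task. The sketch below assumes a PyTorch encoder and data loader; all names are illustrative.

    import torch
    import torch.nn as nn

    def linear_probe(encoder, feat_dim, n_classes, loader, epochs=5, lr=1e-3):
        """Train only a linear head on top of frozen pretrained features."""
        encoder.eval()
        for p in encoder.parameters():
            p.requires_grad_(False)  # keep the self-supervised features fixed
        head = nn.Linear(feat_dim, n_classes)
        opt = torch.optim.Adam(head.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in loader:
                with torch.no_grad():
                    z = encoder(x)  # (batch, feat_dim) representations
                loss = nn.functional.cross_entropy(head(z), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return head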
no code implementations • 27 Aug 2021 • Srishti Gautam, Marina M.-C. Höhne, Stine Hansen, Robert Jenssen, Michael Kampffmeyer
Current machine learning models have shown high efficiency in solving a wide variety of real-world problems.
no code implementations • 23 Aug 2021 • Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft
Bayesian approaches such as Bayesian Neural Networks (BNNs) have a limited form of transparency (model transparency) built in through their prior weight distribution, but notably they lack explanations of their predictions for given instances.
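One way to close this gap, sketched below under simplifying assumptions, is to draw several networks from an approximate posterior (here, crudely, via MC dropout) and aggregate a simple gradient explanation over the draws, yielding a mean explanation plus an uncertainty estimate. This is an illustration, not the paper's exact procedure.

    import torch

    def bayesian_saliency(model, x, target, n_draws=20):
        """Average a gradient explanation over posterior draws (MC dropout)."""
        model.train()  # keep dropout active as a crude posterior approximation
        maps = []
        for _ in range(n_draws):
            x_in = x.clone().requires_grad_(True)
            score = model(x_in)[0, target]  # assumes a (1, C, H, W) input batch
            maps.append(torch.autograd.grad(score, x_in)[0])
        maps = torch.stack(maps)
        return maps.mean(0), maps.std(0)  # mean explanation and its spread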
2 code implementations • 18 Jun 2021 • Kirill Bykov, Anna Hedström, Shinichi Nakajima, Marina M.-C. Höhne
For local explanation, stochasticity is known to help: a simple method, called SmoothGrad, has improved the visual quality of gradient-based attribution by adding noise to the input space and averaging the explanations of the noisy inputs.
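For reference, the SmoothGrad baseline mentioned above fits in a few lines; a minimal sketch assuming a PyTorch classifier, with the noise level and sample count as illustrative choices:

    import torch

    def smoothgrad(model, x, target, n_samples=25, sigma=0.15):
        """Average input gradients over noisy copies of the input."""
        model.eval()
        total = torch.zeros_like(x)
        for _ in range(n_samples):
            noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
            score = model(noisy)[torch.arange(x.shape[0]), target].sum()
            total += torch.autograd.grad(score, noisy)[0]
        return total / n_samples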
1 code implementation • 16 Jun 2020 • Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft
Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks, in order to make the machines more transparent for the user and, moreover, trustworthy for applications in, e.g., safety-critical areas.