1 code implementation • 11 Jan 2024 • Dilyara Bareeva, Marina M.-C. Höhne, Alexander Warnecke, Lukas Pirch, Klaus-Robert Müller, Konrad Rieck, Kirill Bykov
Deep Neural Networks (DNNs) are capable of learning complex and versatile representations; however, the semantic nature of the learned concepts remains unknown.
1 code implementation • NeurIPS 2023 • Kirill Bykov, Laura Kopf, Shinichi Nakajima, Marius Kloft, Marina M.-C. Höhne
Deep Neural Networks (DNNs) demonstrate remarkable capabilities in learning complex hierarchical data representations, but the nature of these representations remains largely unknown.
no code implementations • 9 Mar 2023 • Kirill Bykov, Klaus-Robert Müller, Marina M.-C. Höhne
The utilization of pre-trained networks, especially those trained on ImageNet, has become a common practice in Computer Vision.
1 code implementation • 9 Jun 2022 • Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Müller, Marina M.-C. Höhne
Deep Neural Networks (DNNs) excel at learning complex abstractions within their internal representations.
no code implementations • 26 Jan 2022 • Dennis Grinwald, Kirill Bykov, Shinichi Nakajima, Marina M.-C. Höhne
Explainable Artificial Intelligence (XAI) aims to make learning machines less opaque, and offers researchers and practitioners various tools to reveal the decision-making strategies of neural networks.
no code implementations • 23 Aug 2021 • Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft
Bayesian approaches such as Bayesian Neural Networks (BNNs) have a limited form of transparency (model transparency) built in through their prior weight distribution, but notably lack explanations of their predictions for given instances.
2 code implementations • 18 Jun 2021 • Kirill Bykov, Anna Hedström, Shinichi Nakajima, Marina M.-C. Höhne
For local explanation, stochasticity is known to help: a simple method, called SmoothGrad, has improved the visual quality of gradient-based attribution by adding noise to the input space and averaging the explanations of the noisy inputs.
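The SmoothGrad idea described above can be sketched in a few lines: perturb the input with Gaussian noise, compute the gradient-based attribution for each noisy copy, and average. The sketch below uses a hypothetical toy model `f(x) = sum(x**2)` with an analytic input gradient, not the paper's actual code.

```python
import numpy as np

# Toy "model" f(x) = sum(x**2); its input gradient is 2*x.
# In practice this would be the gradient of a DNN output w.r.t. its input.
def grad(x):
    return 2.0 * x

def smoothgrad(x, n_samples=50, sigma=0.1, seed=0):
    """Average input gradients over Gaussian-perturbed copies of x."""
    rng = np.random.default_rng(seed)
    grads = [grad(x + rng.normal(0.0, sigma, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

x = np.array([1.0, -2.0, 3.0])
sg = smoothgrad(x)
# For this quadratic toy model the noise averages out,
# so the smoothed attribution stays close to the exact gradient 2*x.
```

For a real network, `grad` would be replaced by automatic differentiation of the class score with respect to the input pixels; the averaging step is what suppresses high-frequency noise in the resulting saliency map.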
1 code implementation • 16 Jun 2020 • Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft
Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks, in order to make the machines more transparent for the user and, furthermore, trustworthy for applications in e.g. safety-critical areas.