Search Results for author: Kirill Bykov

Found 8 papers, 5 papers with code

Manipulating Feature Visualizations with Gradient Slingshots

1 code implementation • 11 Jan 2024 • Dilyara Bareeva, Marina M.-C. Höhne, Alexander Warnecke, Lukas Pirch, Klaus-Robert Müller, Konrad Rieck, Kirill Bykov

Deep Neural Networks (DNNs) are capable of learning complex and versatile representations; however, the semantic nature of the learned concepts remains unknown.

Decision Making

Labeling Neural Representations with Inverse Recognition

1 code implementation • NeurIPS 2023 • Kirill Bykov, Laura Kopf, Shinichi Nakajima, Marius Kloft, Marina M.-C. Höhne

Deep Neural Networks (DNNs) demonstrate remarkable capabilities in learning complex hierarchical data representations, but the nature of these representations remains largely unknown.

Decision Making • Segmentation

Mark My Words: Dangers of Watermarked Images in ImageNet

no code implementations • 9 Mar 2023 • Kirill Bykov, Klaus-Robert Müller, Marina M.-C. Höhne

The utilization of pre-trained networks, especially those trained on ImageNet, has become a common practice in Computer Vision.

DORA: Exploring Outlier Representations in Deep Neural Networks

1 code implementation • 9 Jun 2022 • Kirill Bykov, Mayukh Deb, Dennis Grinwald, Klaus-Robert Müller, Marina M.-C. Höhne

Deep Neural Networks (DNNs) excel at learning complex abstractions within their internal representations.

Decision Making

Visualizing the Diversity of Representations Learned by Bayesian Neural Networks

no code implementations • 26 Jan 2022 • Dennis Grinwald, Kirill Bykov, Shinichi Nakajima, Marina M.-C. Höhne

Explainable Artificial Intelligence (XAI) aims to make learning machines less opaque, and offers researchers and practitioners various tools to reveal the decision-making strategies of neural networks.

Contrastive Learning • Decision Making • +2

Explaining Bayesian Neural Networks

no code implementations • 23 Aug 2021 • Kirill Bykov, Marina M.-C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft

Bayesian approaches such as Bayesian Neural Networks (BNNs) already have a limited form of transparency (model transparency) built in through their prior weight distribution, but notably, they lack explanations of their predictions for given instances.

Decision Making • Explainable Artificial Intelligence (XAI)

NoiseGrad: Enhancing Explanations by Introducing Stochasticity to Model Weights

2 code implementations • 18 Jun 2021 • Kirill Bykov, Anna Hedström, Shinichi Nakajima, Marina M.-C. Höhne

For local explanation, stochasticity is known to help: a simple method, called SmoothGrad, has improved the visual quality of gradient-based attribution by adding noise to the input space and averaging the explanations of the noisy inputs.

Decision Making
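As a rough illustration of the SmoothGrad idea described in the abstract above (perturb the input with noise and average gradient-based explanations), here is a minimal, hypothetical PyTorch sketch. The toy model, function name, and noise parameters are placeholders, and this is not the authors' NoiseGrad implementation, which instead introduces stochasticity into the model weights.

```python
# Minimal SmoothGrad-style sketch (assumed toy PyTorch classifier; not the
# authors' NoiseGrad code). Gradients are averaged over noisy copies of the
# input to smooth the resulting attribution map.
import torch
import torch.nn as nn

def smoothgrad(model, x, target, n_samples=25, sigma=0.15):
    """Average input gradients over Gaussian-perturbed copies of x."""
    model.eval()
    grads = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        score = model(noisy)[:, target].sum()      # class score for the target
        grads += torch.autograd.grad(score, noisy)[0]
    return grads / n_samples

# Usage with a placeholder model and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
x = torch.rand(1, 3, 8, 8)
attribution = smoothgrad(model, x, target=3)
print(attribution.shape)  # torch.Size([1, 3, 8, 8])
```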

How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks

1 code implementation • 16 Jun 2020 • Kirill Bykov, Marina M.-C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft

Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks, in order to make the machines more transparent for the user and, furthermore, trustworthy for applications in, e.g., safety-critical areas.

Explainable Artificial Intelligence (XAI)
