Search Results for author: Christopher J. Anders

Found 9 papers, 4 papers with code

Detecting and Mitigating Mode-Collapse for Flow-based Sampling of Lattice Field Theories

no code implementations · 27 Feb 2023 · Kim A. Nicoli, Christopher J. Anders, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima

In this work, we first point out that the tunneling problem is also present for normalizing flows but is shifted from the sampling to the training phase of the algorithm.
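
As a rough illustration of that shift (a minimal sketch, not the paper's setup: the double-well action and the one-parameter affine flow are toy assumptions), reverse-KL self-training only ever sees the flow's own samples, so a bimodal target tends to collapse onto a single mode during training rather than during sampling:

```python
import math
import torch

def action(phi):
    # Hypothetical double-well action with two degenerate minima; a stand-in
    # for the bimodal distributions where tunneling problems arise.
    return (phi**2 - 2.0)**2

# A deliberately simple flow: an affine map of a standard normal.
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)

for step in range(2000):
    z = torch.randn(1024, 1)
    phi = mu + log_sigma.exp() * z                    # samples from the flow q
    log_q = -0.5 * z**2 - 0.5 * math.log(2 * math.pi) - log_sigma
    # Reverse KL(q || e^{-S}/Z): the loss only sees samples the flow already
    # produces, so training can lock onto one well -- the mode-collapse
    # counterpart of the tunneling problem, now in the training phase.
    loss = (log_q + action(phi)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"flow mean after training: {mu.item():+.3f}")  # near one well only
```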

PatClArC: Using Pattern Concept Activation Vectors for Noise-Robust Model Debugging

no code implementations · 7 Feb 2022 · Frederik Pahde, Leander Weber, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin

We demonstrate that pattern-based artifact modeling has beneficial effects on the application of CAVs as a means to remove the influence of confounding features from models via the ClArC framework.
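
The filter-versus-pattern distinction behind this can be sketched on synthetic data (a hypothetical setup: ridge regression stands in for the usual CAV classifier, and the projection step is a generic concept-removal operation, not the paper's exact ClArC variant):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer activations: a binary concept (artifact present or not) expressed
# along one direction, buried in correlated noise.
n, d = 500, 10
t = rng.integers(0, 2, size=n).astype(float)          # concept labels
signal_dir = np.zeros(d)
signal_dir[0] = 1.0
noise = rng.normal(size=(n, d)) @ rng.normal(size=(d, d)) * 0.5
acts = np.outer(t - t.mean(), signal_dir) + noise

# Filter-based CAV: the weight vector of a linear model separating the concept
# (ridge regression here as a stand-in for the usual SVM/logistic classifier).
tc = t - t.mean()
w = np.linalg.solve(acts.T @ acts + 1e-3 * np.eye(d), acts.T @ tc)
filter_cav = w / np.linalg.norm(w)

# Pattern-based CAV: the covariance between activations and the concept label,
# which recovers the signal direction even when the noise is correlated.
pattern = (acts - acts.mean(axis=0)).T @ tc / n
pattern_cav = pattern / np.linalg.norm(pattern)

# Projective concept removal: strip each activation's component along the CAV.
def remove_concept(a, v):
    return a - np.outer(a @ v, v)

cleaned = remove_concept(acts, pattern_cav)
print("signal alignment  filter:", abs(filter_cav[0]),
      " pattern:", abs(pattern_cav[0]))
```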


Towards Robust Explanations for Deep Neural Networks

no code implementations · 18 Dec 2020 · Ann-Kathrin Dombrowski, Christopher J. Anders, Klaus-Robert Müller, Pan Kessel

Explanation methods shed light on the decision process of black-box classifiers such as deep neural networks.
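
For reference, one common explanation method of this kind can be written in a few lines (a minimal sketch: gradient × input is just one such method, not necessarily the one analyzed in the paper, and the model here is a dummy):

```python
import torch
import torch.nn as nn

# A dummy differentiable classifier; any black box with gradients works alike.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

def gradient_x_input(model, x, target):
    """Per-feature relevance via the gradient x input rule."""
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target]        # logit of the class being explained
    score.backward()
    return (x.grad * x).detach()

x = torch.rand(1, 784)                 # stand-in for a flattened image
heatmap = gradient_x_input(model, x, target=3)
print(heatmap.shape)                   # torch.Size([1, 784])
```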

Estimation of Thermodynamic Observables in Lattice Field Theories with Deep Generative Models

no code implementations · 14 Jul 2020 · Kim A. Nicoli, Christopher J. Anders, Lena Funcke, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima, Paolo Stornati

In this work, we demonstrate that applying deep generative machine learning models for lattice field theory is a promising route for solving problems where Markov Chain Monte Carlo (MCMC) methods are problematic.
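
The core estimator such approaches rely on can be sketched in a few lines: samples from the generative model are reweighted to the Boltzmann distribution, which also yields the partition function that plain MCMC cannot estimate directly. The Gaussian proposal below stands in for a trained model, and the single-site action is a toy assumption:

```python
import torch

# Toy single-site action; the paper treats full lattice field theories.
def action(phi):
    return 0.5 * phi**2 + 0.1 * phi**4

# A fixed Gaussian stands in for a trained deep generative model q(phi).
q = torch.distributions.Normal(0.0, 1.0)
phi = q.sample((100_000,))

# Importance weights w ~ exp(-S(phi)) / q(phi) reweight the model's samples
# to the Boltzmann distribution -- no Markov chain, hence no autocorrelation.
log_w = -action(phi) - q.log_prob(phi)
w = torch.softmax(log_w, dim=0)            # self-normalized weights

# An observable, e.g. <phi^2>, and the log partition function.
phi2 = (w * phi**2).sum()
log_Z = torch.logsumexp(log_w, dim=0) - torch.log(torch.tensor(float(len(phi))))
print(f"<phi^2> ~ {phi2.item():.4f}   log Z ~ {log_Z.item():.4f}")
```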


Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models

2 code implementations · 22 Dec 2019 · Christopher J. Anders, Leander Weber, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin

Based on a recent technique, Spectral Relevance Analysis (SpRAy), we propose the following technical contributions and resulting findings: (a) a scalable quantification of artifactual and poisoned classes in which the machine learning models under study exhibit Clever Hans (CH) behavior, and (b) several approaches denoted as Class Artifact Compensation (ClArC), which effectively and significantly reduce a model's CH behavior.
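
The quantification step in (a) can be pictured as clustering explanation heatmaps so that classes containing a tight cluster of look-alike attributions stand out (a toy version of that idea: the heatmaps are synthetic and a generic spectral clustering stands in for the paper's SpRAy pipeline):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Toy relevance heatmaps: most explanations focus on the object, a subset on a
# spurious corner artifact (a watermark, say) -- the Clever Hans signature.
rng = np.random.default_rng(1)
maps = rng.random((200, 8, 8)) * 0.1
maps[:160, 3:5, 3:5] += 1.0               # "object" explanations
maps[160:, 0, 0] += 1.0                   # "artifact" explanations

# SpRAy-style step: embed the flattened heatmaps and cluster them; a compact
# cluster of near-identical explanations flags a candidate artifact.
flat = maps.reshape(len(maps), -1)
labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                            n_neighbors=10, random_state=0).fit_predict(flat)
print("cluster sizes:", np.bincount(labels))
```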
