Search Results for author: Christopher J. Anders

Found 7 papers, 4 papers with code

Towards Robust Explanations for Deep Neural Networks

no code implementations • 18 Dec 2020 • Ann-Kathrin Dombrowski, Christopher J. Anders, Klaus-Robert Müller, Pan Kessel

Explanation methods shed light on the decision process of black-box classifiers such as deep neural networks.
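A minimal sketch of what such an explanation method can look like in practice: a gradient-based saliency map, one common way to attribute a classifier's decision to its input pixels. The tiny untrained model and random input below are stand-ins, not the models studied in the paper.

```python
# Gradient-based saliency: gradient of the predicted logit w.r.t. the input
# pixels serves as a per-pixel relevance map. Model and input are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input image
logits = model(x)
target = logits.argmax(dim=1)

# Backpropagate the target logit to the input and take per-pixel magnitudes.
logits[0, target].backward()
saliency = x.grad.abs().max(dim=1).values  # shape (1, 32, 32)
print(saliency.shape)
```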

Estimation of Thermodynamic Observables in Lattice Field Theories with Deep Generative Models

no code implementations • 14 Jul 2020 • Kim A. Nicoli, Christopher J. Anders, Lena Funcke, Tobias Hartung, Karl Jansen, Pan Kessel, Shinichi Nakajima, Paolo Stornati

In this work, we demonstrate that applying deep generative machine learning models to lattice field theory is a promising route to solving problems for which Markov Chain Monte Carlo (MCMC) methods are problematic.
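A toy sketch of the general idea behind this line of work: draw samples from a tractable generative proposal and reweight them by the Boltzmann factor of the action to estimate observables without MCMC. The one-site phi^4 "action" and Gaussian proposal below are illustrative stand-ins, not the paper's models or lattice setup.

```python
# Importance-weighted estimation of an observable with a generative proposal.
# All quantities here (action, proposal, observable) are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

def action(phi, m2=1.0, lam=0.1):
    # single-site phi^4 action, used only as a toy example
    return 0.5 * m2 * phi**2 + lam * phi**4

n = 100_000
phi = rng.normal(0.0, 1.0, size=n)               # samples from the proposal q
log_q = -0.5 * phi**2 - 0.5 * np.log(2 * np.pi)  # log density of the proposal

log_w = -action(phi) - log_q                     # unnormalized importance weights
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Self-normalized importance-sampling estimate of <phi^2>
phi2 = np.sum(w * phi**2)
print(f"<phi^2> ~= {phi2:.4f}")
```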

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

no code implementations • 17 Mar 2020 • Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller

With the broad and highly successful use of machine learning in industry and the sciences, there has been a growing demand for Explainable AI.

Task: Interpretable Machine Learning

Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models

2 code implementations • 22 Dec 2019 • Christopher J. Anders, Leander Weber, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin

Building on a recent technique, Spectral Relevance Analysis, we propose the following technical contributions and resulting findings: (a) a scalable quantification of artifactual and poisoned classes in which the machine learning models under study exhibit Clever Hans (CH) behavior, and (b) several approaches, denoted Class Artifact Compensation (ClArC), that effectively and significantly reduce a model's CH behavior.

Task: Fine-tuning
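A hypothetical illustration of the artifact-suppression idea described in the ClArC abstract above: estimate a direction in feature space associated with a spurious "Clever Hans" artifact and project it out before classification. This is one reading of the abstract, not necessarily the paper's exact procedure; all data and names below are stand-ins.

```python
# Suppress a Clever Hans artifact by projecting features onto the subspace
# orthogonal to an estimated artifact direction (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)

# Stand-in features: 500 clean samples and 500 samples carrying an artifact
# that shifts feature dimension 0.
feats_clean = rng.normal(size=(500, 50))
feats_artifact = rng.normal(size=(500, 50)) + 3.0 * np.eye(50)[0]

# Estimate the artifact direction as the mean difference between the groups.
v = feats_artifact.mean(axis=0) - feats_clean.mean(axis=0)
v /= np.linalg.norm(v)

def suppress_artifact(features, direction):
    # Remove each feature vector's component along the artifact direction.
    return features - np.outer(features @ direction, direction)

corrected = suppress_artifact(feats_artifact, v)
print(np.abs(corrected @ v).max())  # remaining component along v is ~0
```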
