Search Results for author: Klaus-Robert Müller

Found 113 papers, 42 papers with code

Scrutinizing XAI using linear ground-truth data with suppressor variables

1 code implementation14 Nov 2021 Rick Wilming, Céline Budding, Klaus-Robert Müller, Stefan Haufe

It has been demonstrated that some saliency methods can highlight features that have no statistical association with the prediction target (suppressor variables).
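The suppressor-variable phenomenon described above can be reproduced in a few lines. Below is a minimal sketch (assuming unit-variance Gaussian data; all variable names are illustrative) that constructs a feature with no statistical association with the target, which nonetheless receives a large weight in the optimal linear model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)          # signal, defines the target
d = rng.normal(size=n)          # distractor, independent of the target
y = z                           # prediction target
x1 = z + d                      # measured feature: signal + distractor
x2 = d                          # suppressor: distractor only, no association with y

X = np.column_stack([x1, x2])
w, *_ = np.linalg.lstsq(X, y, rcond=None)   # ordinary least squares

print(np.corrcoef(x2, y)[0, 1])  # ~0: the suppressor is uncorrelated with y
print(w)                          # ~[1, -1]: yet it receives a large weight
```

Any saliency method that reads importance off such model weights would highlight x2, even though x2 on its own carries no information about the target.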

Feature Importance

Evaluating deep transfer learning for whole-brain cognitive decoding

1 code implementation1 Nov 2021 Armin W. Thomas, Ulman Lindenberger, Wojciech Samek, Klaus-Robert Müller

Here, we systematically evaluate TL for the application of DL models to the decoding of cognitive states (e.g., viewing images of faces or houses) from whole-brain functional Magnetic Resonance Imaging (fMRI) data.


Transfer Learning

Explaining Bayesian Neural Networks

no code implementations23 Aug 2021 Kirill Bykov, Marina M. -C. Höhne, Adelaida Creosteanu, Klaus-Robert Müller, Frederick Klauschen, Shinichi Nakajima, Marius Kloft

Bayesian approaches such as Bayesian Neural Networks (BNNs) so far have a limited form of transparency (model transparency) already built-in through their prior weight distribution, but notably, they lack explanations of their predictions for given instances.

Decision Making

On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy

no code implementations25 Jun 2021 Vignesh Srinivasan, Nils Strodthoff, Jackie Ma, Alexander Binder, Klaus-Robert Müller, Wojciech Samek

Our results indicate that models initialized from ImageNet pretraining show a significant increase in performance, generalization and robustness to image distortions.

Contrastive Learning Diabetic Retinopathy Grading

Learning Domain Invariant Representations by Joint Wasserstein Distance Minimization

1 code implementation9 Jun 2021 Léo Andéol, Yusei Kawakami, Yuichiro Wada, Takafumi Kanamori, Klaus-Robert Müller, Grégoire Montavon

Domain shifts in the training data are common in practical applications of machine learning; they occur, for instance, when the data come from different sources.

BIGDML: Towards Exact Machine Learning Force Fields for Materials

no code implementations8 Jun 2021 Huziel E. Sauceda, Luis E. Gálvez-González, Stefan Chmiela, Lauro Oliver Paz-Borbón, Klaus-Robert Müller, Alexandre Tkatchenko

Machine-learning force fields (MLFF) should be accurate, computationally and data efficient, and applicable to molecules, materials, and interfaces thereof.
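A basic physical constraint behind any MLFF is that forces are the negative gradient of the learned energy surface, F = -dE/dr. The toy sketch below (not BIGDML; a hypothetical harmonic "learned" energy with assumed parameters r0 and k) illustrates recovering forces from an energy model via central finite differences:

```python
import numpy as np

# Toy "learned" energy model for a diatomic: harmonic well around r0.
r0, k = 1.0, 2.0
energy = lambda r: 0.5 * k * (r - r0) ** 2

def force(r, h=1e-5):
    # Force is the negative gradient of the energy surface,
    # approximated here by central finite differences.
    return -(energy(r + h) - energy(r - h)) / (2 * h)

print(force(1.3))   # ≈ -k * (r - r0) = -0.6
print(force(1.0))   # ≈ 0 at the equilibrium distance
```

Gradient-domain approaches such as (s)GDML instead train directly on force samples, which this sketch does not attempt to show.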

SE(3)-equivariant prediction of molecular wavefunctions and electronic densities

no code implementations NeurIPS 2021 Oliver T. Unke, Mihail Bogojeski, Michael Gastegger, Mario Geiger, Tess Smidt, Klaus-Robert Müller

Machine learning has enabled the prediction of quantum chemical properties with high accuracy and efficiency, making it possible to bypass computationally costly ab initio calculations.

Transfer Learning

SpookyNet: Learning Force Fields with Electronic Degrees of Freedom and Nonlocal Effects

no code implementations1 May 2021 Oliver T. Unke, Stefan Chmiela, Michael Gastegger, Kristof T. Schütt, Huziel E. Sauceda, Klaus-Robert Müller

Machine-learned force fields (ML-FFs) combine the accuracy of ab initio methods with the efficiency of conventional force fields.

Towards Robust Explanations for Deep Neural Networks

no code implementations18 Dec 2020 Ann-Kathrin Dombrowski, Christopher J. Anders, Klaus-Robert Müller, Pan Kessel

Explanation methods shed light on the decision process of black-box classifiers such as deep neural networks.

Machine learning of solvent effects on molecular spectra and reactions

1 code implementation28 Oct 2020 Michael Gastegger, Kristof T. Schütt, Klaus-Robert Müller

We employ FieldSchNet to study the influence of solvent effects on molecular spectra and a Claisen rearrangement reaction.

Machine Learning Force Fields

no code implementations14 Oct 2020 Oliver T. Unke, Stefan Chmiela, Huziel E. Sauceda, Michael Gastegger, Igor Poltavsky, Kristof T. Schütt, Alexandre Tkatchenko, Klaus-Robert Müller

In recent years, the use of Machine Learning (ML) in computational chemistry has enabled numerous advances previously out of reach due to the computational complexity of traditional electronic-structure methods.

A Unifying Review of Deep and Shallow Anomaly Detection

no code implementations24 Sep 2020 Lukas Ruff, Jacob R. Kauffmann, Robert A. Vandermeulen, Grégoire Montavon, Wojciech Samek, Marius Kloft, Thomas G. Dietterich, Klaus-Robert Müller

Deep learning approaches to anomaly detection have recently improved the state of the art in detection performance on complex datasets such as large collections of images or text.

Anomaly Detection

Langevin Cooling for Domain Translation

1 code implementation31 Aug 2020 Vignesh Srinivasan, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima

Domain translation is the task of finding correspondence between two domains.

Translation

Explainable Deep One-Class Classification

1 code implementation ICLR 2021 Philipp Liznerski, Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Marius Kloft, Klaus-Robert Müller

Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space causing anomalies to be mapped away.

Ranked #13 on Anomaly Detection on MVTec AD (Segmentation AUROC metric, using extra training data)

Anomaly Detection Classification +2

The Clever Hans Effect in Anomaly Detection

no code implementations18 Jun 2020 Jacob Kauffmann, Lukas Ruff, Grégoire Montavon, Klaus-Robert Müller

The 'Clever Hans' effect occurs when the learned model produces correct predictions based on the 'wrong' features.

Anomaly Detection Outlier Detection

How Much Can I Trust You? -- Quantifying Uncertainties in Explaining Neural Networks

1 code implementation16 Jun 2020 Kirill Bykov, Marina M. -C. Höhne, Klaus-Robert Müller, Shinichi Nakajima, Marius Kloft

Explainable AI (XAI) aims to provide interpretations for predictions made by learning machines, such as deep neural networks, in order to make the machines more transparent for the user and trustworthy, e.g., for applications in safety-critical areas.

Higher-Order Explanations of Graph Neural Networks via Relevant Walks

no code implementations5 Jun 2020 Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, Grégoire Montavon

In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e., by identifying groups of edges that jointly contribute to the prediction.

Image Classification Sentiment Analysis

Rethinking Assumptions in Deep Anomaly Detection

1 code implementation30 May 2020 Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Klaus-Robert Müller, Marius Kloft

Though anomaly detection (AD) can be viewed as a classification problem (nominal vs. anomalous) it is usually treated in an unsupervised manner since one typically does not have access to, or it is infeasible to utilize, a dataset that sufficiently characterizes what it means to be "anomalous."

Anomaly Detection

Ensemble Learning of Coarse-Grained Molecular Dynamics Force Fields with a Kernel Approach

no code implementations4 May 2020 Jiang Wang, Stefan Chmiela, Klaus-Robert Müller, Frank Noé, Cecilia Clementi

Using ensemble learning and stratified sampling, we propose a 2-layer training scheme that enables GDML to learn an effective coarse-grained model.

Ensemble Learning

Risk Estimation of SARS-CoV-2 Transmission from Bluetooth Low Energy Measurements

no code implementations22 Apr 2020 Felix Sattler, Jackie Ma, Patrick Wagner, David Neumann, Markus Wenzel, Ralf Schäfer, Wojciech Samek, Klaus-Robert Müller, Thomas Wiegand

Digital contact tracing approaches based on Bluetooth low energy (BLE) have the potential to efficiently contain and delay outbreaks of infectious diseases such as the ongoing SARS-CoV-2 pandemic.

Automatic Identification of Types of Alterations in Historical Manuscripts

no code implementations20 Mar 2020 David Lassner, Anne Baillot, Sergej Dogadov, Klaus-Robert Müller, Shinichi Nakajima

In addition to the findings based on the digital scholarly edition Berlin Intellectuals, we present a general framework for the analysis of text genesis that can be used in the context of other digital resources representing document variants.

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

no code implementations17 Mar 2020 Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller

With the broader and highly successful usage of machine learning in industry and the sciences, there has been a growing demand for Explainable AI.

Interpretable Machine Learning

Building and Interpreting Deep Similarity Models

1 code implementation11 Mar 2020 Oliver Eberle, Jochen Büttner, Florian Kräutli, Klaus-Robert Müller, Matteo Valleriani, Grégoire Montavon

Many learning algorithms such as kernel machines, nearest neighbors, clustering, or anomaly detection, are based on the concept of 'distance' or 'similarity'.

Anomaly Detection

Autonomous robotic nanofabrication with reinforcement learning

1 code implementation27 Feb 2020 Philipp Leinen, Malte Esders, Kristof T. Schütt, Christian Wagner, Klaus-Robert Müller, F. Stefan Tautz

Here, we present a strategy to work around both obstacles, and demonstrate autonomous robotic nanofabrication by manipulating single molecules.

Forecasting Industrial Aging Processes with Machine Learning Methods

no code implementations5 Feb 2020 Mihail Bogojeski, Simeon Sauer, Franziska Horn, Klaus-Robert Müller

Accurately predicting industrial aging processes makes it possible to schedule maintenance events further in advance, ensuring a cost-efficient and reliable operation of the plant.

Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models

2 code implementations22 Dec 2019 Christopher J. Anders, Leander Weber, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin

Based on a recent technique, Spectral Relevance Analysis, we propose the following technical contributions and resulting findings: (a) a scalable quantification of artifactual and poisoned classes where the machine learning models under study exhibit CH behavior, and (b) several approaches, denoted Class Artifact Compensation (ClArC), which are able to effectively and significantly reduce a model's CH behavior.

Fine-tuning

Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning

1 code implementation18 Dec 2019 Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs.

Fine-tuning Model Compression +2

Clustered Federated Learning: Model-Agnostic Distributed Multi-Task Optimization under Privacy Constraints

1 code implementation4 Oct 2019 Felix Sattler, Klaus-Robert Müller, Wojciech Samek

Federated Learning (FL) is currently the most widely adopted framework for collaborative training of (deep) machine learning models under privacy constraints.

Federated Learning Multi-Task Learning

Explaining and Interpreting LSTMs

no code implementations25 Sep 2019 Leila Arras, Jose A. Arjona-Medina, Michael Widrich, Grégoire Montavon, Michael Gillhofer, Klaus-Robert Müller, Sepp Hochreiter, Wojciech Samek

While neural networks have acted as a strong unifying force in the design of modern AI systems, the neural network architectures themselves remain highly heterogeneous due to the variety of tasks to be solved.

Deep Transfer Learning For Whole-Brain fMRI Analyses

no code implementations2 Jul 2019 Armin W. Thomas, Klaus-Robert Müller, Wojciech Samek

Even further, the pre-trained DL model variant is already able to correctly decode 67.51% of the cognitive states from a test dataset with 100 individuals, when fine-tuned on a dataset comprising only three subjects.

Transfer Learning

From Clustering to Cluster Explanations via Neural Networks

no code implementations18 Jun 2019 Jacob Kauffmann, Malte Esders, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller

A wealth of algorithms have been developed to extract natural cluster structure in data.

Evaluating Recurrent Neural Network Explanations

1 code implementation WS 2019 Leila Arras, Ahmed Osman, Klaus-Robert Müller, Wojciech Samek

Recently, several methods have been proposed to explain the predictions of recurrent neural networks (RNNs), in particular of LSTMs.

Sentiment Analysis

Black-Box Decision based Adversarial Attack with Symmetric $α$-stable Distribution

no code implementations11 Apr 2019 Vignesh Srinivasan, Ercan E. Kuruoglu, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima

Many existing methods employ Gaussian random variables for exploring the data space to find the most adversarial (for attacking) or least adversarial (for defense) point.

Adversarial Attack

Comment on "Solving Statistical Mechanics Using VANs": Introducing saVANt - VANs Enhanced by Importance and MCMC Sampling

no code implementations26 Mar 2019 Kim Nicoli, Pan Kessel, Nils Strodthoff, Wojciech Samek, Klaus-Robert Müller, Shinichi Nakajima

In this comment on "Solving Statistical Mechanics Using Variational Autoregressive Networks" by Wu et al., we propose a subtle yet powerful modification of their approach.

Robust and Communication-Efficient Federated Learning from Non-IID Data

1 code implementation7 Mar 2019 Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

Federated Learning allows multiple parties to jointly train a deep learning model on their combined data, without any of the participants having to reveal their local data to a centralized server.
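The server-side step of this collaborative training can be sketched as a data-size-weighted average of client model updates (FedAvg-style). This is a minimal illustration only; the paper's contribution concerns compressing these updates for communication efficiency, which is not shown, and all names and numbers below are assumed:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate client models by a data-size-weighted average (FedAvg-style)."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)               # (clients, params)
    return (sizes[:, None] * stacked).sum(0) / sizes.sum()

# Three clients with local linear-model weights and local dataset sizes.
local = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
avg = federated_average(local, [10, 10, 20])
print(avg)   # → [0.75 0.75]
```

No raw data leaves a client; only model parameters are exchanged with the server.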

Federated Learning

Estimating Local Function Complexity via Mixture of Gaussian Processes

no code implementations27 Feb 2019 Danny Panknin, Shinichi Nakajima, Thanh Binh Bui, Klaus-Robert Müller

Real world data often exhibit inhomogeneity, e.g., the noise level, the sampling distribution or the complexity of the target function may change over the input space.

Active Learning Gaussian Processes

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn

1 code implementation26 Feb 2019 Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller

Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior.

Molecular Force Fields with Gradient-Domain Machine Learning: Construction and Application to Dynamics of Small Molecules with Coupled Cluster Forces

1 code implementation19 Jan 2019 Huziel E. Sauceda, Stefan Chmiela, Igor Poltavsky, Klaus-Robert Müller, Alexandre Tkatchenko

The analysis of sGDML molecular dynamics trajectories yields new qualitative insights into dynamics and spectroscopy of small molecules close to spectroscopic accuracy.

Chemical Physics Computational Physics Data Analysis, Statistics and Probability

Automating the search for a patent's prior art with a full text similarity search

1 code implementation10 Jan 2019 Lea Helmers, Franziska Horn, Franziska Biegler, Tim Oppermann, Klaus-Robert Müller

The evaluation results show that our automated approach, besides accelerating the search process, also improves the search results for prior art with respect to their quality.

Entropy-Constrained Training of Deep Neural Networks

no code implementations18 Dec 2018 Simon Wiedemann, Arturo Marban, Klaus-Robert Müller, Wojciech Samek

We propose a general framework for neural network compression that is motivated by the Minimum Description Length (MDL) principle.

Neural Network Compression

sGDML: Constructing Accurate and Data Efficient Molecular Force Fields Using Machine Learning

1 code implementation12 Dec 2018 Stefan Chmiela, Huziel E. Sauceda, Igor Poltavsky, Klaus-Robert Müller, Alexandre Tkatchenko

We present an optimized implementation of the recently proposed symmetric gradient domain machine learning (sGDML) model.

Computational Physics

Learning representations of molecules and materials with atomistic neural networks

no code implementations11 Dec 2018 Kristof T. Schütt, Alexandre Tkatchenko, Klaus-Robert Müller

Deep Learning has been shown to learn efficient representations for structured data such as images, text or audio.

Analyzing Neuroimaging Data Through Recurrent Deep Learning Models

1 code implementation23 Oct 2018 Armin W. Thomas, Hauke R. Heekeren, Klaus-Robert Müller, Wojciech Samek

We further demonstrate DeepLight's ability to study the fine-grained temporo-spatial variability of brain activity over sequences of single fMRI samples.

Explaining the Unique Nature of Individual Gait Patterns with Deep Learning

1 code implementation13 Aug 2018 Fabian Horst, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller, Wolfgang I. Schöllhorn

Machine learning (ML) techniques such as (deep) artificial neural networks (DNN) are very successfully solving a plethora of tasks and provide new predictive models for complex physical, chemical, biological and social systems.

iNNvestigate neural networks!

1 code implementation13 Aug 2018 Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans

The presented library iNNvestigate addresses this by providing a common interface and out-of-the-box implementations of many analysis methods, including the reference implementations for PatternNet and PatternAttribution as well as for LRP methods.

Interpretable Machine Learning

Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals

2 code implementations9 Jul 2018 Sören Becker, Marcel Ackermann, Sebastian Lapuschkin, Klaus-Robert Müller, Wojciech Samek

Interpretability of deep neural networks is a recently emerging area of machine learning research targeting a better understanding of how models perform feature selection and derive their classification decisions.

Audio Classification Decision Making +2

Quantum-chemical insights from interpretable atomistic neural networks

no code implementations27 Jun 2018 Kristof T. Schütt, Michael Gastegger, Alexandre Tkatchenko, Klaus-Robert Müller

With the rise of deep neural networks for quantum chemistry applications, there is a pressing need for architectures that, beyond delivering accurate predictions of chemical properties, are readily interpretable by researchers.

Understanding Patch-Based Learning by Explaining Predictions

no code implementations11 Jun 2018 Christopher Anders, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller

We apply the deep Taylor / LRP technique to understand the deep network's classification decisions, and identify a "border effect": a tendency of the classifier to look mainly at the bordering frames of the input.

General Classification

Compact and Computationally Efficient Representation of Deep Neural Networks

no code implementations27 May 2018 Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

These new matrix formats have the novel property that their memory and algorithmic complexity are implicitly bounded by the entropy of the matrix, consequently implying that they are guaranteed to become more efficient as the entropy of the matrix is reduced.

Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication

no code implementations22 May 2018 Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

A major issue in distributed training is the limited communication bandwidth between contributing nodes or prohibitive communication cost in general.

Binarization

Towards Explaining Anomalies: A Deep Taylor Decomposition of One-Class Models

no code implementations16 May 2018 Jacob Kauffmann, Klaus-Robert Müller, Grégoire Montavon

The proposed One-Class DTD is applicable to a number of common distance-based SVM kernels and is able to reliably explain a wide set of data anomalies.

Edge Detection

Towards Exact Molecular Dynamics Simulations with Machine-Learned Force Fields

1 code implementation26 Feb 2018 Stefan Chmiela, Huziel E. Sauceda, Klaus-Robert Müller, Alexandre Tkatchenko

Molecular dynamics (MD) simulations employing classical force fields constitute the cornerstone of contemporary atomistic modeling in chemistry, biology, and materials science.

Chemical Physics

SchNet - a deep learning architecture for molecules and materials

5 code implementations J. Chem. Phys. 2017 Kristof T. Schütt, Huziel E. Sauceda, Pieter-Jan Kindermans, Alexandre Tkatchenko, Klaus-Robert Müller

Deep learning has led to a paradigm shift in artificial intelligence, including web, text and image search, speech recognition, as well as bioinformatics, with growing impact in chemical physics.

Formation Energy Chemical Physics Materials Science

Optimizing for Measure of Performance in Max-Margin Parsing

no code implementations5 Sep 2017 Alexander Bauer, Shinichi Nakajima, Nico Görnitz, Klaus-Robert Müller

Many statistical learning problems in the area of natural language processing, including sequence tagging, sequence segmentation and syntactic parsing, have been successfully approached by means of structured prediction methods.

Constituency Parsing Structured Prediction

Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models

no code implementations28 Aug 2017 Wojciech Samek, Thomas Wiegand, Klaus-Robert Müller

With the availability of large databases and recent improvements in deep learning methodology, the performance of AI systems is reaching or even exceeding the human level on an increasing number of complex tasks.

Explainable artificial intelligence General Classification +2

Minimizing Trust Leaks for Robust Sybil Detection

no code implementations ICML 2017 János Höner, Shinichi Nakajima, Alexander Bauer, Klaus-Robert Müller, Nico Görnitz

Sybil detection is a crucial task to protect online social networks (OSNs) against intruders who try to manipulate automatic services provided by OSNs to their customers.

Discovering topics in text datasets by visualizing relevant words

1 code implementation18 Jul 2017 Franziska Horn, Leila Arras, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

When dealing with large collections of documents, it is imperative to quickly get an overview of the texts' contents.

Exploring text datasets by visualizing relevant words

2 code implementations17 Jul 2017 Franziska Horn, Leila Arras, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

When working with a new dataset, it is important to first explore and familiarize oneself with it, before applying any advanced machine learning algorithms.

SchNet: A continuous-filter convolutional neural network for modeling quantum interactions

4 code implementations NeurIPS 2017 Kristof T. Schütt, Pieter-Jan Kindermans, Huziel E. Sauceda, Stefan Chmiela, Alexandre Tkatchenko, Klaus-Robert Müller

Deep learning has the potential to revolutionize quantum chemistry as it is ideally suited to learn representations for structured data and speed up the exploration of chemical space.

Formation Energy

Methods for Interpreting and Understanding Deep Neural Networks

no code implementations24 Jun 2017 Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller

This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions.

Explaining Recurrent Neural Network Predictions in Sentiment Analysis

1 code implementation WS 2017 Leila Arras, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations in the form of input space relevances for understanding feed-forward neural network classification decisions.

General Classification Interpretable Machine Learning +1

Predicting Pairwise Relations with Neural Similarity Encoders

1 code implementation6 Feb 2017 Franziska Horn, Klaus-Robert Müller

Matrix factorization is at the heart of many machine learning algorithms, for example, dimensionality reduction (e.g., kernel PCA) or recommender systems relying on collaborative filtering.
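The core operation a similarity encoder approximates can be sketched with a plain low-rank factorization of a similarity matrix, S ≈ YYᵀ. This sketch (assumed random data, linear-kernel similarities; the actual paper learns Y with a neural network) uses the top eigenpairs:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
S = X @ X.T                      # pairwise similarity (linear kernel) matrix

# Rank-2 factorization S ≈ Y Y^T via the top eigenpairs.
vals, vecs = np.linalg.eigh(S)
top = np.argsort(vals)[::-1][:2]
Y = vecs[:, top] * np.sqrt(vals[top])   # 2-d embedding preserving similarities

err = np.linalg.norm(S - Y @ Y.T) / np.linalg.norm(S)
print(err)   # relative reconstruction error of the rank-2 approximation
```

The embedding Y plays the role of the low-dimensional representation; a neural encoder allows the same mapping to be applied to unseen samples.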

Collaborative Filtering Dimensionality Reduction +1

Wasserstein Training of Restricted Boltzmann Machines

no code implementations NeurIPS 2016 Grégoire Montavon, Klaus-Robert Müller, Marco Cuturi

This metric between observations can then be used to define the Wasserstein distance between the distribution induced by the Boltzmann machine on the one hand, and that given by the training sample on the other hand.
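For intuition about the distance in question, the one-dimensional empirical case has a closed form: the Wasserstein-1 distance between two equal-size samples is the mean absolute difference of their sorted values. A minimal sketch (the paper works with a general metric between observations, which this does not cover):

```python
import numpy as np

def wasserstein_1d(x, y):
    """W1 distance between two equal-size 1-d samples: mean absolute
    difference of the sorted values (exact for empirical distributions)."""
    return np.abs(np.sort(x) - np.sort(y)).mean()

a = np.array([0.0, 1.0, 2.0])
b = np.array([1.0, 2.0, 3.0])
print(wasserstein_1d(a, b))   # → 1.0
```

Unlike KL-type divergences, this distance stays finite and informative even when the two samples have disjoint support.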

Denoising

Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation

no code implementations24 Nov 2016 Wojciech Samek, Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Klaus-Robert Müller

Complex nonlinear models such as deep neural networks (DNNs) have become an important tool for image classification, speech recognition, natural language processing, and many other fields of application.

Classification General Classification +2

Feature Importance Measure for Non-linear Learning Algorithms

1 code implementation22 Nov 2016 Marina M. -C. Vidovic, Nico Görnitz, Klaus-Robert Müller, Marius Kloft

MFI is general and can be applied to any arbitrary learning machine (including kernel machines and deep learning).

Feature Importance

By-passing the Kohn-Sham equations with machine learning

no code implementations9 Sep 2016 Felix Brockherde, Leslie Vogt, Li Li, Mark E. Tuckerman, Kieron Burke, Klaus-Robert Müller

Last year, at least 30,000 scientific papers used the Kohn-Sham scheme of density functional theory to solve electronic structure problems in a wide variety of scientific fields, ranging from materials science to biochemistry to astrophysics.

Language Detection For Short Text Messages In Social Media

no code implementations30 Aug 2016 Ivana Balazevic, Mikio Braun, Klaus-Robert Müller

These approaches include the use of the well-known classifiers such as SVM and logistic regression, a dictionary based approach, and a probabilistic model based on modified Kneser-Ney smoothing.

Object Boundary Detection and Classification with Image-level Labels

no code implementations29 Jun 2016 Jing Yu Koh, Wojciech Samek, Klaus-Robert Müller, Alexander Binder

We propose a novel strategy for solving this task, when pixel-level annotations are not available, performing it in an almost zero-shot manner by relying on conventional whole image neural net classifiers that were trained using large bounding boxes.

Boundary Detection Classification +2

Identifying individual facial expressions by deconstructing a neural network

no code implementations23 Jun 2016 Farhad Arbabzadah, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

We further observe that the explanation method provides important insights into the nature of features of the base model, which allow one to assess the aptitude of the base model for a given transfer learning task.

Gender Prediction Transfer Learning

Explaining Predictions of Non-Linear Classifiers in NLP

1 code implementation WS 2016 Leila Arras, Franziska Horn, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

Layer-wise relevance propagation (LRP) is a recently proposed technique for explaining predictions of complex non-linear classifiers in terms of input variables.

General Classification Image Classification

Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers

no code implementations4 Apr 2016 Alexander Binder, Grégoire Montavon, Sebastian Bach, Klaus-Robert Müller, Wojciech Samek

Layer-wise relevance propagation is a framework that allows one to decompose the prediction of a deep neural network computed over a sample, e.g., an image, down to relevance scores for the individual input dimensions of the sample, such as subpixels of an image.
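One backward step of this decomposition can be sketched for a single linear layer with the LRP-epsilon rule: relevance flowing into each output is redistributed to the inputs in proportion to their contribution to that output. A minimal sketch (toy weights, one layer only; real LRP applies such rules layer by layer through the whole network):

```python
import numpy as np

def lrp_epsilon(activations, weights, R, eps=1e-6):
    """One LRP-epsilon backward step: redistribute relevance R from a layer's
    outputs to its inputs in proportion to each input's contribution."""
    z = activations @ weights                      # pre-activations, shape (out,)
    z = z + eps * np.where(z >= 0, 1.0, -1.0)      # stabilizer
    s = R / z
    return activations * (weights @ s)             # relevance per input

# Tiny linear layer: 3 inputs -> 2 outputs.
a = np.array([1.0, 2.0, 0.5])
W = np.array([[1.0, 0.0],
              [0.5, 1.0],
              [0.0, 2.0]])
out = a @ W                       # layer output
R_in = lrp_epsilon(a, W, out)     # start relevance = output itself
print(R_in)                       # → [1. 3. 1.]
print(R_in.sum(), out.sum())      # relevance is (approximately) conserved
```

The conservation property, total relevance unchanged across the step up to the stabilizer, is what makes the scores interpretable as a decomposition of the prediction.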

Controlling Explanatory Heatmap Resolution and Semantics via Decomposition Depth

no code implementations21 Mar 2016 Sebastian Bach, Alexander Binder, Klaus-Robert Müller, Wojciech Samek

We present an application of the Layer-wise Relevance Propagation (LRP) algorithm to state-of-the-art deep convolutional neural networks and Fisher Vector classifiers to compare the image perception and prediction strategies of both classifiers with the use of visualized heatmaps.

Explaining NonLinear Classification Decisions with Deep Taylor Decomposition

4 code implementations8 Dec 2015 Grégoire Montavon, Sebastian Bach, Alexander Binder, Wojciech Samek, Klaus-Robert Müller

Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks and network architectures.

Action Recognition Classification +2

Validity of time reversal for testing Granger causality

no code implementations25 Sep 2015 Irene Winkler, Danny Panknin, Daniel Bartz, Klaus-Robert Müller, Stefan Haufe

Inferring causal interactions from observed data is a challenging problem, especially in the presence of measurement noise.

Evaluating the visualization of what a Deep Neural Network has learned

1 code implementation21 Sep 2015 Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Bach, Klaus-Robert Müller

Our main result is that the recently proposed Layer-wise Relevance Propagation (LRP) algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method.

Classification General Classification +2

Wasserstein Training of Boltzmann Machines

no code implementations7 Jul 2015 Grégoire Montavon, Klaus-Robert Müller, Marco Cuturi

The Boltzmann machine provides a useful framework to learn highly complex, multimodal and multiscale data distributions that occur in the real world.

Denoising

Multi-Target Shrinkage

no code implementations5 Dec 2014 Daniel Bartz, Johannes Höhne, Klaus-Robert Müller

For the sample mean and the sample covariance as specific instances, we derive conditions under which the optimality of MTS is applicable.

Learning with Algebraic Invariances, and the Invariant Kernel Trick

no code implementations28 Nov 2014 Franz J. Király, Andreas Ziehe, Klaus-Robert Müller

When solving data analysis problems it is important to integrate prior knowledge and/or structural invariances.

Understanding Machine-learned Density Functionals

no code implementations4 Apr 2014 Li Li, John C. Snyder, Isabelle M. Pelaschier, Jessica Huang, Uma-Naresh Niranjan, Paul Duncan, Matthias Rupp, Klaus-Robert Müller, Kieron Burke

Kernel ridge regression is used to approximate the kinetic energy of non-interacting fermions in a one-dimensional box as a functional of their density.
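Kernel ridge regression itself fits in a few lines: solve (K + λI)α = y on the training kernel matrix, then predict with kernel evaluations against the training set. A generic sketch with an RBF kernel (sin(x) is a stand-in toy target, not the paper's density-to-energy map; gamma and lam are assumed hyperparameters):

```python
import numpy as np

def krr_fit_predict(X, y, X_test, gamma=1.0, lam=1e-3):
    """Kernel ridge regression with an RBF kernel:
    alpha = (K + lam I)^-1 y,  f(x*) = k(x*, X) @ alpha."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    K = rbf(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return rbf(X_test, X) @ alpha

# Toy functional: learn y = sin(x) from 30 samples on [0, pi].
X = np.linspace(0, np.pi, 30)[:, None]
y = np.sin(X[:, 0])
pred = krr_fit_predict(X, y, np.array([[np.pi / 2]]))
print(pred)   # ≈ [1.0]
```

The ridge term λ keeps the solve well-conditioned and controls the smoothness of the fitted functional.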

Generalizing Analytic Shrinkage for Arbitrary Covariance Structures

no code implementations NeurIPS 2013 Daniel Bartz, Klaus-Robert Müller

Analytic shrinkage is a statistical technique that offers a fast alternative to cross-validation for the regularization of covariance matrices and has appealing consistency properties.
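The shrinkage estimator in question combines the sample covariance with a structured target, Σ̂ = (1 − λ)S + λνI with ν the mean variance. The sketch below uses a fixed λ for illustration; analytic shrinkage's actual contribution is a closed-form, data-driven λ, which is not reproduced here:

```python
import numpy as np

def shrink_covariance(X, lam):
    """Shrink the sample covariance toward a scaled identity target:
    Sigma = (1 - lam) * S + lam * nu * I,  nu = mean variance."""
    S = np.cov(X, rowvar=False)
    nu = np.trace(S) / S.shape[0]
    return (1 - lam) * S + lam * nu * np.eye(S.shape[0])

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 50))      # few samples, many dimensions
S = np.cov(X, rowvar=False)        # rank-deficient, hence singular
Sigma = shrink_covariance(X, lam=0.5)
print(np.linalg.cond(S))           # huge: sample covariance is singular
print(np.linalg.cond(Sigma))      # moderate: shrinkage restores invertibility
```

With fewer samples than dimensions the sample covariance is singular; any positive shrinkage weight makes the estimate invertible.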

Optical Character Recognition

Robust Spatial Filtering with Beta Divergence

no code implementations NeurIPS 2013 Wojciech Samek, Duncan Blythe, Klaus-Robert Müller, Motoaki Kawanabe

The efficiency of Brain-Computer Interfaces (BCI) largely depends upon a reliable extraction of informative features from the high-dimensional EEG signal.

EEG

Multiple Kernel Learning for Brain-Computer Interfacing

no code implementations22 Oct 2013 Wojciech Samek, Alexander Binder, Klaus-Robert Müller

Combining information from different sources is a common way to improve classification accuracy in Brain-Computer Interfacing (BCI).

General Classification

Orbital-free Bond Breaking via Machine Learning

no code implementations7 Jun 2013 John C. Snyder, Matthias Rupp, Katja Hansen, Leo Blooston, Klaus-Robert Müller, Kieron Burke

Machine learning is used to approximate the kinetic energy of one-dimensional diatomics as a functional of the electron density.

Transferring Subspaces Between Subjects in Brain-Computer Interfacing

no code implementations18 Sep 2012 Wojciech Samek, Frank C. Meinecke, Klaus-Robert Müller

Compensating for changes between a subject's training and testing sessions in Brain Computer Interfacing (BCI) is challenging but of great importance for robust BCI operation.

EEG

Regression for sets of polynomial equations

no code implementations20 Oct 2011 Franz Johannes Király, Paul von Bünau, Jan Saputra Müller, Duncan Blythe, Frank Meinecke, Klaus-Robert Müller

We propose a method called ideal regression for approximating an arbitrary system of polynomial equations by a system of a particular type.

Layer-wise analysis of deep networks with Gaussian kernels

no code implementations NeurIPS 2010 Grégoire Montavon, Klaus-Robert Müller, Mikio L. Braun

Deep networks can potentially express a learning problem more efficiently than local learning machines.

Efficient and Accurate Lp-Norm Multiple Kernel Learning

no code implementations NeurIPS 2009 Marius Kloft, Ulf Brefeld, Pavel Laskov, Klaus-Robert Müller, Alexander Zien, Sören Sonnenburg

Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations and hence support interpretability.
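In MKL the combined kernel is a weighted sum K = Σₘ βₘKₘ, and the Lp-norm variant constrains β to a non-sparse Lp ball rather than the sparsity-inducing L1 ball. A minimal sketch of the combination step (the weights β are fixed inputs here; the actual method learns them jointly with the SVM, which is not shown):

```python
import numpy as np

def combine_kernels(kernels, beta, p=2.0):
    """Non-sparse (Lp-norm) multiple kernel combination: K = sum_m beta_m K_m,
    with beta rescaled to unit Lp norm."""
    beta = np.asarray(beta, dtype=float)
    beta = beta / np.linalg.norm(beta, ord=p)
    return sum(b * K for b, K in zip(beta, kernels))

K1 = np.eye(3)          # e.g., a narrow RBF kernel matrix
K2 = np.ones((3, 3))    # e.g., a very wide kernel matrix
K = combine_kernels([K1, K2], [1.0, 1.0], p=2.0)
print(K)   # each kernel weighted by 1/sqrt(2)
```

With p > 1 both kernels keep nonzero weight, which is the interpretability trade-off the abstract refers to.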

Subject independent EEG-based BCI decoding

no code implementations NeurIPS 2009 Siamac Fazli, Cristian Grozea, Marton Danoczy, Benjamin Blankertz, Florin Popescu, Klaus-Robert Müller

In the quest to make Brain Computer Interfacing (BCI) more usable, dry electrodes have emerged that get rid of the initial 30 minutes required for placing an electrode cap.

EEG

Playing Pinball with non-invasive BCI

no code implementations NeurIPS 2008 Matthias Krauledat, Konrad Grzeska, Max Sagebaum, Benjamin Blankertz, Carmen Vidaurre, Klaus-Robert Müller, Michael Schröder

Compared to invasive Brain-Computer Interfaces (BCI), non-invasive BCI systems based on Electroencephalogram (EEG) signals have not been applied successfully for complex control tasks.

EEG
