Search Results for author: Wojciech Samek

Found 80 papers, 36 papers with code

Evaluating deep transfer learning for whole-brain cognitive decoding

1 code implementation • 1 Nov 2021 • Armin W. Thomas, Ulman Lindenberger, Wojciech Samek, Klaus-Robert Müller

Here, we systematically evaluate TL for the application of DL models to the decoding of cognitive states (e.g., viewing images of faces or houses) from whole-brain functional Magnetic Resonance Imaging (fMRI) data.

Transfer Learning

ECQ$^{\text{x}}$: Explainability-Driven Quantization for Low-Bit and Sparse DNNs

no code implementations • 9 Sep 2021 • Daniel Becking, Maximilian Dreyer, Wojciech Samek, Karsten Müller, Sebastian Lapuschkin

The remarkable success of deep neural networks (DNNs) in various applications is accompanied by a significant increase in network parameters and arithmetic operations.


Reward-Based 1-bit Compressed Federated Distillation on Blockchain

no code implementations • 27 Jun 2021 • Leon Witt, Usama Zafar, KuoYeh Shen, Felix Sattler, Dan Li, Wojciech Samek

The recent advent of various forms of Federated Knowledge Distillation (FD) paves the way for a new generation of robust and communication-efficient Federated Learning (FL), where mere soft-labels are aggregated, rather than whole gradients of Deep Neural Networks (DNNs) as done in previous FL schemes.

Federated Learning Knowledge Distillation

On the Robustness of Pretraining and Self-Supervision for a Deep Learning-based Analysis of Diabetic Retinopathy

no code implementations • 25 Jun 2021 • Vignesh Srinivasan, Nils Strodthoff, Jackie Ma, Alexander Binder, Klaus-Robert Müller, Wojciech Samek

Our results indicate that models initialized from ImageNet pretraining report a significant increase in performance, generalization and robustness to image distortions.

Contrastive Learning Diabetic Retinopathy Grading

FedAUX: Leveraging Unlabeled Auxiliary Data in Federated Learning

1 code implementation • 4 Feb 2021 • Felix Sattler, Tim Korjakow, Roman Rischke, Wojciech Samek

Federated Distillation (FD) is a popular novel algorithmic paradigm for Federated Learning, which achieves training performance competitive with prior parameter-averaging-based methods while additionally allowing the clients to train different model architectures, by distilling the client predictions on an unlabeled auxiliary set of data into a student model.
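The core aggregation step of Federated Distillation can be sketched in a few lines: each client predicts class probabilities on the shared unlabeled auxiliary set, and the server averages these soft labels per sample to form distillation targets for the student. The function name and the 3-class toy data below are illustrative, not taken from FedAUX itself.

```python
# Minimal sketch of soft-label aggregation in Federated Distillation:
# the server averages the clients' class-probability predictions on a
# shared unlabeled set, instead of averaging model weights.

def aggregate_soft_labels(client_predictions):
    """Average per-sample class distributions across clients.

    client_predictions: one list per client, each containing probability
    vectors over the same public unlabeled samples.
    """
    num_clients = len(client_predictions)
    num_samples = len(client_predictions[0])
    aggregated = []
    for i in range(num_samples):
        num_classes = len(client_predictions[0][i])
        avg = [
            sum(client[i][c] for client in client_predictions) / num_clients
            for c in range(num_classes)
        ]
        aggregated.append(avg)
    return aggregated

# Two clients, two unlabeled samples, three classes.
client_a = [[0.8, 0.1, 0.1], [0.2, 0.6, 0.2]]
client_b = [[0.6, 0.3, 0.1], [0.0, 0.8, 0.2]]
soft_targets = aggregate_soft_labels([client_a, client_b])
print(soft_targets[0])  # ≈ [0.7, 0.2, 0.1]
```

The student model would then be trained to match `soft_targets` on the auxiliary set, which is far cheaper to communicate than full DNN gradients.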

Federated Learning Unsupervised Pre-training

FantastIC4: A Hardware-Software Co-Design Approach for Efficiently Running 4bit-Compact Multilayer Perceptrons

no code implementations • 17 Dec 2020 • Simon Wiedemann, Suhas Shivapakash, Pablo Wiedemann, Daniel Becking, Wojciech Samek, Friedel Gerfers, Thomas Wiegand

With the growing demand for deploying deep learning models to the "edge", it is paramount to develop techniques that allow executing state-of-the-art models within very tight and limited resource constraints.


Communication-Efficient Federated Distillation

no code implementations • 1 Dec 2020 • Felix Sattler, Arturo Marban, Roman Rischke, Wojciech Samek

Communication constraints are one of the major challenges preventing the widespread adoption of Federated Learning systems.

Federated Learning Image Classification +2

A Unifying Review of Deep and Shallow Anomaly Detection

no code implementations • 24 Sep 2020 • Lukas Ruff, Jacob R. Kauffmann, Robert A. Vandermeulen, Grégoire Montavon, Wojciech Samek, Marius Kloft, Thomas G. Dietterich, Klaus-Robert Müller

Deep learning approaches to anomaly detection have recently improved the state of the art in detection performance on complex datasets such as large collections of images or text.

Anomaly Detection

Langevin Cooling for Domain Translation

1 code implementation • 31 Aug 2020 • Vignesh Srinivasan, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima

Domain translation is the task of finding correspondence between two domains.


Explanation-Guided Training for Cross-Domain Few-Shot Classification

1 code implementation • 17 Jul 2020 • Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Yunqing Zhao, Ngai-Man Cheung, Alexander Binder

It leverages the explanation scores that existing explanation methods compute for the intermediate feature maps of FSC models when applied to their predictions.

Classification Cross-Domain Few-Shot +1

Deep Learning for ECG Analysis: Benchmarks and Insights from PTB-XL

2 code implementations • 28 Apr 2020 • Nils Strodthoff, Patrick Wagner, Tobias Schaeffter, Wojciech Samek

Electrocardiography is a very common, non-invasive diagnostic procedure and its interpretation is increasingly supported by automatic interpretation algorithms.

Gender Prediction Transfer Learning

Risk Estimation of SARS-CoV-2 Transmission from Bluetooth Low Energy Measurements

no code implementations • 22 Apr 2020 • Felix Sattler, Jackie Ma, Patrick Wagner, David Neumann, Markus Wenzel, Ralf Schäfer, Wojciech Samek, Klaus-Robert Müller, Thomas Wiegand

Digital contact tracing approaches based on Bluetooth low energy (BLE) have the potential to efficiently contain and delay outbreaks of infectious diseases such as the ongoing SARS-CoV-2 pandemic.

Understanding Integrated Gradients with SmoothTaylor for Deep Neural Network Attribution

1 code implementation • arXiv 2020 • Gary S. W. Goh, Sebastian Lapuschkin, Leander Weber, Wojciech Samek, Alexander Binder

From our experiments, we find that the SmoothTaylor approach, together with adaptive noising, is able to generate better-quality saliency maps with less noise and higher sensitivity to the relevant points in the input space compared to Integrated Gradients.

Image Classification Object Recognition

Learning Sparse & Ternary Neural Networks with Entropy-Constrained Trained Ternarization (EC2T)

2 code implementations • 2 Apr 2020 • Arturo Marban, Daniel Becking, Simon Wiedemann, Wojciech Samek

To address this problem, we propose Entropy-Constrained Trained Ternarization (EC2T), a general framework to create sparse and ternary neural networks which are efficient in terms of storage (e.g., at most two binary-masks and two full-precision values are required to save a weight matrix) and computation (e.g., MAC operations are reduced to a few accumulations plus two multiplications).
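The storage scheme described in the abstract can be made concrete: a sparse-ternary matrix whose entries come from {-w_n, 0, +w_p} is fully captured by two binary masks plus two full-precision scalars. The helper functions below are an illustrative sketch, not the paper's implementation.

```python
# Sketch of sparse-ternary weight storage: every weight is -w_n, 0, or
# +w_p, so the matrix decomposes into a positive mask, a negative mask,
# and two scalars.

def encode_ternary(matrix, w_p, w_n):
    """Split a ternary matrix into (positive mask, negative mask, w_p, w_n)."""
    pos_mask = [[1 if v > 0 else 0 for v in row] for row in matrix]
    neg_mask = [[1 if v < 0 else 0 for v in row] for row in matrix]
    return pos_mask, neg_mask, w_p, w_n

def decode_ternary(pos_mask, neg_mask, w_p, w_n):
    """Reconstruct the full matrix from the two masks and two scalars."""
    return [
        [w_p * p - w_n * n for p, n in zip(prow, nrow)]
        for prow, nrow in zip(pos_mask, neg_mask)
    ]

W = [[0.3, 0.0, -0.5],
     [0.0, 0.3, 0.0]]
packed = encode_ternary(W, w_p=0.3, w_n=0.5)
assert decode_ternary(*packed) == W
```

A matrix-vector product against this representation similarly reduces to two masked accumulations followed by two multiplications, matching the computation claim in the abstract.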

Image Classification

Interval Neural Networks as Instability Detectors for Image Reconstructions

1 code implementation • 27 Mar 2020 • Jan Macdonald, Maximilian März, Luis Oala, Wojciech Samek

This work investigates the detection of instabilities that may occur when utilizing deep learning models for image reconstruction tasks.

Image Reconstruction

Interval Neural Networks: Uncertainty Scores

1 code implementation • 25 Mar 2020 • Luis Oala, Cosmas Heiß, Jan Macdonald, Maximilian März, Wojciech Samek, Gitta Kutyniok

We propose a fast, non-Bayesian method for producing uncertainty scores in the output of pre-trained deep neural networks (DNNs) using a data-driven interval propagating network.
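The basic mechanism, carrying each activation as a [lower, upper] interval so that the output interval width acts as an uncertainty score, can be sketched for a single linear + ReLU layer. The weights and intervals below are illustrative; the paper's interval network additionally learns its bounds from data rather than using plain interval arithmetic.

```python
# Sketch of interval propagation through y = ReLU(Wx + b): the sign of
# each weight decides which interval endpoint produces the lower/upper
# output bound.

def interval_linear(lo, hi, weights, bias):
    """Propagate input intervals [lo, hi] through a linear layer."""
    out_lo, out_hi = [], []
    for w_row, b in zip(weights, bias):
        l = b + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(w_row))
        u = b + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(w_row))
        out_lo.append(l)
        out_hi.append(u)
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it applies endpoint-wise."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

W = [[1.0, -2.0]]   # one output neuron, two inputs
b = [0.5]
lo, hi = interval_linear([0.0, 0.0], [1.0, 1.0], W, b)
lo, hi = interval_relu(lo, hi)
print(lo, hi)  # [0.0] [1.5]
```

The width `hi - lo` at the output is the kind of per-pixel uncertainty score the method produces for image reconstructions.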

Image Reconstruction

Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

no code implementations • 17 Mar 2020 • Wojciech Samek, Grégoire Montavon, Sebastian Lapuschkin, Christopher J. Anders, Klaus-Robert Müller

With the broader and highly successful usage of machine learning in industry and the sciences, there has been a growing demand for Explainable AI.

Interpretable Machine Learning

Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI

2 code implementations • 16 Mar 2020 • Leila Arras, Ahmed Osman, Wojciech Samek

The rise of deep learning in today's applications has entailed an increasing need to explain a model's decisions beyond prediction performance, in order to foster trust and accountability.

Feature Importance Object Localization +2

Trends and Advancements in Deep Neural Network Communication

no code implementations • 6 Mar 2020 • Felix Sattler, Thomas Wiegand, Wojciech Samek

Due to their great performance and scalability properties, neural networks have become ubiquitous building blocks of many applications.

Explain and Improve: LRP-Inference Fine-Tuning for Image Captioning Models

1 code implementation • 4 Jan 2020 • Jiamei Sun, Sebastian Lapuschkin, Wojciech Samek, Alexander Binder

We develop variants of layer-wise relevance propagation (LRP) and gradient-based explanation methods, tailored to image captioning models with attention mechanisms.

Fine-tuning Image Captioning

Finding and Removing Clever Hans: Using Explanation Methods to Debug and Improve Deep Models

2 code implementations • 22 Dec 2019 • Christopher J. Anders, Leander Weber, David Neumann, Wojciech Samek, Klaus-Robert Müller, Sebastian Lapuschkin

Based on a recent technique - Spectral Relevance Analysis - we propose the following technical contributions and resulting findings: (a) a scalable quantification of artifactual and poisoned classes where the machine learning models under study exhibit CH behavior, (b) several approaches denoted as Class Artifact Compensation (ClArC), which are able to effectively and significantly reduce a model's CH behavior.


Pruning by Explaining: A Novel Criterion for Deep Neural Network Pruning

1 code implementation • 18 Dec 2019 • Seul-Ki Yeom, Philipp Seegerer, Sebastian Lapuschkin, Alexander Binder, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

The success of convolutional neural networks (CNNs) in various applications is accompanied by a significant increase in computation and parameter storage costs.

Fine-tuning Model Compression +2

Asymptotically unbiased estimation of physical observables with neural samplers

no code implementations • 29 Oct 2019 • Kim A. Nicoli, Shinichi Nakajima, Nils Strodthoff, Wojciech Samek, Klaus-Robert Müller, Pan Kessel

We propose a general framework for the estimation of observables with generative neural samplers focusing on modern deep generative neural networks that provide an exact sampling probability.
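The abstract's key ingredient, an exact sampling probability q(x), is what makes unbiased importance reweighting possible: an observable O is estimated as a weighted average with weights w = p(x)/q(x), correcting for the mismatch between the sampler and the target distribution. The toy discrete distributions below are illustrative, not from the paper.

```python
# Sketch of self-normalized importance sampling: samples drawn from an
# imperfect sampler q are reweighted by p/q so the estimate targets p.

def importance_estimate(samples, observable, p, q):
    """Weighted average of observable(x) with weights p(x)/q(x)."""
    weights = [p[x] / q[x] for x in samples]
    return sum(w * observable(x) for w, x in zip(weights, samples)) / sum(weights)

p = {0: 0.5, 1: 0.5}          # target distribution (true mean of x is 0.5)
q = {0: 0.8, 1: 0.2}          # biased sampler
samples = [0] * 8 + [1] * 2   # an empirical draw matching q exactly
print(importance_estimate(samples, lambda x: x, p, q))  # ≈ 0.5
```

The naive sample mean would give 0.2; the reweighted estimate recovers the correct expectation under p, which is the sense in which such estimators are (asymptotically) unbiased.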

Clustered Federated Learning: Model-Agnostic Distributed Multi-Task Optimization under Privacy Constraints

1 code implementation • 4 Oct 2019 • Felix Sattler, Klaus-Robert Müller, Wojciech Samek

Federated Learning (FL) is currently the most widely adopted framework for collaborative training of (deep) machine learning models under privacy constraints.

Federated Learning Multi-Task Learning

Explaining and Interpreting LSTMs

no code implementations • 25 Sep 2019 • Leila Arras, Jose A. Arjona-Medina, Michael Widrich, Grégoire Montavon, Michael Gillhofer, Klaus-Robert Müller, Sepp Hochreiter, Wojciech Samek

While neural networks have acted as a strong unifying force in the design of modern AI systems, the neural network architectures themselves remain highly heterogeneous due to the variety of tasks to be solved.

DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks

1 code implementation • 27 Jul 2019 • Simon Wiedemann, Heiner Kirchoffer, Stefan Matlage, Paul Haase, Arturo Marban, Talmaj Marinc, David Neumann, Tung Nguyen, Ahmed Osman, Detlev Marpe, Heiko Schwarz, Thomas Wiegand, Wojciech Samek

The field of video compression has developed some of the most sophisticated and efficient compression algorithms known in the literature, enabling very high compressibility for little loss of information.

Neural Network Compression Quantization +1

Deep Transfer Learning For Whole-Brain fMRI Analyses

no code implementations • 2 Jul 2019 • Armin W. Thomas, Klaus-Robert Müller, Wojciech Samek

Even further, the pre-trained DL model variant already correctly decodes 67.51% of the cognitive states from a test dataset of 100 individuals when fine-tuned on a dataset the size of only three subjects.

Transfer Learning

From Clustering to Cluster Explanations via Neural Networks

no code implementations • 18 Jun 2019 • Jacob Kauffmann, Malte Esders, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller

A wealth of algorithms have been developed to extract natural cluster structure in data.

Achieving Generalizable Robustness of Deep Neural Networks by Stability Training

no code implementations • 3 Jun 2019 • Jan Laermann, Wojciech Samek, Nils Strodthoff

We study the recently introduced stability training as a general-purpose method to increase the robustness of deep neural networks against input perturbations.

Data Augmentation General Classification +1

Evaluating Recurrent Neural Network Explanations

1 code implementation • WS 2019 • Leila Arras, Ahmed Osman, Klaus-Robert Müller, Wojciech Samek

Recently, several methods have been proposed to explain the predictions of recurrent neural networks (RNNs), in particular of LSTMs.

Sentiment Analysis

Black-Box Decision based Adversarial Attack with Symmetric $α$-stable Distribution

no code implementations • 11 Apr 2019 • Vignesh Srinivasan, Ercan E. Kuruoglu, Klaus-Robert Müller, Wojciech Samek, Shinichi Nakajima

Many existing methods employ Gaussian random variables for exploring the data space to find the most adversarial (for attacking) or least adversarial (for defense) point.

Adversarial Attack

Comment on "Solving Statistical Mechanics Using VANs": Introducing saVANt - VANs Enhanced by Importance and MCMC Sampling

no code implementations • 26 Mar 2019 • Kim Nicoli, Pan Kessel, Nils Strodthoff, Wojciech Samek, Klaus-Robert Müller, Shinichi Nakajima

In this comment on "Solving Statistical Mechanics Using Variational Autoregressive Networks" by Wu et al., we propose a subtle yet powerful modification of their approach.

Robust and Communication-Efficient Federated Learning from Non-IID Data

1 code implementation • 7 Mar 2019 • Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

Federated Learning allows multiple parties to jointly train a deep learning model on their combined data, without any of the participants having to reveal their local data to a centralized server.
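This joint training works by exchanging model updates rather than data: in the classical FedAvg-style step, the server replaces the global model with a weighted average of the clients' parameters, weighted by local dataset size. The flat parameter vectors below are a minimal illustrative sketch; the paper's actual contribution, sparse ternary compression of these updates, is omitted.

```python
# Sketch of the FedAvg-style aggregation step: a dataset-size-weighted
# average of client parameter vectors, with no raw data leaving a client.

def federated_average(client_params, client_sizes):
    """Combine client parameter vectors into one global vector."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(p[k] * n for p, n in zip(client_params, client_sizes)) / total
        for k in range(dim)
    ]

clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]  # the second client holds three times more data
print(federated_average(clients, sizes))  # [2.5, 3.5]
```

Since each round transmits full parameter vectors, the communication cost of this step is exactly what compression schemes like the one in this paper aim to reduce.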

Federated Learning

Unmasking Clever Hans Predictors and Assessing What Machines Really Learn

1 code implementation • 26 Feb 2019 • Sebastian Lapuschkin, Stephan Wäldchen, Alexander Binder, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller

Current learning machines have successfully solved hard application problems, reaching high accuracy and displaying seemingly "intelligent" behavior.

Multi-Kernel Prediction Networks for Denoising of Burst Images

2 code implementations • 5 Feb 2019 • Talmaj Marinč, Vignesh Srinivasan, Serhan Gül, Cornelius Hellge, Wojciech Samek

The advantages of our method are twofold: (a) the different-sized kernels help extract different information from the image, which results in better reconstruction, and (b) kernel fusion ensures the extracted information is retained while maintaining computational efficiency.

Image Denoising

Entropy-Constrained Training of Deep Neural Networks

no code implementations • 18 Dec 2018 • Simon Wiedemann, Arturo Marban, Klaus-Robert Müller, Wojciech Samek

We propose a general framework for neural network compression that is motivated by the Minimum Description Length (MDL) principle.

Neural Network Compression

Analyzing Neuroimaging Data Through Recurrent Deep Learning Models

1 code implementation • 23 Oct 2018 • Armin W. Thomas, Hauke R. Heekeren, Klaus-Robert Müller, Wojciech Samek

We further demonstrate DeepLight's ability to study the fine-grained temporo-spatial variability of brain activity over sequences of single fMRI samples.

Compact and Computationally Efficient Representations of Deep Neural Networks

no code implementations • NIPS Workshop CDNNRIA 2018 • Simon Wiedemann, Klaus-Robert Mueller, Wojciech Samek

However, most of these common matrix storage formats make strong statistical assumptions about the distribution of the elements in the matrix, and can therefore not efficiently represent the entire set of matrices that exhibit low entropy statistics (thus, the entire set of compressed neural network weight matrices).

iNNvestigate neural networks!

1 code implementation • 13 Aug 2018 • Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans

The presented library iNNvestigate addresses this by providing a common interface and out-of-the-box implementation for many analysis methods, including the reference implementation for PatternNet and PatternAttribution as well as for LRP methods.

Interpretable Machine Learning

Explaining the Unique Nature of Individual Gait Patterns with Deep Learning

1 code implementation • 13 Aug 2018 • Fabian Horst, Sebastian Lapuschkin, Wojciech Samek, Klaus-Robert Müller, Wolfgang I. Schöllhorn

Machine learning (ML) techniques such as (deep) artificial neural networks (DNN) are solving very successfully a plethora of tasks and provide new predictive models for complex physical, chemical, biological and social systems.

Enhanced Machine Learning Techniques for Early HARQ Feedback Prediction in 5G

no code implementations • 27 Jul 2018 • Nils Strodthoff, Barış Göktepe, Thomas Schierl, Cornelius Hellge, Wojciech Samek

We investigate Early Hybrid Automatic Repeat reQuest (E-HARQ) feedback schemes enhanced by machine learning techniques as a path towards ultra-reliable and low-latency communication (URLLC).

General Classification

Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals

2 code implementations • 9 Jul 2018 • Sören Becker, Marcel Ackermann, Sebastian Lapuschkin, Klaus-Robert Müller, Wojciech Samek

Interpretability of deep neural networks is a recently emerging area of machine learning research targeting a better understanding of how models perform feature selection and derive their classification decisions.

Audio Classification Decision Making +2

Understanding Patch-Based Learning by Explaining Predictions

no code implementations • 11 Jun 2018 • Christopher Anders, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller

We apply the deep Taylor / LRP technique to understand the deep network's classification decisions, and identify a "border effect": a tendency of the classifier to look mainly at the bordering frames of the input.

General Classification

Compact and Computationally Efficient Representation of Deep Neural Networks

no code implementations • 27 May 2018 • Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

These new matrix formats have the novel property that their memory and algorithmic complexity are implicitly bounded by the entropy of the matrix, consequently implying that they are guaranteed to become more efficient as the entropy of the matrix is being reduced.
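The entropy bound mentioned above can be made concrete: the empirical entropy of a weight matrix's value distribution lower-bounds the bits per element that any lossless format needs, so low-entropy (e.g., heavily quantized or sparse) matrices are guaranteed to compress well. The helper below is a quick illustrative computation, not the paper's storage format.

```python
# Empirical Shannon entropy of a weight matrix, in bits per element.
import math
from collections import Counter

def empirical_entropy_bits(matrix):
    """H = -sum_v p(v) log2 p(v) over the matrix's distinct values."""
    values = [v for row in matrix for v in row]
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

W = [[0.0, 0.0, 0.5],
     [0.0, 0.5, 0.0]]   # only two symbols: 0.0 (freq 4/6) and 0.5 (freq 2/6)
print(round(empirical_entropy_bits(W), 3))  # 0.918
```

Here fewer than one bit per weight suffices in principle, versus 32 bits for a dense float32 layout, which is the gap entropy-aware matrix formats exploit.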

A Recurrent Convolutional Neural Network Approach for Sensorless Force Estimation in Robotic Surgery

no code implementations • 22 May 2018 • Arturo Marban, Vignesh Srinivasan, Wojciech Samek, Josep Fernández, Alicia Casals

The results suggest that the force estimation quality is better when both, the tool data and video sequences, are processed by the neural network model.

Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication

no code implementations • 22 May 2018 • Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek

A major issue in distributed training is the limited communication bandwidth between contributing nodes or prohibitive communication cost in general.


Dual Recurrent Attention Units for Visual Question Answering

1 code implementation • 1 Feb 2018 • Ahmed Osman, Wojciech Samek

First, we introduce a baseline VQA model with visual attention and test the performance difference between convolutional and recurrent attention on the VQA 2.0 dataset.

Question Answering Visual Question Answering

The Convergence of Machine Learning and Communications

no code implementations • 28 Aug 2017 • Wojciech Samek, Slawomir Stanczak, Thomas Wiegand

The areas of machine learning and communication technology are converging.

Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models

no code implementations • 28 Aug 2017 • Wojciech Samek, Thomas Wiegand, Klaus-Robert Müller

With the availability of large databases and recent improvements in deep learning methodology, the performance of AI systems is reaching or even exceeding the human level on an increasing number of complex tasks.

Explainable artificial intelligence General Classification +2

Discovering topics in text datasets by visualizing relevant words

1 code implementation • 18 Jul 2017 • Franziska Horn, Leila Arras, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

When dealing with large collections of documents, it is imperative to quickly get an overview of the texts' contents.

Exploring text datasets by visualizing relevant words

2 code implementations • 17 Jul 2017 • Franziska Horn, Leila Arras, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

When working with a new dataset, it is important to first explore and familiarize oneself with it, before applying any advanced machine learning algorithms.

Methods for Interpreting and Understanding Deep Neural Networks

no code implementations • 24 Jun 2017 • Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller

This paper provides an entry point to the problem of interpreting a deep neural network model and explaining its predictions.

Explaining Recurrent Neural Network Predictions in Sentiment Analysis

1 code implementation • WS 2017 • Leila Arras, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

Recently, a technique called Layer-wise Relevance Propagation (LRP) was shown to deliver insightful explanations in the form of input space relevances for understanding feed-forward neural network classification decisions.

General Classification Interpretable Machine Learning +1

Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation

no code implementations • 24 Nov 2016 • Wojciech Samek, Grégoire Montavon, Alexander Binder, Sebastian Lapuschkin, Klaus-Robert Müller

Complex nonlinear models such as deep neural networks (DNNs) have become an important tool for image classification, speech recognition, natural language processing, and many other fields of application.

Classification General Classification +2

Object Boundary Detection and Classification with Image-level Labels

no code implementations • 29 Jun 2016 • Jing Yu Koh, Wojciech Samek, Klaus-Robert Müller, Alexander Binder

We propose a novel strategy for solving this task, when pixel-level annotations are not available, performing it in an almost zero-shot manner by relying on conventional whole image neural net classifiers that were trained using large bounding boxes.

Boundary Detection Classification +2

Identifying individual facial expressions by deconstructing a neural network

no code implementations • 23 Jun 2016 • Farhad Arbabzadah, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

We further observe that the explanation method provides important insights into the nature of features of the base model, which allow one to assess the aptitude of the base model for a given transfer learning task.

Gender Prediction Transfer Learning

Explaining Predictions of Non-Linear Classifiers in NLP

1 code implementation • WS 2016 • Leila Arras, Franziska Horn, Grégoire Montavon, Klaus-Robert Müller, Wojciech Samek

Layer-wise relevance propagation (LRP) is a recently proposed technique for explaining predictions of complex non-linear classifiers in terms of input variables.

General Classification Image Classification

Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers

no code implementations • 4 Apr 2016 • Alexander Binder, Grégoire Montavon, Sebastian Bach, Klaus-Robert Müller, Wojciech Samek

Layer-wise relevance propagation is a framework that allows one to decompose the prediction of a deep neural network computed over a sample, e.g., an image, down to relevance scores for the sample's individual input dimensions, such as the subpixels of an image.
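A single LRP backward step through a linear layer can be sketched with the commonly used epsilon rule: the relevance assigned to an output neuron is redistributed to its inputs in proportion to their contributions z_ij = x_j * w_ij. This is a generic illustrative sketch, not the renormalization-layer propagation rule this paper introduces.

```python
# Minimal LRP epsilon-rule step for a linear layer y = Wx: output
# relevance flows back to inputs proportionally to their contributions.

def lrp_epsilon(x, weights, relevance_out, eps=1e-6):
    """Redistribute per-output relevance onto the layer's inputs."""
    relevance_in = [0.0] * len(x)
    for i, w_row in enumerate(weights):
        z = [x[j] * w_row[j] for j in range(len(x))]   # contributions z_ij
        total = sum(z)
        denom = total + eps * (1 if total >= 0 else -1)  # stabilizer
        for j in range(len(x)):
            relevance_in[j] += z[j] / denom * relevance_out[i]
    return relevance_in

x = [1.0, 2.0]
W = [[0.5, 0.25]]   # one output neuron: y = 0.5*1 + 0.25*2 = 1.0
R = lrp_epsilon(x, W, relevance_out=[1.0])
print([round(r, 3) for r in R])  # [0.5, 0.5]
```

Note that up to the small stabilizer, the input relevances sum to the output relevance, which is the conservation property LRP heatmaps rely on.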

Controlling Explanatory Heatmap Resolution and Semantics via Decomposition Depth

no code implementations • 21 Mar 2016 • Sebastian Bach, Alexander Binder, Klaus-Robert Müller, Wojciech Samek

We present an application of the Layer-wise Relevance Propagation (LRP) algorithm to state-of-the-art deep convolutional neural networks and Fisher Vector classifiers to compare the image perception and prediction strategies of both classifiers with the use of visualized heatmaps.

Explaining NonLinear Classification Decisions with Deep Taylor Decomposition

4 code implementations • 8 Dec 2015 • Grégoire Montavon, Sebastian Bach, Alexander Binder, Wojciech Samek, Klaus-Robert Müller

Although our focus is on image classification, the method is applicable to a broad set of input data, learning tasks and network architectures.

Action Recognition Classification +2

Evaluating the visualization of what a Deep Neural Network has learned

1 code implementation • 21 Sep 2015 • Wojciech Samek, Alexander Binder, Grégoire Montavon, Sebastian Bach, Klaus-Robert Müller

Our main result is that the recently proposed Layer-wise Relevance Propagation (LRP) algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method.

Classification General Classification +2

Robust Spatial Filtering with Beta Divergence

no code implementations • NeurIPS 2013 • Wojciech Samek, Duncan Blythe, Klaus-Robert Müller, Motoaki Kawanabe

The efficiency of Brain-Computer Interfaces (BCI) largely depends upon a reliable extraction of informative features from the high-dimensional EEG signal.


Multiple Kernel Learning for Brain-Computer Interfacing

no code implementations • 22 Oct 2013 • Wojciech Samek, Alexander Binder, Klaus-Robert Müller

Combining information from different sources is a common way to improve classification accuracy in Brain-Computer Interfacing (BCI).

General Classification

Transferring Subspaces Between Subjects in Brain-Computer Interfacing

no code implementations • 18 Sep 2012 • Wojciech Samek, Frank C. Meinecke, Klaus-Robert Müller

Compensating for changes between a subject's training and testing sessions in Brain-Computer Interfacing (BCI) is challenging but of great importance for robust BCI operation.

