Search Results for author: Matthias Bethge

Found 82 papers, 44 papers with code

Disentanglement and Generalization Under Correlation Shifts

no code implementations29 Dec 2021 Christina M. Funke, Paul Vicol, Kuan-Chieh Wang, Matthias Kümmerer, Richard Zemel, Matthias Bethge

However, often such correlations are not robust (e.g., they may change between domains, datasets, or applications) and we wish to avoid exploiting them.

Disentanglement

The Geometry of Adversarial Subspaces

no code implementations29 Sep 2021 Dylan M. Paiton, David Schultheiss, Matthias Kümmerer, Zac Cranko, Matthias Bethge

We undertake analysis to characterize the geometry of the boundary, which is more curved within the adversarial subspace than within a random subspace of equal dimensionality.

A Broad Dataset is All You Need for One-Shot Object Detection

no code implementations29 Sep 2021 Claudio Michaelis, Matthias Bethge, Alexander S Ecker

We here show that this generalization gap can be nearly closed by increasing the number of object categories used during training.

Few-Shot Learning Metric Learning +2

If your data distribution shifts, use self-learning

no code implementations29 Sep 2021 Evgenia Rusak, Steffen Schneider, George Pachitariu, Luisa Eck, Peter Vincent Gehler, Oliver Bringmann, Wieland Brendel, Matthias Bethge

In this paper, we demonstrate that self-learning techniques like entropy minimization and pseudo-labeling are simple yet effective for increasing test performance under domain shifts.

Self-Learning
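
As a concrete illustration, the two self-learning losses named above can be sketched in plain NumPy (function names and the smoothing constant are ours, not the paper's):

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def entropy_minimization_loss(logits):
    """Mean entropy of the model's own predictions on unlabeled data."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())

def pseudo_label_loss(logits):
    """Cross-entropy against the model's own hard argmax labels."""
    p = softmax(logits)
    labels = p.argmax(axis=1)
    return float(-np.log(p[np.arange(len(labels)), labels] + 1e-12).mean())
```

Both losses go to zero as the model becomes confident on the unlabeled data, which is the adaptation signal exploited under domain shift.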

Visual Representation Learning Does Not Generalize Strongly Within the Same Domain

1 code implementation ICLR 2022 Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel

An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.

Representation Learning

How Well do Feature Visualizations Support Causal Understanding of CNN Activations?

1 code implementation NeurIPS 2021 Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel

A precise understanding of why units in an artificial network respond to certain stimuli would constitute a big step towards explainable artificial intelligence.

Explainable artificial intelligence

Partial success in closing the gap between human and machine vision

1 code implementation NeurIPS 2021 Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, Wieland Brendel

The longstanding distortion robustness gap between humans and CNNs is closing, with the best models now exceeding human feedforward performance on most of the investigated OOD datasets.

Image Classification

DeepGaze IIE: Calibrated prediction in and out-of-domain for state-of-the-art saliency modeling

1 code implementation ICCV 2021 Akis Linardos, Matthias Kümmerer, Ori Press, Matthias Bethge

Since 2014, transfer learning has been the key driver of improvements in spatial saliency prediction; however, progress has stagnated in the last 3-5 years.

Saliency Prediction Transfer Learning

Adapting ImageNet-scale models to complex distribution shifts with self-learning

1 code implementation27 Apr 2021 Evgenia Rusak, Steffen Schneider, Peter Gehler, Oliver Bringmann, Wieland Brendel, Matthias Bethge

We therefore re-purpose the dataset from the Visual Domain Adaptation Challenge 2019 and use a subset of it as a new robustness benchmark (ImageNet-D), which proves to be a more challenging dataset for all current state-of-the-art models (58.2% error), to guide future research efforts at the intersection of robustness and domain adaptation on ImageNet scale.

 Ranked #1 on Unsupervised Domain Adaptation on ImageNet-C (using extra training data)

Robust classification Self-Learning +1

State-of-the-Art in Human Scanpath Prediction

no code implementations24 Feb 2021 Matthias Kümmerer, Matthias Bethge

The last years have seen a surge in models predicting the scanpaths of fixations made by humans when viewing images.

Scanpath prediction

Exemplary natural images explain CNN activations better than synthetic feature visualizations

no code implementations ICLR 2021 Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel

Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images (Olah et al., 2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map.

Informativeness

System Identification with Biophysical Constraints: A Circuit Model of the Inner Retina

1 code implementation NeurIPS 2020 Cornelius Schröder, David Klindt, Sarah Strauss, Katrin Franke, Matthias Bethge, Thomas Euler, Philipp Berens

Here, we present a computational model of temporal processing in the inner retina, including inhibitory feedback circuits and realistic synaptic release mechanisms.

Closing the Generalization Gap in One-Shot Object Detection

no code implementations9 Nov 2020 Claudio Michaelis, Matthias Bethge, Alexander S. Ecker

Despite substantial progress in object detection and few-shot learning, detecting objects based on a single example - one-shot object detection - remains a challenge: trained models exhibit a substantial generalization gap, where object categories used during training are detected much more reliably than novel ones.

Few-Shot Learning Metric Learning +2

Natural Images are More Informative for Interpreting CNN Activations than State-of-the-Art Synthetic Feature Visualizations

no code implementations NeurIPS Workshop SVRHM 2020 Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel

Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images by Olah et al. [45] with a simple baseline visualization, namely natural images that also strongly activate a specific feature map.

Informativeness

EagerPy: Writing Code That Works Natively with PyTorch, TensorFlow, JAX, and NumPy

1 code implementation10 Aug 2020 Jonas Rauber, Matthias Bethge, Wieland Brendel

EagerPy is a Python framework that lets you write code that automatically works natively with PyTorch, TensorFlow, JAX, and NumPy.

Fast Differentiable Clipping-Aware Normalization and Rescaling

1 code implementation15 Jul 2020 Jonas Rauber, Matthias Bethge

When the rescaled perturbation $\eta \vec{\delta}$ is added to a starting point $\vec{x} \in D$ (where $D$ is the data domain, e.g. $D = [0, 1]^n$), the resulting vector $\vec{v} = \vec{x} + \eta \vec{\delta}$ will in general not be in $D$.
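
The paper derives an exact, differentiable solution for the rescaling factor $\eta$; as a rough illustration of the problem it solves, here is a simple bisection stand-in in NumPy (not the paper's algorithm) that finds $\eta$ with $\|\mathrm{clip}(\vec{x} + \eta \vec{\delta}) - \vec{x}\|_2 = \epsilon$:

```python
import numpy as np

def clipped_norm(x, delta, eta):
    """L2 norm actually realized after clipping x + eta*delta to [0, 1]."""
    v = np.clip(x + eta * delta, 0.0, 1.0)
    return np.linalg.norm(v - x)

def clipping_aware_eta(x, delta, eps, iters=60):
    """Bisection stand-in for the paper's exact algorithm: the clipped
    norm is monotone in eta, so we can search for the eta that hits eps.
    Assumes eps is below the maximum achievable clipped norm."""
    lo, hi = 0.0, 1.0
    # grow the bracket until the clipped norm reaches eps
    while clipped_norm(x, delta, hi) < eps and hi < 1e6:
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if clipped_norm(x, delta, mid) < eps:
            lo = mid
        else:
            hi = mid
    return hi
```

Naively scaling $\vec{\delta}$ to norm $\epsilon$ and then clipping would yield a smaller effective perturbation; searching on the clipped norm avoids that.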

Benchmarking Unsupervised Object Representations for Video Sequences

1 code implementation12 Jun 2020 Marissa A. Weis, Kashyap Chitta, Yash Sharma, Wieland Brendel, Matthias Bethge, Andreas Geiger, Alexander S. Ecker

Perceiving the world in terms of objects and tracking them through time is a crucial prerequisite for reasoning and scene understanding.

Multi-Object Tracking object-detection +2

Rotation-invariant clustering of neuronal responses in primary visual cortex

no code implementations ICLR 2020 Ivan Ustyuzhaninov, Santiago A. Cadena, Emmanouil Froudarakis, Paul G. Fahey, Edgar Y. Walker, Erick Cobos, Jacob Reimer, Fabian H. Sinz, Andreas S. Tolias, Matthias Bethge, Alexander S. Ecker

Similar to a convolutional neural network (CNN), the mammalian retina encodes visual information into several dozen nonlinear feature maps, each formed by one ganglion cell type that tiles the visual space in an approximately shift-equivariant manner.

Towards causal generative scene models via competition of experts

no code implementations27 Apr 2020 Julius von Kügelgen, Ivan Ustyuzhaninov, Peter Gehler, Matthias Bethge, Bernhard Schölkopf

Learning how to model complex scenes in a modular way with recombinable components is a pre-requisite for higher-order reasoning and acting in the physical world.

Inductive Bias

Shortcut Learning in Deep Neural Networks

2 code implementations16 Apr 2020 Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann

Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence.

A simple way to make neural networks robust against diverse image corruptions

3 code implementations ECCV 2020 Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel

The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow.

How well do deep neural networks trained on object recognition characterize the mouse visual system?

no code implementations NeurIPS Workshop Neuro_AI 2019 Santiago A. Cadena, Fabian H. Sinz, Taliah Muhammad, Emmanouil Froudarakis, Erick Cobos, Edgar Y. Walker, Jake Reimer, Matthias Bethge, Andreas Tolias, Alexander S. Ecker

Recent work on modeling neural responses in the primate visual system has benefited from deep neural networks trained on large-scale object recognition, and found a hierarchical correspondence between layers of the artificial neural network and brain areas along the ventral visual stream.

Object Recognition

Accurate, reliable and fast robustness evaluation

1 code implementation NeurIPS 2019 Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge

We here develop a new set of gradient-based adversarial attacks which (a) are more reliable in the face of gradient-masking than other gradient-based attacks, (b) perform better and are more query efficient than current state-of-the-art gradient-based attacks, (c) can be flexibly adapted to a wide range of adversarial criteria and (d) require virtually no hyperparameter tuning.

Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet

4 code implementations ICLR 2019 Wieland Brendel, Matthias Bethge

Deep Neural Networks (DNNs) excel on many complex perceptual tasks but it has proven notoriously difficult to understand how they reach their decisions.

One-Shot Instance Segmentation

3 code implementations28 Nov 2018 Claudio Michaelis, Ivan Ustyuzhaninov, Matthias Bethge, Alexander S. Ecker

We demonstrate empirical results on MS COCO highlighting challenges of the one-shot setting: while transferring knowledge about instance segmentation to novel object categories works very well, targeting the detection network towards the reference category appears to be more difficult.

Few-Shot Object Detection One-Shot Instance Segmentation +2

Excessive Invariance Causes Adversarial Vulnerability

no code implementations ICLR 2019 Jörn-Henrik Jacobsen, Jens Behrmann, Richard Zemel, Matthias Bethge

Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs.

A rotation-equivariant convolutional neural network model of primary visual cortex

1 code implementation ICLR 2019 Alexander S. Ecker, Fabian H. Sinz, Emmanouil Froudarakis, Paul G. Fahey, Santiago A. Cadena, Edgar Y. Walker, Erick Cobos, Jacob Reimer, Andreas S. Tolias, Matthias Bethge

We present a framework to identify common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations.

Generalisation in humans and deep neural networks

2 code implementations NeurIPS 2018 Robert Geirhos, Carlos R. Medina Temme, Jonas Rauber, Heiko H. Schütt, Matthias Bethge, Felix A. Wichmann

We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations.

Object Recognition

Adversarial Vision Challenge

2 code implementations6 Aug 2018 Wieland Brendel, Jonas Rauber, Alexey Kurakin, Nicolas Papernot, Behar Veliqi, Marcel Salathé, Sharada P. Mohanty, Matthias Bethge

The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks.

One-shot Texture Segmentation

4 code implementations7 Jul 2018 Ivan Ustyuzhaninov, Claudio Michaelis, Wieland Brendel, Matthias Bethge

We introduce one-shot texture segmentation: the task of segmenting an input image containing multiple textures given a patch of a reference texture.

Towards the first adversarially robust neural network model on MNIST

3 code implementations ICLR 2019 Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel

Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans.

Adversarial Robustness Binarization +1

One-Shot Segmentation in Clutter

1 code implementation ICML 2018 Claudio Michaelis, Matthias Bethge, Alexander S. Ecker

We tackle the problem of one-shot segmentation: finding and segmenting a previously unseen object in a cluttered scene based on a single instruction example.

object-detection One-Shot Segmentation

Trace your sources in large-scale data: one ring to find them all

1 code implementation23 Mar 2018 Alexander Böttcher, Wieland Brendel, Bernhard Englitz, Matthias Bethge

An important preprocessing step in most data analysis pipelines aims to extract a small set of sources that explain most of the data.

Guiding human gaze with convolutional neural networks

no code implementations18 Dec 2017 Leon A. Gatys, Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge

Thus, manipulating fixation patterns to guide human attention is an exciting challenge in digital image processing.

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

6 code implementations ICLR 2018 Wieland Brendel, Jonas Rauber, Matthias Bethge

Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defences than gradient- or score-based attacks.
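
The Boundary Attack itself operates on images with a carefully tuned proposal distribution; the following toy NumPy sketch (step sizes and loop are ours) captures only its core idea: query nothing but the model's final decision while shrinking the distance to the original input:

```python
import numpy as np

def boundary_attack(is_adversarial, x_orig, x_adv, steps=2000, seed=0):
    """Minimal decision-based attack in the spirit of the Boundary Attack:
    a random walk that stays on the adversarial side of the decision
    boundary, using only accept/reject queries, while contracting the
    distance to the original input x_orig."""
    rng = np.random.default_rng(seed)
    x = x_adv.copy()
    for _ in range(steps):
        d = np.linalg.norm(x - x_orig)
        # random perturbation scaled to the current distance,
        # plus a small step toward the original input
        noise = rng.normal(size=x.shape)
        noise *= 0.1 * d / np.linalg.norm(noise)
        candidate = x + noise + 0.05 * (x_orig - x)
        if is_adversarial(candidate):
            x = candidate
    return x
```

Because the update is only accepted when the candidate remains adversarial, the result stays misclassified while drifting toward the original input.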

BIG-bench Machine Learning

Neural system identification for large populations separating “what” and “where”

1 code implementation NeurIPS 2017 David Klindt, Alexander S. Ecker, Thomas Euler, Matthias Bethge

Traditional methods for neural system identification do not capitalize on this separation of “what” and “where”.

Understanding Low- and High-Level Contributions to Fixation Prediction

no code implementations ICCV 2017 Matthias Kümmerer, Thomas S. A. Wallis, Leon A. Gatys, Matthias Bethge

This model achieves better performance than all models not using features pre-trained on object recognition, making it a strong baseline to assess the utility of high-level features.

Object Recognition Saliency Prediction

Foolbox: A Python toolbox to benchmark the robustness of machine learning models

6 code implementations13 Jul 2017 Jonas Rauber, Wieland Brendel, Matthias Bethge

Foolbox is a new Python package to generate such adversarial perturbations and to quantify and compare the robustness of machine learning models.

Adversarial Attack BIG-bench Machine Learning
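
Foolbox wraps such attacks behind a common interface for several frameworks; as a self-contained stand-in for the simplest attack it ships (FGSM), here is a plain-NumPy version for a toy binary logistic model (the model and names are ours, not Foolbox's API):

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """Fast Gradient Sign Method for a binary logistic model
    p(y=1|x) = sigmoid(w.x + b): one step of size eps along the sign of
    the input gradient of the cross-entropy loss."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w  # d(cross-entropy)/dx for the logistic model
    return x + eps * np.sign(grad_x)
```

For example, with `w = [2, -1]`, `b = 0`, and a correctly classified `x = [1, 1]` of class 1, an `eps = 0.5` step flips the model's decision while staying within an L-infinity ball of radius 0.5.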

Comparing deep neural networks against humans: object recognition when the signal gets weaker

1 code implementation21 Jun 2017 Robert Geirhos, David H. J. Janssen, Heiko H. Schütt, Jonas Rauber, Matthias Bethge, Felix A. Wichmann

In addition, we find progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker, indicating that there may still be marked differences in the way humans and current DNNs perform visual object recognition.

General Classification Object Recognition

Comment on "Biologically inspired protection of deep networks from adversarial attacks"

no code implementations5 Apr 2017 Wieland Brendel, Matthias Bethge

A recent paper suggests that Deep Neural Networks can be protected from gradient-based adversarial perturbations by driving the network activations into a highly saturated regime.

Preserving Color in Neural Artistic Style Transfer

7 code implementations19 Jun 2016 Leon A. Gatys, Matthias Bethge, Aaron Hertzmann, Eli Shechtman

This note presents an extension to the neural artistic style transfer algorithm (Gatys et al.).

Style Transfer
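
One of the approaches in the note is luminance-only transfer: apply style transfer to the luminance channel and keep the content image's color channels. A NumPy sketch using the standard NTSC YIQ transform (the combination code is our illustration):

```python
import numpy as np

# NTSC RGB -> YIQ transform; Y is luminance, I and Q carry the color.
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def preserve_color(content_rgb, stylized_rgb):
    """Keep the stylized luminance but the content image's color channels."""
    content_yiq = content_rgb @ RGB2YIQ.T
    stylized_yiq = stylized_rgb @ RGB2YIQ.T
    out = content_yiq.copy()
    out[..., 0] = stylized_yiq[..., 0]  # swap in the stylized luminance
    return out @ YIQ2RGB.T
```

The output then carries the style's brightness structure but exactly the content image's chromaticity.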

Texture Synthesis Using Shallow Convolutional Networks with Random Filters

no code implementations31 May 2016 Ivan Ustyuzhaninov, Wieland Brendel, Leon A. Gatys, Matthias Bethge

The current state of the art in parametric texture synthesis relies on the multi-layer feature space of deep CNNs that were trained on natural images.

Texture Synthesis

Signatures of criticality arise in simple neural population models with correlations

1 code implementation29 Feb 2016 Marcel Nonnenmacher, Christian Behrens, Philipp Berens, Matthias Bethge, Jakob H. Macke

Support for this notion has come from a series of studies which identified statistical signatures of criticality in the ensemble activity of retinal ganglion cells.

Neurons and Cognition

A note on the evaluation of generative models

1 code implementation5 Nov 2015 Lucas Theis, Aäron van den Oord, Matthias Bethge

In particular, we show that three of the currently most commonly used criteria---average log-likelihood, Parzen window estimates, and visual fidelity of samples---are largely independent of each other when the data is high-dimensional.

Denoising Texture Synthesis
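
A Parzen window estimate, one of the three criteria compared, fits a kernel density model to samples from the generator and scores held-out data under it; a minimal NumPy sketch (our own, not the paper's code):

```python
import numpy as np

def parzen_log_likelihood(samples, test, sigma):
    """Average log-density of `test` points under a Gaussian Parzen window
    (kernel density estimate) centered on model `samples`."""
    n, d = samples.shape
    # pairwise squared distances, shape (num_test, num_samples)
    d2 = ((test[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    log_kernel = -d2 / (2 * sigma ** 2)
    log_norm = np.log(n) + 0.5 * d * np.log(2 * np.pi * sigma ** 2)
    # numerically stable log-mean-exp over the samples
    m = log_kernel.max(axis=1, keepdims=True)
    log_p = m[:, 0] + np.log(np.exp(log_kernel - m).sum(axis=1)) - log_norm
    return float(log_p.mean())
```

The bandwidth `sigma` strongly influences the score, which is one reason the paper argues such estimates can disagree with the true log-likelihood.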

A Neural Algorithm of Artistic Style

284 code implementations26 Aug 2015 Leon A. Gatys, Alexander S. Ecker, Matthias Bethge

In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image.

Style Transfer

Generative Image Modeling Using Spatial LSTMs

no code implementations NeurIPS 2015 Lucas Theis, Matthias Bethge

Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels.

Ranked #51 on Image Generation on CIFAR-10 (bits/dimension metric)

Image Generation Texture Synthesis

A Generative Model of Natural Texture Surrogates

no code implementations28 May 2015 Niklas Ludtke, Debapriya Das, Lucas Theis, Matthias Bethge

In order to model this variability, we first applied the parametric texture algorithm of Portilla and Simoncelli to 64×64-pixel image patches from a large database of natural images, so that each patch is described by 655 texture parameters specifying statistics such as variances and covariances of wavelet coefficients, or of coefficient magnitudes, within that patch.

Image Compression

Texture Synthesis Using Convolutional Neural Networks

14 code implementations NeurIPS 2015 Leon A. Gatys, Alexander S. Ecker, Matthias Bethge

Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition.

Object Recognition Texture Synthesis
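
The texture model summarizes each CNN feature map by its Gram matrix, i.e. the correlations between feature channels; a minimal NumPy sketch of this statistic (names are ours):

```python
import numpy as np

def gram_matrix(features):
    """Channel-by-channel correlations of a (C, H, W) feature map,
    the summary statistic used for parametric texture synthesis."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.T) / (h * w)

def style_loss(gram_a, gram_b):
    """Mean squared difference between two Gram matrices; texture synthesis
    minimizes this over the pixels of the generated image."""
    return float(((gram_a - gram_b) ** 2).mean())
```

Because the Gram matrix discards spatial positions, matching it reproduces the texture statistics of an image without copying its layout.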

Supervised learning sets benchmark for robust spike detection from calcium imaging signals

no code implementations28 Feb 2015 Lucas Theis, Philipp Berens, Emmanouil Froudarakis, Jacob Reimer, Miroslav Román Rosón, Tom Baden, Thomas Euler, Andreas Tolias, Matthias Bethge

A fundamental challenge in calcium imaging has been to infer the timing of action potentials from the measured noisy calcium fluorescence traces.

Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet

1 code implementation4 Nov 2014 Matthias Kümmerer, Lucas Theis, Matthias Bethge

Recent results suggest that state-of-the-art saliency models perform far from optimal in predicting fixations.

Object Recognition Point Processes +1

How close are we to understanding image-based saliency?

no code implementations26 Sep 2014 Matthias Kümmerer, Thomas Wallis, Matthias Bethge

Within the set of the many complex factors driving gaze placement, the properties of an image that are associated with fixations under free viewing conditions have been studied extensively.

Point Processes

Training sparse natural image models with a fast Gibbs sampler of an extended state space

no code implementations NeurIPS 2012 Lucas Theis, Jascha Sohl-Dickstein, Matthias Bethge

We present a new learning strategy based on an efficient blocked Gibbs sampler for sparse overcomplete linear models.

Evaluating neuronal codes for inference using Fisher information

no code implementations NeurIPS 2010 Ralf Haefner, Matthias Bethge

We characterize the response distribution for the binocular energy model in response to random dot stereograms and find it to be very different from the Poisson-like noise usually assumed.

Bayesian estimation of orientation preference maps

no code implementations NeurIPS 2009 Sebastian Gerwinn, Leonard White, Matthias Kaschube, Matthias Bethge, Jakob H. Macke

Imaging techniques such as optical imaging of intrinsic signals, 2-photon calcium imaging and voltage sensitive dye imaging can be used to measure the functional organization of visual cortex across different spatial scales.

Gaussian Processes

A joint maximum-entropy model for binary neural population patterns and continuous signals

no code implementations NeurIPS 2009 Sebastian Gerwinn, Philipp Berens, Matthias Bethge

Second-order maximum-entropy models have recently gained much interest for describing the statistics of binary spike trains.

Hierarchical Modeling of Local Image Features through L_p-Nested Symmetric Distributions

no code implementations NeurIPS 2009 Matthias Bethge, Eero P. Simoncelli, Fabian H. Sinz

We introduce a new family of distributions, called $L_p${\em -nested symmetric distributions}, whose densities access the data exclusively through a hierarchical cascade of $L_p$-norms.
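
For a concrete two-level example (the tree structure here is our illustration, not one from the paper), an $L_p$-nested function applies an inner $L_{p_1}$-norm to a subset of coordinates and an outer $L_{p_0}$-norm on top:

```python
import numpy as np

def lp_norm(v, p):
    """Plain L_p norm of a vector."""
    return (np.abs(v) ** p).sum() ** (1.0 / p)

def lp_nested(x, p_outer, p_inner):
    """Two-level L_p-nested function on x = (x1, x2, x3):
    outer L_{p_outer} norm of (|x1|, L_{p_inner} norm of (x2, x3))."""
    inner = lp_norm(x[1:], p_inner)
    return lp_norm(np.array([abs(x[0]), inner]), p_outer)
```

Like any norm-like cascade, the result is absolutely homogeneous (scaling the input scales the output), which is the property the densities in the paper are built on; with all exponents equal it reduces to a plain $L_p$-norm.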

Neurometric function analysis of population codes

no code implementations NeurIPS 2009 Philipp Berens, Sebastian Gerwinn, Alexander Ecker, Matthias Bethge

In this way, we provide a new rigorous framework for assessing the functional consequences of noise correlation structures for the representational accuracy of neural population codes that is in particular applicable to short-time population coding.

The Conjoint Effect of Divisive Normalization and Orientation Selectivity on Redundancy Reduction

no code implementations NeurIPS 2008 Fabian H. Sinz, Matthias Bethge

Bandpass filtering, orientation selectivity, and contrast gain control are prominent features of sensory coding at the level of V1 simple cells.

Receptive Fields without Spike-Triggering

no code implementations NeurIPS 2007 Guenther Zeck, Matthias Bethge, Jakob H. Macke

Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons?

Image Classification
