Search Results for author: Matthias Bethge

Found 94 papers, 54 papers with code

A Neural Algorithm of Artistic Style

285 code implementations 26 Aug 2015 Leon A. Gatys, Alexander S. Ecker, Matthias Bethge

In fine art, especially painting, humans have mastered the skill of creating unique visual experiences by composing a complex interplay between the content and style of an image.

Style Transfer

Preserving Color in Neural Artistic Style Transfer

7 code implementations 19 Jun 2016 Leon A. Gatys, Matthias Bethge, Aaron Hertzmann, Eli Shechtman

This note presents an extension to the neural artistic style transfer algorithm (Gatys et al.).

Style Transfer

Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet

4 code implementations ICLR 2019 Wieland Brendel, Matthias Bethge

Deep Neural Networks (DNNs) excel on many complex perceptual tasks but it has proven notoriously difficult to understand how they reach their decisions.

Foolbox: A Python toolbox to benchmark the robustness of machine learning models

6 code implementations 13 Jul 2017 Jonas Rauber, Wieland Brendel, Matthias Bethge

Foolbox is a Python package for generating adversarial perturbations and for quantifying and comparing the robustness of machine learning models.

Adversarial Attack · BIG-bench Machine Learning
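
As a rough illustration of the workflow Foolbox supports, the sketch below runs a standard PGD attack against a PyTorch classifier and reports robust accuracy. It assumes the Foolbox 3 API (`fb.PyTorchModel`, `fb.attacks.LinfPGD`, `fb.utils.samples`); exact signatures may differ between versions.

```python
import foolbox as fb
import torchvision.models as models

# Minimal sketch of robustness benchmarking with Foolbox (v3-style API);
# treat the exact calls as version-dependent assumptions.
net = models.resnet18(pretrained=True).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(net, bounds=(0, 1), preprocessing=preprocessing)

images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)
attack = fb.attacks.LinfPGD()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=8 / 255)

print("robust accuracy:", 1 - is_adv.float().mean().item())
```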

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

6 code implementations ICLR 2018 Wieland Brendel, Jonas Rauber, Matthias Bethge

Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defences than gradient- or score-based attacks.

BIG-bench Machine Learning

Texture Synthesis Using Convolutional Neural Networks

15 code implementations NeurIPS 2015 Leon A. Gatys, Alexander S. Ecker, Matthias Bethge

Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition.

Object · Object Recognition +1
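
The texture statistic at the heart of this model family is the Gram matrix of CNN feature maps: correlations between feature channels, averaged over spatial positions. A minimal PyTorch sketch (the function name is illustrative):

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    # features: (batch, channels, height, width) activations from one CNN layer.
    # The Gram matrix captures which feature channels co-occur,
    # summarizing texture while discarding spatial arrangement.
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)
```

Synthesis then optimizes a noise image so that its Gram matrices match those of the reference texture across several network layers.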

A simple way to make neural networks robust against diverse image corruptions

3 code implementations ECCV 2020 Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel

The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow.

Generalisation in humans and deep neural networks

2 code implementations NeurIPS 2018 Robert Geirhos, Carlos R. Medina Temme, Jonas Rauber, Heiko H. Schütt, Matthias Bethge, Felix A. Wichmann

We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations.

Object Recognition

EagerPy: Writing Code That Works Natively with PyTorch, TensorFlow, JAX, and NumPy

1 code implementation 10 Aug 2020 Jonas Rauber, Matthias Bethge, Wieland Brendel

EagerPy is a Python framework that lets you write code that automatically works natively with PyTorch, TensorFlow, JAX, and NumPy.
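
The core idea fits in a few lines: wrap any supported tensor with `ep.astensor`, compute with EagerPy's unified API, and unwrap with `.raw`. A minimal sketch (NumPy shown; a PyTorch, TensorFlow, or JAX tensor would work unchanged):

```python
import numpy as np
import eagerpy as ep

def l2_norm(x):
    x = ep.astensor(x)                # wrap the native tensor
    result = x.square().sum().sqrt()  # framework-agnostic computation
    return result.raw                 # unwrap back to the native type

print(l2_norm(np.array([3.0, 4.0])))  # 5.0
```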

Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming

4 code implementations 17 Jul 2019 Claudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S. Ecker, Matthias Bethge, Wieland Brendel

The ability to detect objects regardless of image distortions or weather conditions is crucial for real-world applications of deep learning like autonomous driving.

Autonomous Driving · Benchmarking +5

One-Shot Instance Segmentation

3 code implementations 28 Nov 2018 Claudio Michaelis, Ivan Ustyuzhaninov, Matthias Bethge, Alexander S. Ecker

We demonstrate empirical results on MS COCO highlighting challenges of the one-shot setting: while transferring knowledge about instance segmentation to novel object categories works very well, targeting the detection network towards the reference category appears to be more difficult.

Few-Shot Object Detection · One-Shot Instance Segmentation +3

Partial success in closing the gap between human and machine vision

1 code implementation NeurIPS 2021 Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, Wieland Brendel

The longstanding distortion robustness gap between humans and CNNs is closing, with the best models now exceeding human feedforward performance on most of the investigated OOD datasets.

Image Classification

If your data distribution shifts, use self-learning

1 code implementation 27 Apr 2021 Evgenia Rusak, Steffen Schneider, George Pachitariu, Luisa Eck, Peter Gehler, Oliver Bringmann, Wieland Brendel, Matthias Bethge

We demonstrate that self-learning techniques like entropy minimization and pseudo-labeling are simple and effective at improving performance of a deployed computer vision model under systematic domain shifts.

Ranked #1 on Unsupervised Domain Adaptation on ImageNet-A (using extra training data)

Robust classification · Self-Learning +1
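
Entropy minimization, one of the self-learning techniques mentioned above, adapts a deployed model by using its own predictions as the training signal. A minimal PyTorch-style sketch, not the paper's exact procedure (which parameters to adapt and how to stabilize training are choices the paper studies):

```python
import torch
import torch.nn.functional as F

def entropy_minimization_step(model, x, optimizer):
    # One test-time adaptation step on an unlabeled batch x:
    # make the model's predictions more confident under the shifted distribution.
    probs = F.softmax(model(x), dim=1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```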

Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet

1 code implementation 4 Nov 2014 Matthias Kümmerer, Lucas Theis, Matthias Bethge

Recent results suggest that state-of-the-art saliency models perform far from optimal in predicting fixations.

Object Recognition · Point Processes +1

DeepGaze IIE: Calibrated prediction in and out-of-domain for state-of-the-art saliency modeling

2 code implementations ICCV 2021 Akis Linardos, Matthias Kümmerer, Ori Press, Matthias Bethge

Since 2014, transfer learning has become the key driver of improvements in spatial saliency prediction; however, progress has stagnated in the last 3-5 years.

Saliency Prediction · Transfer Learning

Shortcut Learning in Deep Neural Networks

2 code implementations 16 Apr 2020 Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann

Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence.

Benchmarking

Towards the first adversarially robust neural network model on MNIST

3 code implementations ICLR 2019 Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel

Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans.

Adversarial Robustness · Binarization +1

One-Shot Segmentation in Clutter

1 code implementation ICML 2018 Claudio Michaelis, Matthias Bethge, Alexander S. Ecker

We tackle the problem of one-shot segmentation: finding and segmenting a previously unseen object in a cluttered scene based on a single instruction example.

Foreground Segmentation · object-detection +2

Trace your sources in large-scale data: one ring to find them all

1 code implementation 23 Mar 2018 Alexander Böttcher, Wieland Brendel, Bernhard Englitz, Matthias Bethge

An important preprocessing step in most data analysis pipelines aims to extract a small set of sources that explain most of the data.

blind source separation

One-shot Texture Segmentation

4 code implementations 7 Jul 2018 Ivan Ustyuzhaninov, Claudio Michaelis, Wieland Brendel, Matthias Bethge

We introduce one-shot texture segmentation: the task of segmenting an input image containing multiple textures given a patch of a reference texture.

Segmentation

Adversarial Vision Challenge

2 code implementations 6 Aug 2018 Wieland Brendel, Jonas Rauber, Alexey Kurakin, Nicolas Papernot, Behar Veliqi, Marcel Salathé, Sharada P. Mohanty, Matthias Bethge

The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks.

Benchmarking Unsupervised Object Representations for Video Sequences

1 code implementation 12 Jun 2020 Marissa A. Weis, Kashyap Chitta, Yash Sharma, Wieland Brendel, Matthias Bethge, Andreas Geiger, Alexander S. Ecker

Perceiving the world in terms of objects and tracking them through time is a crucial prerequisite for reasoning and scene understanding.

Benchmarking · Clustering +5

Comparing deep neural networks against humans: object recognition when the signal gets weaker

1 code implementation 21 Jun 2017 Robert Geirhos, David H. J. Janssen, Heiko H. Schütt, Jonas Rauber, Matthias Bethge, Felix A. Wichmann

In addition, we find progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker, indicating that there may still be marked differences in the way humans and current DNNs perform visual object recognition.

General Classification · Object +1

Visual Representation Learning Does Not Generalize Strongly Within the Same Domain

1 code implementation ICLR 2022 Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel

An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.

Representation Learning

Neural system identification for large populations separating “what” and “where”

1 code implementation NeurIPS 2017 David Klindt, Alexander S. Ecker, Thomas Euler, Matthias Bethge

Traditional methods for neural system identification do not capitalize on this separation of “what” and “where”.

No "Zero-Shot" Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance

1 code implementation 4 Apr 2024 Vishaal Udandarao, Ameya Prabhu, Adhiraj Ghosh, Yash Sharma, Philip H. S. Torr, Adel Bibi, Samuel Albanie, Matthias Bethge

Web-crawled pretraining datasets underlie the impressive "zero-shot" evaluation performance of multimodal models, such as CLIP for classification/retrieval and Stable-Diffusion for image generation.

Benchmarking · Image Generation +1

Signatures of criticality arise in simple neural population models with correlations

1 code implementation 29 Feb 2016 Marcel Nonnenmacher, Christian Behrens, Philipp Berens, Matthias Bethge, Jakob H. Macke

Support for this notion has come from a series of studies which identified statistical signatures of criticality in the ensemble activity of retinal ganglion cells.

Neurons and Cognition

A rotation-equivariant convolutional neural network model of primary visual cortex

1 code implementation ICLR 2019 Alexander S. Ecker, Fabian H. Sinz, Emmanouil Froudarakis, Paul G. Fahey, Santiago A. Cadena, Edgar Y. Walker, Erick Cobos, Jacob Reimer, Andreas S. Tolias, Matthias Bethge

We present a framework to identify common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations.
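
The weight-sharing idea behind rotation equivariance can be illustrated with a toy version that uses only 90-degree rotations (`torch.rot90`); the paper's model handles finer orientation steps with interpolated filters, so take this as a sketch of the principle rather than the actual architecture:

```python
import torch
import torch.nn.functional as F

def c4_equivariant_conv(x, weight):
    # x: (batch, in_channels, H, W); weight: (out_channels, in_channels, k, k).
    # The same filter bank is applied at 4 orientations, so each feature
    # is extracted regardless of its rotation (up to 90-degree steps).
    outs = []
    for k in range(4):
        w = torch.rot90(weight, k, dims=(2, 3))  # rotate every filter by k*90 degrees
        outs.append(F.conv2d(x, w, padding=weight.shape[-1] // 2))
    return torch.stack(outs, dim=1)  # (batch, 4, out_channels, H, W)
```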

Visual Data-Type Understanding does not emerge from Scaling Vision-Language Models

1 code implementation 12 Oct 2023 Vishaal Udandarao, Max F. Burg, Samuel Albanie, Matthias Bethge

This finding points to a blind spot in current frontier VLMs: they excel in recognizing semantic content but fail to acquire an understanding of visual data-types through scaling.

How Well do Feature Visualizations Support Causal Understanding of CNN Activations?

1 code implementation NeurIPS 2021 Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel

A precise understanding of why units in an artificial network respond to certain stimuli would constitute a big step towards explainable artificial intelligence.

Explainable artificial intelligence

A note on the evaluation of generative models

1 code implementation 5 Nov 2015 Lucas Theis, Aäron van den Oord, Matthias Bethge

In particular, we show that three of the currently most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional.

Denoising · Texture Synthesis

Accurate, reliable and fast robustness evaluation

1 code implementation NeurIPS 2019 Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge

We here develop a new set of gradient-based adversarial attacks which (a) are more reliable in the face of gradient-masking than other gradient-based attacks, (b) perform better and are more query efficient than current state-of-the-art gradient-based attacks, (c) can be flexibly adapted to a wide range of adversarial criteria and (d) require virtually no hyperparameter tuning.

Modulated Neural ODEs

1 code implementation NeurIPS 2023 Ilze Amanda Auzina, Çağatay Yıldız, Sara Magliacane, Matthias Bethge, Efstratios Gavves

Neural ordinary differential equations (NODEs) have been proven useful for learning non-linear dynamics of arbitrary trajectories.

Lifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress

1 code implementation 29 Feb 2024 Ameya Prabhu, Vishaal Udandarao, Philip Torr, Matthias Bethge, Adel Bibi, Samuel Albanie

However, with repeated testing, the risk of overfitting grows as algorithms over-exploit benchmark idiosyncrasies.

Benchmarking

Fast Differentiable Clipping-Aware Normalization and Rescaling

1 code implementation 15 Jul 2020 Jonas Rauber, Matthias Bethge

When the rescaled perturbation $\eta \vec{\delta}$ is added to a starting point $\vec{x} \in D$ (where $D$ is the data domain, e.g. $D = [0, 1]^n$), the resulting vector $\vec{v} = \vec{x} + \eta \vec{\delta}$ will in general not be in $D$.
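
The quantity solved for is the factor $\eta$ such that the clipped perturbation reaches a desired $L_2$ norm. Because that norm grows monotonically with $\eta$, a naive bisection recovers it; the paper's contribution is a fast, differentiable closed-form alternative. A hypothetical NumPy baseline for intuition:

```python
import numpy as np

def clipping_aware_eta(x, delta, target_norm, hi=1e6, iters=100):
    # Bisect eta so that ||clip(x + eta * delta, 0, 1) - x||_2 == target_norm.
    # Assumes target_norm is reachable before the perturbation fully saturates.
    lo = 0.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        norm = np.linalg.norm(np.clip(x + mid * delta, 0.0, 1.0) - x)
        lo, hi = (mid, hi) if norm < target_norm else (lo, mid)
    return 0.5 * (lo + hi)
```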

Visual cognition in multimodal large language models

1 code implementation 27 Nov 2023 Luca M. Schulze Buschoff, Elif Akata, Matthias Bethge, Eric Schulz

A chief goal of artificial intelligence is to build machines that think like people.

Infinite dSprites for Disentangled Continual Learning: Separating Memory Edits from Generalization

1 code implementation 27 Dec 2023 Sebastian Dziadzio, Çağatay Yıldız, Gido M. van de Ven, Tomasz Trzciński, Tinne Tuytelaars, Matthias Bethge

In a simple setting with direct supervision on the generative factors, we show how learning class-agnostic transformations offers a way to circumvent catastrophic forgetting and improve classification accuracy over time.

Classification · Continual Learning +3

System Identification with Biophysical Constraints: A Circuit Model of the Inner Retina

1 code implementation NeurIPS 2020 Cornelius Schröder, David Klindt, Sarah Strauss, Katrin Franke, Matthias Bethge, Thomas Euler, Philipp Berens

Here, we present a computational model of temporal processing in the inner retina, including inhibitory feedback circuits and realistic synaptic release mechanisms.

Blocking

Guiding human gaze with convolutional neural networks

no code implementations 18 Dec 2017 Leon A. Gatys, Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge

Thus, manipulating fixation patterns to guide human attention is an exciting challenge in digital image processing.

Comment on "Biologically inspired protection of deep networks from adversarial attacks"

no code implementations 5 Apr 2017 Wieland Brendel, Matthias Bethge

A recent paper suggests that Deep Neural Networks can be protected from gradient-based adversarial perturbations by driving the network activations into a highly saturated regime.

Texture Synthesis Using Shallow Convolutional Networks with Random Filters

no code implementations 31 May 2016 Ivan Ustyuzhaninov, Wieland Brendel, Leon A. Gatys, Matthias Bethge

The current state of the art in parametric texture synthesis relies on the multi-layer feature space of deep CNNs that were trained on natural images.

Texture Synthesis

Generative Image Modeling Using Spatial LSTMs

no code implementations NeurIPS 2015 Lucas Theis, Matthias Bethge

Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels.

Ranked #59 on Image Generation on CIFAR-10 (bits/dimension metric)

Image Generation · Texture Synthesis

A Generative Model of Natural Texture Surrogates

no code implementations 28 May 2015 Niklas Ludtke, Debapriya Das, Lucas Theis, Matthias Bethge

In order to model this variability, we first applied the parametric texture algorithm of Portilla and Simoncelli to 64x64-pixel image patches in a large database of natural images, so that each image patch is described by 655 texture parameters specifying certain statistics, such as variances and covariances of wavelet coefficients or coefficient magnitudes within that patch.

Image Compression

Supervised learning sets benchmark for robust spike detection from calcium imaging signals

no code implementations 28 Feb 2015 Lucas Theis, Philipp Berens, Emmanouil Froudarakis, Jacob Reimer, Miroslav Román Rosón, Tom Baden, Thomas Euler, Andreas Tolias, Matthias Bethge

A fundamental challenge in calcium imaging has been to infer the timing of action potentials from the measured noisy calcium fluorescence traces.

How close are we to understanding image-based saliency?

no code implementations 26 Sep 2014 Matthias Kümmerer, Thomas Wallis, Matthias Bethge

Among the many complex factors driving gaze placement, the properties of an image that are associated with fixations under free-viewing conditions have been studied extensively.

Point Processes

Excessive Invariance Causes Adversarial Vulnerability

no code implementations ICLR 2019 Jörn-Henrik Jacobsen, Jens Behrmann, Richard Zemel, Matthias Bethge

Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs.

Training sparse natural image models with a fast Gibbs sampler of an extended state space

no code implementations NeurIPS 2012 Lucas Theis, Jascha Sohl-Dickstein, Matthias Bethge

We present a new learning strategy based on an efficient blocked Gibbs sampler for sparse overcomplete linear models.

Evaluating neuronal codes for inference using Fisher information

no code implementations NeurIPS 2010 Ralf Haefner, Matthias Bethge

We characterize the response distribution for the binocular energy model in response to random dot stereograms and find it to be very different from the Poisson-like noise usually assumed.

Neurometric function analysis of population codes

no code implementations NeurIPS 2009 Philipp Berens, Sebastian Gerwinn, Alexander Ecker, Matthias Bethge

In this way, we provide a new rigorous framework for assessing the functional consequences of noise correlation structures for the representational accuracy of neural population codes that is in particular applicable to short-time population coding.

A joint maximum-entropy model for binary neural population patterns and continuous signals

no code implementations NeurIPS 2009 Sebastian Gerwinn, Philipp Berens, Matthias Bethge

Second-order maximum-entropy models have recently gained much interest for describing the statistics of binary spike trains.

Bayesian estimation of orientation preference maps

no code implementations NeurIPS 2009 Sebastian Gerwinn, Leonard White, Matthias Kaschube, Matthias Bethge, Jakob H. Macke

Imaging techniques such as optical imaging of intrinsic signals, 2-photon calcium imaging and voltage sensitive dye imaging can be used to measure the functional organization of visual cortex across different spatial scales.

Gaussian Processes

Hierarchical Modeling of Local Image Features through L_p-Nested Symmetric Distributions

no code implementations NeurIPS 2009 Matthias Bethge, Eero P. Simoncelli, Fabian H. Sinz

We introduce a new family of distributions, called $L_p$-nested symmetric distributions, whose densities access the data exclusively through a hierarchical cascade of $L_p$-norms.
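
For concreteness, a two-level example of such a cascade is $\|(\|(x_1, x_2)\|_{p_1}, x_3)\|_{p_0}$: an inner $L_{p_1}$-norm over a subtree of coordinates feeds into an outer $L_{p_0}$-norm. A small sketch with illustrative names:

```python
import numpy as np

def lp_norm(v, p):
    return np.sum(np.abs(v) ** p) ** (1.0 / p)

def lp_nested_norm(x, p_outer, p_inner):
    # Two-level L_p-nested norm: ||(||(x1, x2)||_{p_inner}, x3)||_{p_outer}
    inner = lp_norm(x[:2], p_inner)                   # inner subtree over (x1, x2)
    return lp_norm(np.array([inner, x[2]]), p_outer)  # outer norm over (inner, x3)

print(lp_nested_norm(np.array([1.0, 2.0, 3.0]), p_outer=2.0, p_inner=1.0))
```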

The Conjoint Effect of Divisive Normalization and Orientation Selectivity on Redundancy Reduction

no code implementations NeurIPS 2008 Fabian H. Sinz, Matthias Bethge

Bandpass filtering, orientation selectivity, and contrast gain control are prominent features of sensory coding at the level of V1 simple cells.

Receptive Fields without Spike-Triggering

no code implementations NeurIPS 2007 Guenther Zeck, Matthias Bethge, Jakob H. Macke

Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons?

Image Classification

Understanding Low- and High-Level Contributions to Fixation Prediction

no code implementations ICCV 2017 Matthias Kümmerer, Thomas S. A. Wallis, Leon A. Gatys, Matthias Bethge

This model achieves better performance than all models not using features pre-trained on object recognition, making it a strong baseline to assess the utility of high-level features.

Object Recognition · Saliency Prediction +1

Towards causal generative scene models via competition of experts

no code implementations 27 Apr 2020 Julius von Kügelgen, Ivan Ustyuzhaninov, Peter Gehler, Matthias Bethge, Bernhard Schölkopf

Learning how to model complex scenes in a modular way with recombinable components is a pre-requisite for higher-order reasoning and acting in the physical world.

Inductive Bias · Object

Rotation-invariant clustering of neuronal responses in primary visual cortex

no code implementations ICLR 2020 Ivan Ustyuzhaninov, Santiago A. Cadena, Emmanouil Froudarakis, Paul G. Fahey, Edgar Y. Walker, Erick Cobos, Jacob Reimer, Fabian H. Sinz, Andreas S. Tolias, Matthias Bethge, Alexander S. Ecker

Similar to a convolutional neural network (CNN), the mammalian retina encodes visual information into several dozen nonlinear feature maps, each formed by one ganglion cell type that tiles the visual space in an approximately shift-equivariant manner.

Clustering · Open-Ended Question Answering

Exemplary natural images explain CNN activations better than synthetic feature visualizations

no code implementations ICLR 2021 Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel

Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images (Olah et al., 2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map.

Informativeness

A Broad Dataset is All You Need for One-Shot Object Detection

no code implementations 9 Nov 2020 Claudio Michaelis, Matthias Bethge, Alexander S. Ecker

We here show that this generalization gap can be nearly closed by increasing the number of object categories used during training.

Few-Shot Learning · Metric Learning +3

State-of-the-Art in Human Scanpath Prediction

no code implementations 24 Feb 2021 Matthias Kümmerer, Matthias Bethge

The last years have seen a surge in models predicting the scanpaths of fixations made by humans when viewing images.

Benchmarking · Scanpath prediction

The Geometry of Adversarial Subspaces

no code implementations 29 Sep 2021 Dylan M. Paiton, David Schultheiss, Matthias Kümmerer, Zac Cranko, Matthias Bethge

We undertake analysis to characterize the geometry of the boundary, which is more curved within the adversarial subspace than within a random subspace of equal dimensionality.

How well do deep neural networks trained on object recognition characterize the mouse visual system?

no code implementations NeurIPS Workshop Neuro_AI 2019 Santiago A. Cadena, Fabian H. Sinz, Taliah Muhammad, Emmanouil Froudarakis, Erick Cobos, Edgar Y. Walker, Jacob Reimer, Matthias Bethge, Andreas Tolias, Alexander S. Ecker

Recent work on modeling neural responses in the primate visual system has benefited from deep neural networks trained on large-scale object recognition, and found a hierarchical correspondence between layers of the artificial neural network and brain areas along the ventral visual stream.

Object Recognition

Natural Images are More Informative for Interpreting CNN Activations than State-of-the-Art Synthetic Feature Visualizations

no code implementations NeurIPS Workshop SVRHM 2020 Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel

Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images by Olah et al. [45] with a simple baseline visualization, namely natural images that also strongly activate a specific feature map.

Informativeness

Disentanglement and Generalization Under Correlation Shifts

no code implementations 29 Dec 2021 Christina M. Funke, Paul Vicol, Kuan-Chieh Wang, Matthias Kümmerer, Richard Zemel, Matthias Bethge

Exploiting such correlations may increase predictive performance on noisy data; however, correlations are often not robust (e.g., they may change between domains, datasets, or applications), and models that exploit them do not generalize when correlations shift.

Attribute · Disentanglement

Playing repeated games with Large Language Models

no code implementations 26 May 2023 Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, Eric Schulz

In a large set of two-player, two-strategy games, we find that LLMs are particularly good at games where valuing their own self-interest pays off, like the iterated Prisoner's Dilemma family.

Provable Compositional Generalization for Object-Centric Learning

no code implementations 9 Oct 2023 Thaddäus Wiedemer, Jack Brady, Alexander Panfilov, Attila Juhos, Matthias Bethge, Wieland Brendel

Learning representations that generalize to novel compositions of known concepts is crucial for bridging the gap between human and machine perception.

Object

Investigating Continual Pretraining in Large Language Models: Insights and Implications

no code implementations 27 Feb 2024 Çağatay Yıldız, Nishaanth Kanna Ravichandran, Prishruit Punia, Matthias Bethge, Beyza Ermis

This paper studies the evolving domain of Continual Learning (CL) in large language models (LLMs), with a focus on developing strategies for efficient and sustainable training.

Continual Learning · Continual Pretraining +3
