Search Results for author: Wieland Brendel

Found 38 papers, 28 papers with code

If your data distribution shifts, use self-learning

no code implementations 29 Sep 2021 Evgenia Rusak, Steffen Schneider, George Pachitariu, Luisa Eck, Peter Vincent Gehler, Oliver Bringmann, Wieland Brendel, Matthias Bethge

In this paper, we demonstrate that self-learning techniques such as entropy minimization and pseudo-labeling are simple yet effective at increasing test performance under domain shifts.

Self-Learning
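
As a rough illustration of the entropy-minimization variant of self-learning described above, here is a minimal PyTorch sketch of one adaptation step on an unlabeled test batch. The function name and the choice of which parameters `optimizer` updates are assumptions, not the paper's exact recipe (the paper also evaluates pseudo-labeling and other variants).

```python
import torch
import torch.nn.functional as F

def entropy_minimization_step(model, optimizer, x):
    """One self-learning adaptation step on an unlabeled batch x:
    sharpen the model's own predictions by minimizing their entropy."""
    log_probs = F.log_softmax(model(x), dim=1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```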

Visual Representation Learning Does Not Generalize Strongly Within the Same Domain

1 code implementation ICLR 2022 Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel

An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.

Representation Learning

How Well do Feature Visualizations Support Causal Understanding of CNN Activations?

1 code implementation NeurIPS 2021 Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel

A precise understanding of why units in an artificial network respond to certain stimuli would constitute a big step towards explainable artificial intelligence.

Explainable artificial intelligence

Partial success in closing the gap between human and machine vision

1 code implementation NeurIPS 2021 Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, Wieland Brendel

The longstanding distortion robustness gap between humans and CNNs is closing, with the best models now exceeding human feedforward performance on most of the investigated OOD datasets.

Image Classification

Adapting ImageNet-scale models to complex distribution shifts with self-learning

1 code implementation 27 Apr 2021 Evgenia Rusak, Steffen Schneider, Peter Gehler, Oliver Bringmann, Wieland Brendel, Matthias Bethge

We therefore re-purpose the dataset from the Visual Domain Adaptation Challenge 2019 and use a subset of it as a new robustness benchmark (ImageNet-D), which proves to be a more challenging dataset for all current state-of-the-art models (58.2% error), to guide future research efforts at the intersection of robustness and domain adaptation on ImageNet scale.

Ranked #1 on Unsupervised Domain Adaptation on ImageNet-C (using extra training data)

Robust classification Self-Learning +1

Exemplary natural images explain CNN activations better than synthetic feature visualizations

no code implementations ICLR 2021 Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel

Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images (Olah et al., 2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map.

Informativeness

Natural Images are More Informative for Interpreting CNN Activations than State-of-the-Art Synthetic Feature Visualizations

no code implementations NeurIPS Workshop SVRHM 2020 Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel

Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images by Olah et al. [45] with a simple baseline visualization, namely natural images that also strongly activate a specific feature map.

Informativeness

EagerPy: Writing Code That Works Natively with PyTorch, TensorFlow, JAX, and NumPy

1 code implementation 10 Aug 2020 Jonas Rauber, Matthias Bethge, Wieland Brendel

EagerPy is a Python framework that lets you write code that automatically works natively with PyTorch, TensorFlow, JAX, and NumPy.
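
In practice the idea looks roughly like the snippet below (modeled on the project's README): a function written once against EagerPy that accepts native tensors from any of the four backends. The function name is illustrative.

```python
import eagerpy as ep

def norm(x):
    x = ep.astensor(x)                # wrap the native tensor
    result = x.square().sum().sqrt()  # same code path for every backend
    return result.raw                 # unwrap back to the native type
```

Passing a NumPy array returns a NumPy scalar, while passing a torch.Tensor returns a PyTorch tensor, with no backend-specific branches in the function itself.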

Local Convolutions Cause an Implicit Bias towards High Frequency Adversarial Examples

no code implementations 19 Jun 2020 Josue Ortega Caro, Yilong Ju, Ryan Pyle, Sourav Dey, Wieland Brendel, Fabio Anselmi, Ankit Patel

Inspired by theoretical work on linear full-width convolutional models, we hypothesize that the local (i.e., bounded-width) convolutional operations commonly used in current neural networks are implicitly biased to learn high frequency features, and that this is one of the root causes of high frequency adversarial examples.

Adversarial Robustness
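
To make the high-frequency claim concrete, one simple diagnostic is the fraction of a perturbation's spectral energy above a radial frequency cutoff. The NumPy sketch below, including the cutoff value, is an illustrative assumption, not the paper's analysis code.

```python
import numpy as np

def high_frequency_fraction(perturbation, cutoff=0.25):
    """Fraction of spectral energy above `cutoff` (cycles/pixel)
    for a 2D (grayscale) perturbation."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(perturbation))) ** 2
    h, w = spectrum.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)  # radial frequency of each bin
    return spectrum[radius > cutoff].sum() / spectrum.sum()
```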

Benchmarking Unsupervised Object Representations for Video Sequences

1 code implementation 12 Jun 2020 Marissa A. Weis, Kashyap Chitta, Yash Sharma, Wieland Brendel, Matthias Bethge, Andreas Geiger, Alexander S. Ecker

Perceiving the world in terms of objects and tracking them through time is a crucial prerequisite for reasoning and scene understanding.

Multi-Object Tracking Object Detection +1

Shortcut Learning in Deep Neural Networks

2 code implementations 16 Apr 2020 Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann

Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence.

On Adaptive Attacks to Adversarial Example Defenses

3 code implementations NeurIPS 2020 Florian Tramer, Nicholas Carlini, Wieland Brendel, Aleksander Madry

Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples.

A simple way to make neural networks robust against diverse image corruptions

3 code implementations ECCV 2020 Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel

The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow.
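
One of the simplest augmentations in the spirit of the paper's title is additive Gaussian noise during training; the paper goes considerably further (including a learned adversarial noise distribution), and the sigma below is an illustrative value rather than a tuned one.

```python
import torch

def gaussian_noise_augment(images, sigma=0.1):
    """Add pixel-wise Gaussian noise and clip back to the valid range.

    sigma is an illustrative hyperparameter, not the paper's setting.
    """
    noisy = images + sigma * torch.randn_like(images)
    return noisy.clamp(0.0, 1.0)
```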

Learning From Brains How to Regularize Machines

no code implementations NeurIPS 2019 Zhe Li, Wieland Brendel, Edgar Y. Walker, Erick Cobos, Taliah Muhammad, Jacob Reimer, Matthias Bethge, Fabian H. Sinz, Xaq Pitkow, Andreas S. Tolias

We propose to regularize CNNs using large-scale neuroscience data to learn more robust neural features in terms of representational similarity.

Image Classification
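
A hedged sketch of what such a regularizer could look like: penalize the mismatch between the model's and the brain's pairwise similarity structure over a shared set of stimuli. The paper's actual loss and data pipeline differ; this only conveys the idea.

```python
import torch

def similarity_matrix(z):
    """Pairwise correlations across stimuli; z is (n_stimuli, n_features)."""
    z = z - z.mean(dim=1, keepdim=True)
    z = z / (z.norm(dim=1, keepdim=True) + 1e-8)
    return z @ z.t()

def rsa_loss(model_features, neural_responses):
    """Representational-similarity penalty between model and brain."""
    diff = similarity_matrix(model_features) - similarity_matrix(neural_responses)
    return (diff ** 2).mean()
```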

Accurate, reliable and fast robustness evaluation

1 code implementation NeurIPS 2019 Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge

We here develop a new set of gradient-based adversarial attacks which (a) are more reliable in the face of gradient-masking than other gradient-based attacks, (b) perform better and are more query efficient than current state-of-the-art gradient-based attacks, (c) can be flexibly adapted to a wide range of adversarial criteria and (d) require virtually no hyperparameter tuning.
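
For orientation, a generic gradient-based attack looks like the PyTorch sketch below. This is plain L-infinity PGD, not the paper's specific attacks, and the epsilon, step size, and iteration count are illustrative.

```python
import torch
import torch.nn.functional as F

def linf_pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterated signed-gradient ascent on the loss, projected into
    an eps-ball around x (assumes inputs live in [0, 1])."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project into the ball
        x_adv = x_adv.clamp(0.0, 1.0)             # keep pixels valid
    return x_adv.detach()
```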

Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet

4 code implementations ICLR 2019 Wieland Brendel, Matthias Bethge

Deep Neural Networks (DNNs) excel on many complex perceptual tasks but it has proven notoriously difficult to understand how they reach their decisions.

Adversarial Vision Challenge

2 code implementations 6 Aug 2018 Wieland Brendel, Jonas Rauber, Alexey Kurakin, Nicolas Papernot, Behar Veliqi, Marcel Salathé, Sharada P. Mohanty, Matthias Bethge

The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks.

One-shot Texture Segmentation

4 code implementations 7 Jul 2018 Ivan Ustyuzhaninov, Claudio Michaelis, Wieland Brendel, Matthias Bethge

We introduce one-shot texture segmentation: the task of segmenting an input image containing multiple textures given a patch of a reference texture.

Towards the first adversarially robust neural network model on MNIST

3 code implementations ICLR 2019 Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel

Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans.

Adversarial Robustness Binarization +1

Trace your sources in large-scale data: one ring to find them all

1 code implementation 23 Mar 2018 Alexander Böttcher, Wieland Brendel, Bernhard Englitz, Matthias Bethge

An important preprocessing step in most data analysis pipelines aims to extract a small set of sources that explain most of the data.

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

6 code implementations ICLR 2018 Wieland Brendel, Jonas Rauber, Matthias Bethge

Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defences than gradient- or score-based attacks.
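
A heavily simplified sketch of one step of such a decision-based random walk, loosely modeled on the Boundary Attack this paper introduces: the attacker queries only the model's final decision through an `is_adversarial` oracle. The real algorithm additionally adapts its step sizes and proposal distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def boundary_step(is_adversarial, x_orig, x_adv, step=0.01):
    """One proposal of a simplified decision-based random walk.

    is_adversarial: callable returning True iff the model (a black box
    exposing only its final decision) still misclassifies the input.
    """
    delta = x_orig - x_adv
    # Random perturbation, scaled relative to the current distance ...
    noise = rng.normal(size=x_adv.shape)
    noise *= step * np.linalg.norm(delta) / (np.linalg.norm(noise) + 1e-12)
    # ... combined with a small step towards the original image.
    candidate = np.clip(x_adv + noise + step * delta, 0.0, 1.0)
    if is_adversarial(candidate):
        return candidate  # accepted: still adversarial, slightly closer
    return x_adv          # rejected: keep the current point
```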

Foolbox: A Python toolbox to benchmark the robustness of machine learning models

6 code implementations 13 Jul 2017 Jonas Rauber, Wieland Brendel, Matthias Bethge

Foolbox is a new Python package to generate adversarial perturbations and to quantify and compare the robustness of machine learning models.

Adversarial Attack
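
Running an attack with Foolbox typically looks like the sketch below (written against the 3.x API; the model, attack, and epsilon are illustrative choices, so check the Foolbox documentation for current signatures).

```python
import torchvision
import foolbox as fb

# Wrap a trained PyTorch classifier (illustrative choice of model).
model = torchvision.models.resnet18(pretrained=True).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406],
                     std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

# A small batch of sample images and an L-infinity PGD attack.
images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=16)
attack = fb.attacks.LinfPGD()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=0.03)
print("attack success rate:", is_adv.float().mean().item())
```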

Comment on "Biologically inspired protection of deep networks from adversarial attacks"

no code implementations 5 Apr 2017 Wieland Brendel, Matthias Bethge

A recent paper suggests that Deep Neural Networks can be protected from gradient-based adversarial perturbations by driving the network activations into a highly saturated regime.

Texture Synthesis Using Shallow Convolutional Networks with Random Filters

no code implementations 31 May 2016 Ivan Ustyuzhaninov, Wieland Brendel, Leon A. Gatys, Matthias Bethge

The current state of the art in parametric texture synthesis relies on the multi-layer feature space of deep CNNs that were trained on natural images.

Texture Synthesis
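
The parametric approach here matches summary statistics (Gram matrices) of convolutional feature maps. The sketch below uses a single layer of fixed random filters, loosely in the spirit of the paper; the filter count and size are illustrative assumptions (the paper combines several filter scales).

```python
import torch
import torch.nn.functional as F

def gram_matrix(features):
    """Gram matrix of feature maps: (B, C, H, W) -> (B, C, C)."""
    b, c, h, w = features.shape
    f = features.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (h * w)

# A single convolutional layer with fixed random filters
# (128 filters of size 11x11 are illustrative choices).
conv = torch.nn.Conv2d(3, 128, kernel_size=11, padding=5)
for p in conv.parameters():
    p.requires_grad_(False)

def texture_loss(synthesized, reference):
    """Match the Gram statistics of the synthesized image to the reference."""
    g_syn = gram_matrix(F.relu(conv(synthesized)))
    g_ref = gram_matrix(F.relu(conv(reference)))
    return ((g_syn - g_ref) ** 2).mean()
```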

Unsupervised learning of an efficient short-term memory network

no code implementations NeurIPS 2014 Pietro Vertechi, Wieland Brendel, Christian K. Machens

Specifically, we show how these networks can learn to efficiently represent their present and past inputs, based on local learning rules only.

Demixed Principal Component Analysis

no code implementations NeurIPS 2011 Wieland Brendel, Ranulfo Romo, Christian K. Machens

Standard dimensionality reduction techniques such as principal component analysis (PCA) can provide a succinct and complete description of the data, but the description is constructed independent of the relevant task variables and is often hard to interpret.

Dimensionality Reduction
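
For reference, plain PCA can be written in a few lines via the SVD; as the abstract notes, the directions it finds are chosen purely for variance, with no reference to task variables. A minimal NumPy sketch:

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD; X is (n_samples, n_features)."""
    X_centered = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    components = Vt[:n_components]             # principal axes
    scores = X_centered @ components.T         # low-dimensional description
    explained = (S[:n_components] ** 2) / (S ** 2).sum()
    return scores, components, explained
```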
