Search Results for author: Wieland Brendel

Found 46 papers, 33 papers with code

Effective pruning of web-scale datasets based on complexity of concept clusters

1 code implementation • 9 Jan 2024 • Amro Abbas, Evgenia Rusak, Kushal Tirumala, Wieland Brendel, Kamalika Chaudhuri, Ari S. Morcos

Using a simple and intuitive complexity measure, we are able to reduce the training cost to a quarter of that of regular training.
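
To make the idea concrete, here is a minimal sketch (not the authors' implementation): cluster the dataset's embeddings, use each cluster's dispersion as a complexity proxy, and allocate the keep-budget accordingly. The function name, the mean-distance proxy, and the within-cluster selection rule are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def prune_by_cluster_complexity(embeddings, keep_fraction=0.25, n_clusters=100):
    """Keep a subset of examples, pruning more aggressively from
    low-complexity (dense, homogeneous) concept clusters."""
    km = KMeans(n_clusters=n_clusters, n_init="auto").fit(embeddings)
    dists = np.linalg.norm(embeddings - km.cluster_centers_[km.labels_], axis=1)

    # Proxy for concept complexity: mean distance to the cluster centroid.
    complexity = np.array([dists[km.labels_ == c].mean() for c in range(n_clusters)])
    # Allocate the keep-budget proportionally to cluster complexity.
    budget = keep_fraction * len(embeddings) * complexity / complexity.sum()

    keep = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        n_keep = min(len(idx), int(round(budget[c])))
        # Within a cluster, prefer the most prototypical (closest) examples.
        keep.extend(idx[np.argsort(dists[idx])[:n_keep]])
    return np.array(keep)
```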

Provable Compositional Generalization for Object-Centric Learning

no code implementations • 9 Oct 2023 • Thaddäus Wiedemer, Jack Brady, Alexander Panfilov, Attila Juhos, Matthias Bethge, Wieland Brendel

Learning representations that generalize to novel compositions of known concepts is crucial for bridging the gap between human and machine perception.

Object

Scale Alone Does not Improve Mechanistic Interpretability in Vision Models

no code implementations • NeurIPS 2023 • Roland S. Zimmermann, Thomas Klein, Wieland Brendel

We use a psychophysical paradigm to quantify one form of mechanistic interpretability for a diverse suite of nine models and find no scaling effect for interpretability, neither for model size nor for dataset size.

Don't trust your eyes: on the (un)reliability of feature visualizations

1 code implementation • 7 Jun 2023 • Robert Geirhos, Roland S. Zimmermann, Blair Bilodeau, Wieland Brendel, Been Kim

Today, visualization methods form the foundation of our knowledge about the internal workings of neural networks, as a type of mechanistic interpretability.

Provably Learning Object-Centric Representations

no code implementations • 23 May 2023 • Jack Brady, Roland S. Zimmermann, Yash Sharma, Bernhard Schölkopf, Julius von Kügelgen, Wieland Brendel

Under this generative process, we prove that the ground-truth object representations can be identified by an invertible and compositional inference model, even in the presence of dependencies between objects.

Object • Representation Learning

Increasing Confidence in Adversarial Robustness Evaluations

no code implementations • 28 Jun 2022 • Roland S. Zimmermann, Wieland Brendel, Florian Tramer, Nicholas Carlini

Hundreds of defenses have been proposed to make deep neural networks robust against minimal (adversarial) input perturbations.

Adversarial Robustness

Visual Representation Learning Does Not Generalize Strongly Within the Same Domain

1 code implementation • ICLR 2022 • Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel

An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.

Representation Learning

How Well do Feature Visualizations Support Causal Understanding of CNN Activations?

1 code implementation • NeurIPS 2021 • Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel

A precise understanding of why units in an artificial network respond to certain stimuli would constitute a big step towards explainable artificial intelligence.

Explainable artificial intelligence

Partial success in closing the gap between human and machine vision

1 code implementation • NeurIPS 2021 • Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, Wieland Brendel

The longstanding distortion robustness gap between humans and CNNs is closing, with the best models now exceeding human feedforward performance on most of the investigated OOD datasets.

Image Classification

If your data distribution shifts, use self-learning

1 code implementation • 27 Apr 2021 • Evgenia Rusak, Steffen Schneider, George Pachitariu, Luisa Eck, Peter Gehler, Oliver Bringmann, Wieland Brendel, Matthias Bethge

We demonstrate that self-learning techniques like entropy minimization and pseudo-labeling are simple and effective at improving the performance of a deployed computer vision model under systematic domain shifts.

 Ranked #1 on Unsupervised Domain Adaptation on ImageNet-A (using extra training data)

Robust classification • Self-Learning +1
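
A minimal PyTorch sketch of the entropy-minimization flavor of self-learning described above (the paper also covers pseudo-labeling and adapting only subsets of parameters; the function name and details here are illustrative, not the paper's exact recipe):

```python
import torch

def self_learning_step(model, optimizer, x_unlabeled):
    """One adaptation update on an unlabeled target-domain batch:
    sharpen the model's own predictions (entropy minimization)."""
    logits = model(x_unlabeled)
    probs = logits.softmax(dim=1)
    # Mean Shannon entropy of the predictive distribution.
    loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    # Hard pseudo-labeling variant (illustrative alternative):
    # loss = torch.nn.functional.cross_entropy(logits, logits.argmax(dim=1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```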

Exemplary natural images explain CNN activations better than synthetic feature visualizations

no code implementations • ICLR 2021 • Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel

Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images (Olah et al., 2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map.

Informativeness

Natural Images are More Informative for Interpreting CNN Activations than State-of-the-Art Synthetic Feature Visualizations

no code implementations • NeurIPS Workshop SVRHM 2020 • Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel

Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images by Olah et al. [45] with a simple baseline visualization, namely natural images that also strongly activate a specific feature map.

Informativeness

EagerPy: Writing Code That Works Natively with PyTorch, TensorFlow, JAX, and NumPy

1 code implementation • 10 Aug 2020 • Jonas Rauber, Matthias Bethge, Wieland Brendel

EagerPy is a Python framework that lets you write code that automatically works natively with PyTorch, TensorFlow, JAX, and NumPy.
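
A framework-agnostic function in the style of the EagerPy README: the same code runs unchanged on PyTorch, TensorFlow, JAX, and NumPy inputs, and returns a result of the caller's native tensor type.

```python
import eagerpy as ep

def norm(x):
    # ep.astensor wraps a native PyTorch/TensorFlow/JAX/NumPy tensor
    # in EagerPy's common interface; .raw unwraps to the native type.
    x = ep.astensor(x)
    return x.square().sum().sqrt().raw
```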

Local Convolutions Cause an Implicit Bias towards High Frequency Adversarial Examples

no code implementations • 19 Jun 2020 • Josue Ortega Caro, Yilong Ju, Ryan Pyle, Sourav Dey, Wieland Brendel, Fabio Anselmi, Ankit Patel

Inspired by theoretical work on linear full-width convolutional models, we hypothesize that the local (i.e., bounded-width) convolutional operations commonly used in current neural networks are implicitly biased to learn high-frequency features, and that this is one of the root causes of high-frequency adversarial examples.

Adversarial Robustness
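
The hypothesis suggests a simple empirical check: measure how much of a perturbation's spectral energy lies above a radial frequency cutoff. A hedged NumPy sketch (the function and the cutoff convention are illustrative, not from the paper):

```python
import numpy as np

def high_frequency_fraction(perturbation, cutoff=0.25):
    """Fraction of a 2D perturbation's energy above a radial frequency
    cutoff, where cutoff is a fraction of the Nyquist frequency."""
    F = np.fft.fftshift(np.fft.fft2(perturbation))  # zero frequency centered
    h, w = perturbation.shape
    yy, xx = np.mgrid[-h // 2:(h + 1) // 2, -w // 2:(w + 1) // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    energy = np.abs(F) ** 2
    return energy[radius > cutoff].sum() / energy.sum()
```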

Benchmarking Unsupervised Object Representations for Video Sequences

1 code implementation • 12 Jun 2020 • Marissa A. Weis, Kashyap Chitta, Yash Sharma, Wieland Brendel, Matthias Bethge, Andreas Geiger, Alexander S. Ecker

Perceiving the world in terms of objects and tracking them through time is a crucial prerequisite for reasoning and scene understanding.

Benchmarking • Clustering +5

Shortcut Learning in Deep Neural Networks

2 code implementations • 16 Apr 2020 • Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann

Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence.

Benchmarking

On Adaptive Attacks to Adversarial Example Defenses

4 code implementations • NeurIPS 2020 • Florian Tramer, Nicholas Carlini, Wieland Brendel, Aleksander Madry

Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples.

A simple way to make neural networks robust against diverse image corruptions

3 code implementations • ECCV 2020 • Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel

The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow.

Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming

4 code implementations • 17 Jul 2019 • Claudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S. Ecker, Matthias Bethge, Wieland Brendel

The ability to detect objects regardless of image distortions or weather conditions is crucial for real-world applications of deep learning like autonomous driving.

Autonomous Driving • Benchmarking +5

Accurate, reliable and fast robustness evaluation

1 code implementation • NeurIPS 2019 • Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge

We here develop a new set of gradient-based adversarial attacks which (a) are more reliable in the face of gradient-masking than other gradient-based attacks, (b) perform better and are more query efficient than current state-of-the-art gradient-based attacks, (c) can be flexibly adapted to a wide range of adversarial criteria and (d) require virtually no hyperparameter tuning.

Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet

4 code implementations • ICLR 2019 • Wieland Brendel, Matthias Bethge

Deep Neural Networks (DNNs) excel on many complex perceptual tasks, but it has proven notoriously difficult to understand how they reach their decisions.

Adversarial Vision Challenge

2 code implementations • 6 Aug 2018 • Wieland Brendel, Jonas Rauber, Alexey Kurakin, Nicolas Papernot, Behar Veliqi, Marcel Salathé, Sharada P. Mohanty, Matthias Bethge

The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks.

One-shot Texture Segmentation

4 code implementations • 7 Jul 2018 • Ivan Ustyuzhaninov, Claudio Michaelis, Wieland Brendel, Matthias Bethge

We introduce one-shot texture segmentation: the task of segmenting an input image containing multiple textures given a patch of a reference texture.

Segmentation

Towards the first adversarially robust neural network model on MNIST

3 code implementations • ICLR 2019 • Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel

Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations, and even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans.

Adversarial Robustness • Binarization +1

Trace your sources in large-scale data: one ring to find them all

1 code implementation • 23 Mar 2018 • Alexander Böttcher, Wieland Brendel, Bernhard Englitz, Matthias Bethge

An important preprocessing step in most data analysis pipelines aims to extract a small set of sources that explain most of the data.

blind source separation

Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models

6 code implementations • ICLR 2018 • Wieland Brendel, Jonas Rauber, Matthias Bethge

Such decision-based attacks (1) are applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks, and (3) are more robust to simple defenses than gradient- or score-based attacks.

BIG-bench Machine Learning
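
A schematic sketch of such a decision-based random walk, in the spirit of this paper's Boundary Attack but omitting its spherical projection and adaptive step sizes; all names and parameters are illustrative:

```python
import numpy as np

def boundary_attack_sketch(is_adversarial, x_orig, x_start,
                           steps=1000, step_size=0.01, seed=0):
    """Schematic decision-based attack: a random walk that stays
    adversarial while shrinking the distance to the original input.
    Only the model's final decision (is_adversarial) is queried;
    no gradients or scores are needed."""
    rng = np.random.default_rng(seed)
    x = x_start  # any input already classified differently from x_orig
    for _ in range(steps):
        # Random perturbation, scaled relative to the current distance.
        step = rng.normal(size=x.shape)
        step *= step_size * np.linalg.norm(x - x_orig) / np.linalg.norm(step)
        # Candidate: random move plus a small contraction toward the original.
        candidate = x + step + step_size * (x_orig - x)
        if is_adversarial(candidate):
            x = candidate
    return x
```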

Foolbox: A Python toolbox to benchmark the robustness of machine learning models

6 code implementations • 13 Jul 2017 • Jonas Rauber, Wieland Brendel, Matthias Bethge

Foolbox is a new Python package to generate such adversarial perturbations and to quantify and compare the robustness of machine learning models.

Adversarial Attack • BIG-bench Machine Learning
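
Minimal usage following the Foolbox 3 API, along the lines of the project's README; the stand-in model and random data are placeholders for a trained classifier and a real evaluation batch:

```python
import foolbox as fb
import torch

# Stand-in classifier; substitute your trained torch.nn.Module.
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10)).eval()
fmodel = fb.PyTorchModel(model, bounds=(0, 1))

images = torch.rand(8, 3, 32, 32)
labels = torch.randint(0, 10, (8,))

# Run a standard L-infinity PGD attack at several perturbation budgets.
attack = fb.attacks.LinfPGD()
raw, clipped, success = attack(fmodel, images, labels,
                               epsilons=[0.0, 8 / 255, 16 / 255])
robust_accuracy = 1 - success.float().mean(axis=-1)  # one value per epsilon
```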

Comment on "Biologically inspired protection of deep networks from adversarial attacks"

no code implementations • 5 Apr 2017 • Wieland Brendel, Matthias Bethge

A recent paper suggests that Deep Neural Networks can be protected from gradient-based adversarial perturbations by driving the network activations into a highly saturated regime.

Texture Synthesis Using Shallow Convolutional Networks with Random Filters

no code implementations • 31 May 2016 • Ivan Ustyuzhaninov, Wieland Brendel, Leon A. Gatys, Matthias Bethge

The current state of the art in parametric texture synthesis relies on the multi-layer feature space of deep CNNs that were trained on natural images.

Texture Synthesis

Unsupervised learning of an efficient short-term memory network

no code implementations • NeurIPS 2014 • Pietro Vertechi, Wieland Brendel, Christian K. Machens

Specifically, we show how these networks can learn to efficiently represent their present and past inputs, based on local learning rules only.

Demixed Principal Component Analysis

no code implementations • NeurIPS 2011 • Wieland Brendel, Ranulfo Romo, Christian K. Machens

Standard dimensionality reduction techniques such as principal component analysis (PCA) can provide a succinct and complete description of the data, but the description is constructed independently of the relevant task variables and is often hard to interpret.

Dimensionality Reduction
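
A toy NumPy sketch of the core demixing idea, assuming trial-averaged data of shape neurons × stimuli × time: additively decompose the data into marginalizations over the task variables, then find principal axes within each part. Real dPCA instead fits encoder and decoder matrices jointly; the function names and the per-marginalization SVD here are illustrative simplifications.

```python
import numpy as np

def marginalize(X):
    """Split data X (neurons x stimuli x time) into additive parts that
    depend only on stimulus, only on time, or on their interaction."""
    mean = X.mean(axis=(1, 2), keepdims=True)
    stim = X.mean(axis=2, keepdims=True) - mean
    time = X.mean(axis=1, keepdims=True) - mean
    interaction = X - mean - stim - time
    return {"stimulus": stim, "time": time, "interaction": interaction}

def components_per_marginal(X, n_components=3):
    """Principal axes of each marginalization, so every component is
    associated with a single task variable (simplified demixing)."""
    comps = {}
    for name, M in marginalize(X).items():
        flat = M.reshape(X.shape[0], -1)
        U, S, Vt = np.linalg.svd(flat - flat.mean(axis=1, keepdims=True),
                                 full_matrices=False)
        comps[name] = U[:, :n_components]
    return comps
```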
