1 code implementation • 12 Nov 2024 • Jack Brady, Julius von Kügelgen, Sébastien Lachapelle, Simon Buchholz, Thomas Kipf, Wieland Brendel
Using this formalism, we prove that interaction asymmetry enables both disentanglement and compositional generalization.
no code implementations • 29 Oct 2024 • Patrik Reizinger, Alice Bizeul, Attila Juhos, Julia E. Vogt, Randall Balestriero, Wieland Brendel, David Klindt
Second, we show that on DisLib, a widely-used disentanglement benchmark, simple classification tasks recover latent structures up to linear transformations.
no code implementations • 10 Oct 2024 • Prasanna Mayilvahanan, Roland S. Zimmermann, Thaddäus Wiedemer, Evgenia Rusak, Attila Juhos, Matthias Bethge, Wieland Brendel
In the ImageNet era of computer vision, evaluation sets for measuring a model's OOD performance were designed to be strictly OOD with respect to style.
1 code implementation • 9 Sep 2024 • Anna Mészáros, Szilvia Ujváry, Wieland Brendel, Patrik Reizinger, Ferenc Huszár
To better understand the OOD behaviour of autoregressive LLMs, we focus on formal languages, which are defined by the intersection of rules.
no code implementations • 28 Jun 2024 • Evgenia Rusak, Patrik Reizinger, Attila Juhos, Oliver Bringmann, Roland S. Zimmermann, Wieland Brendel
Hence, a more realistic assumption is that all latent factors change, with a continuum of variability across these factors.
no code implementations • 20 Jun 2024 • Patrik Reizinger, Siyuan Guo, Ferenc Huszár, Bernhard Schölkopf, Wieland Brendel
We provide a unified framework, termed Identifiable Exchangeable Mechanisms (IEM), for representation and structure learning under the lens of exchangeability.
1 code implementation • 3 May 2024 • Patrik Reizinger, Szilvia Ujváry, Anna Mészáros, Anna Kerekes, Wieland Brendel, Ferenc Huszár
The last decade has seen blossoming research in deep learning theory attempting to answer, "Why does deep learning generalize?"
1 code implementation • 9 Jan 2024 • Amro Abbas, Evgenia Rusak, Kushal Tirumala, Wieland Brendel, Kamalika Chaudhuri, Ari S. Morcos
Using a simple and intuitive complexity measure, we are able to reduce the training cost to a quarter of regular training.
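The snippet above does not specify the paper's complexity measure, so as a minimal sketch of the data-pruning idea — rank samples by a cheap per-example score and keep only a fraction — the following uses per-image pixel variance as a purely illustrative stand-in:

```python
import numpy as np

def prune_by_complexity(images, keep_fraction=0.25):
    """Keep only the highest-scoring samples under a simple proxy.

    Per-image pixel variance stands in here for the paper's complexity
    measure, which the summary above does not specify.
    """
    scores = images.reshape(len(images), -1).var(axis=1)
    n_keep = max(1, int(len(images) * keep_fraction))
    keep_idx = np.argsort(scores)[-n_keep:]  # indices of most complex samples
    return keep_idx

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 8, 8))          # toy stand-in for a dataset
idx = prune_by_complexity(data, keep_fraction=0.25)
print(len(idx))  # 25 -> training on a quarter of the data
```

Keeping a quarter of the samples is what makes the quarter-cost training figure possible; the interesting part of the paper is choosing a score under which that subset preserves accuracy.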
2 code implementations • 29 Nov 2023 • Goutham Rajendran, Patrik Reizinger, Wieland Brendel, Pradeep Ravikumar
We investigate the relationship between system identification and intervention design in dynamical systems.
1 code implementation • 14 Oct 2023 • Prasanna Mayilvahanan, Thaddäus Wiedemer, Evgenia Rusak, Matthias Bethge, Wieland Brendel
Foundation models like CLIP are trained on hundreds of millions of samples and effortlessly generalize to new tasks and inputs.
1 code implementation • 9 Oct 2023 • Thaddäus Wiedemer, Jack Brady, Alexander Panfilov, Attila Juhos, Matthias Bethge, Wieland Brendel
Learning representations that generalize to novel compositions of known concepts is crucial for bridging the gap between human and machine perception.
no code implementations • NeurIPS 2023 • Roland S. Zimmermann, Thomas Klein, Wieland Brendel
We use a psychophysical paradigm to quantify one form of mechanistic interpretability for a diverse suite of nine models and find no scaling effect for interpretability: neither for model nor dataset size.
1 code implementation • 7 Jun 2023 • Robert Geirhos, Roland S. Zimmermann, Blair Bilodeau, Wieland Brendel, Been Kim
Today, visualization methods form the foundation of our knowledge about the internal workings of neural networks, as a type of mechanistic interpretability.
no code implementations • 23 May 2023 • Jack Brady, Roland S. Zimmermann, Yash Sharma, Bernhard Schölkopf, Julius von Kügelgen, Wieland Brendel
Under this generative process, we prove that the ground-truth object representations can be identified by an invertible and compositional inference model, even in the presence of dependencies between objects.
no code implementations • 28 Jun 2022 • Roland S. Zimmermann, Wieland Brendel, Florian Tramer, Nicholas Carlini
Hundreds of defenses have been proposed to make deep neural networks robust against minimal (adversarial) input perturbations.
1 code implementation • 6 Jun 2022 • Patrik Reizinger, Luigi Gresele, Jack Brady, Julius von Kügelgen, Dominik Zietlow, Bernhard Schölkopf, Georg Martius, Wieland Brendel, Michel Besserve
Leveraging self-consistency, we show that the ELBO converges to a regularized log-likelihood.
1 code implementation • ICLR 2022 • Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel
An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.
1 code implementation • NeurIPS 2021 • Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel
A precise understanding of why units in an artificial network respond to certain stimuli would constitute a big step towards explainable artificial intelligence.
1 code implementation • NeurIPS 2021 • Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
The longstanding distortion robustness gap between humans and CNNs is closing, with the best models now exceeding human feedforward performance on most of the investigated OOD datasets.
1 code implementation • NeurIPS 2021 • Julius von Kügelgen, Yash Sharma, Luigi Gresele, Wieland Brendel, Bernhard Schölkopf, Michel Besserve, Francesco Locatello
A common practice is to perform data augmentation via hand-crafted transformations intended to leave the semantics of the data invariant.
Ranked #1 on Image Classification on Causal3DIdent
1 code implementation • 27 Apr 2021 • Evgenia Rusak, Steffen Schneider, George Pachitariu, Luisa Eck, Peter Gehler, Oliver Bringmann, Wieland Brendel, Matthias Bethge
We demonstrate that self-learning techniques like entropy minimization and pseudo-labeling are simple and effective at improving performance of a deployed computer vision model under systematic domain shifts.
Ranked #1 on Unsupervised Domain Adaptation on ImageNet-A (using extra training data)
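The self-learning techniques named above admit a very small sketch. This is not the paper's code: a linear softmax classifier stands in for a deployed network, and the update shown is the basic pseudo-labeling step, where the model's own argmax predictions serve as labels for a cross-entropy gradient step on unlabeled test-time data:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label_step(W, X, lr=0.1):
    """One self-learning update: the model's own argmax predictions
    serve as pseudo-labels for a cross-entropy gradient step."""
    probs = softmax(X @ W)
    pseudo = probs.argmax(axis=1)
    onehot = np.eye(W.shape[1])[pseudo]
    grad = X.T @ (probs - onehot) / len(X)  # softmax cross-entropy gradient
    return W - lr * grad, pseudo

def cross_entropy(W, X, y):
    probs = softmax(X @ W)
    return -np.mean(np.log(probs[np.arange(len(y)), y]))

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))   # unlabeled batch seen at deployment time
W = rng.normal(size=(5, 3))    # linear classifier, 3 classes
W_new, pseudo = pseudo_label_step(W, X)

# the step reduces the loss on the model's own pseudo-labels,
# i.e. it sharpens the model's existing predictions
loss_before = cross_entropy(W, X, pseudo)
loss_after = cross_entropy(W_new, X, pseudo)
```

Entropy minimization replaces the hard pseudo-labels with the full predictive distribution; both adapt the model using only its own outputs, which is what makes them applicable under domain shift without new labels.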
3 code implementations • NeurIPS 2021 • Maura Pintor, Fabio Roli, Wieland Brendel, Battista Biggio
Evaluating adversarial robustness amounts to finding the minimum perturbation needed to have an input sample misclassified.
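The quantity described above — the smallest perturbation that flips a decision — can be estimated along a fixed direction by binary search. A toy sketch (illustrative only, not the paper's FMN attack), using a linear classifier where the true minimal perturbation is known in closed form:

```python
import numpy as np

def minimal_perturbation(predict, x, direction, hi=10.0, tol=1e-4):
    """Binary-search the smallest step along `direction` that changes the
    model's decision. Assumes the decision flips somewhere in (0, hi]."""
    orig = predict(x)
    if predict(x + hi * direction) == orig:
        return None  # no flip within the search range
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if predict(x + mid * direction) == orig:
            lo = mid
        else:
            hi = mid
    return hi

# toy linear classifier: class = sign(w . x)
w = np.array([1.0, -2.0])
predict = lambda x: int(np.sign(w @ x))
x = np.array([3.0, 1.0])            # w . x = 1 -> class +1
direction = -w / np.linalg.norm(w)  # steepest direction toward the boundary
eps = minimal_perturbation(predict, x, direction)
# for a linear model the answer is the distance to the hyperplane,
# (w . x) / ||w|| = 1 / sqrt(5) ~ 0.447
```

Real attacks must also search over directions; the binary search only solves the easy one-dimensional subproblem.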
1 code implementation • 17 Feb 2021 • Roland S. Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, Wieland Brendel
Contrastive learning has recently seen tremendous success in self-supervised learning.
Ranked #1 on Disentanglement on KITTI-Masks
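The contrastive objective behind this line of work is the InfoNCE loss: each embedding should be most similar to its positive pair among all candidates in the batch. A minimal numpy sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    """InfoNCE loss on paired embeddings: z1[i] should match z2[i]
    against all other rows of z2 acting as negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature          # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # cross-entropy on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 16))
aligned = info_nce(z, z + 0.01 * rng.normal(size=z.shape))  # matched pairs
shuffled = info_nce(z, z[rng.permutation(len(z))])          # broken pairing
# matched pairs give a much lower loss than mismatched ones
```

The paper's result is about what minimizing this loss provably does to the learned representation, namely that it can invert the data-generating process under suitable assumptions.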
no code implementations • ICLR 2021 • Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images (Olah et al., 2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map.
1 code implementation • 23 Oct 2020 • Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
Even if only a single reference image is given, synthetic images provide less information than natural images (65±5% vs. 73±4%).
no code implementations • NeurIPS Workshop SVRHM 2020 • Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
In the light of this recent breakthrough, we here compare self-supervised networks to supervised models and human behaviour.
no code implementations • NeurIPS Workshop SVRHM 2020 • Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images by Olah et al. [45] with a simple baseline visualization, namely natural images that also strongly activate a specific feature map.
1 code implementation • 10 Aug 2020 • Jonas Rauber, Matthias Bethge, Wieland Brendel
EagerPy is a Python framework that lets you write code that automatically works natively with PyTorch, TensorFlow, JAX, and NumPy.
1 code implementation • ICLR 2021 • David Klindt, Lukas Schott, Yash Sharma, Ivan Ustyuzhaninov, Wieland Brendel, Matthias Bethge, Dylan Paiton
We construct an unsupervised learning model that achieves nonlinear disentanglement of underlying factors of variation in naturalistic videos.
Ranked #1 on Disentanglement on Natural Sprites
2 code implementations • NeurIPS 2020 • Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, Matthias Bethge
With the more robust DeepAugment+AugMix model, we improve the state of the art for a ResNet50 model from 53.6% mCE to 45.4% mCE.
Ranked #4 on Unsupervised Domain Adaptation on ImageNet-R
no code implementations • 19 Jun 2020 • Josue Ortega Caro, Yilong Ju, Ryan Pyle, Sourav Dey, Wieland Brendel, Fabio Anselmi, Ankit Patel
Inspired by theoretical work on linear full-width convolutional models, we hypothesize that the local (i.e. bounded-width) convolutional operations commonly used in current neural networks are implicitly biased to learn high frequency features, and that this is one of the root causes of high frequency adversarial examples.
1 code implementation • 12 Jun 2020 • Marissa A. Weis, Kashyap Chitta, Yash Sharma, Wieland Brendel, Matthias Bethge, Andreas Geiger, Alexander S. Ecker
Perceiving the world in terms of objects and tracking them through time is a crucial prerequisite for reasoning and scene understanding.
1 code implementation • 20 Apr 2020 • Christina M. Funke, Judy Borowski, Karolina Stosio, Wieland Brendel, Thomas S. A. Wallis, Matthias Bethge
In the second case study, we highlight the difference between necessary and sufficient mechanisms in visual reasoning tasks.
2 code implementations • 16 Apr 2020 • Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann
Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence.
4 code implementations • NeurIPS 2020 • Florian Tramer, Nicholas Carlini, Wieland Brendel, Aleksander Madry
Adaptive attacks have (rightfully) become the de facto standard for evaluating defenses to adversarial examples.
3 code implementations • ECCV 2020 • Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel
The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow.
1 code implementation • NeurIPS 2019 • Zhe Li, Wieland Brendel, Edgar Y. Walker, Erick Cobos, Taliah Muhammad, Jacob Reimer, Matthias Bethge, Fabian H. Sinz, Xaq Pitkow, Andreas S. Tolias
We propose to regularize CNNs using large-scale neuroscience data to learn more robust neural features in terms of representational similarity.
4 code implementations • 17 Jul 2019 • Claudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S. Ecker, Matthias Bethge, Wieland Brendel
The ability to detect objects regardless of image distortions or weather conditions is crucial for real-world applications of deep learning like autonomous driving.
Ranked #1 on Robust Object Detection on MS COCO
1 code implementation • NeurIPS 2019 • Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge
We here develop a new set of gradient-based adversarial attacks which (a) are more reliable in the face of gradient-masking than other gradient-based attacks, (b) perform better and are more query efficient than current state-of-the-art gradient-based attacks, (c) can be flexibly adapted to a wide range of adversarial criteria and (d) require virtually no hyperparameter tuning.
4 code implementations • ICLR 2019 • Wieland Brendel, Matthias Bethge
Deep Neural Networks (DNNs) excel on many complex perceptual tasks but it has proven notoriously difficult to understand how they reach their decisions.
4 code implementations • 18 Feb 2019 • Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin
Correctly evaluating defenses against adversarial examples has proven to be extremely difficult.
7 code implementations • ICLR 2019 • Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning increasingly complex representations of object shapes.
Ranked #1 on Out-of-Distribution Generalization on ImageNet-W
2 code implementations • 6 Aug 2018 • Wieland Brendel, Jonas Rauber, Alexey Kurakin, Nicolas Papernot, Behar Veliqi, Marcel Salathé, Sharada P. Mohanty, Matthias Bethge
The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks.
4 code implementations • 7 Jul 2018 • Ivan Ustyuzhaninov, Claudio Michaelis, Wieland Brendel, Matthias Bethge
We introduce one-shot texture segmentation: the task of segmenting an input image containing multiple textures given a patch of a reference texture.
3 code implementations • ICLR 2019 • Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel
Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations; even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans.
1 code implementation • 23 Mar 2018 • Alexander Böttcher, Wieland Brendel, Bernhard Englitz, Matthias Bethge
An important preprocessing step in most data analysis pipelines aims to extract a small set of sources that explain most of the data.
6 code implementations • ICLR 2018 • Wieland Brendel, Jonas Rauber, Matthias Bethge
Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) require less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defenses than gradient- or score-based attacks.
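A toy sketch of the decision-based idea (a simplified illustration in the spirit of the Boundary Attack, not the paper's implementation): start from any adversarial point and, using only the model's final decision, repeatedly propose small random steps contracted toward the original input, keeping those that remain adversarial.

```python
import numpy as np

def decision_based_attack(is_adversarial, x_orig, x_adv, steps=200, seed=0):
    """Walk an adversarial point toward the original input using only
    the model's decision -- no gradients, no scores."""
    rng = np.random.default_rng(seed)
    x = x_adv.copy()
    for _ in range(steps):
        noise = 0.05 * rng.normal(size=x.shape)
        candidate = x + noise + 0.1 * (x_orig - x)  # contract toward original
        if is_adversarial(candidate):
            x = candidate                            # keep only valid proposals
    return x

# toy setup: the decision boundary is the unit circle; the original input
# sits at the origin, anything outside the circle counts as "adversarial"
x_orig = np.zeros(2)
x_adv = np.array([5.0, 0.0])
is_adv = lambda x: np.linalg.norm(x) > 1.0
x_final = decision_based_attack(is_adv, x_orig, x_adv)
dist = np.linalg.norm(x_final - x_orig)
# the random walk ends close to the decision boundary (distance just above 1)
```

Because every accepted candidate is checked against the decision oracle, the final point is adversarial by construction; the contraction term is what drives it toward a minimal perturbation.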
7 code implementations • 13 Jul 2017 • Jonas Rauber, Wieland Brendel, Matthias Bethge
Foolbox is a new Python package to generate such adversarial perturbations and to quantify and compare the robustness of machine learning models.
no code implementations • 5 Apr 2017 • Wieland Brendel, Matthias Bethge
A recent paper suggests that Deep Neural Networks can be protected from gradient-based adversarial perturbations by driving the network activations into a highly saturated regime.
no code implementations • 31 May 2016 • Ivan Ustyuzhaninov, Wieland Brendel, Leon A. Gatys, Matthias Bethge
The current state of the art in parametric texture synthesis relies on the multi-layer feature space of deep CNNs that were trained on natural images.
no code implementations • NeurIPS 2014 • Pietro Vertechi, Wieland Brendel, Christian K. Machens
Specifically, we show how these networks can learn to efficiently represent their present and past inputs, based on local learning rules only.
2 code implementations • 22 Oct 2014 • Dmitry Kobak, Wieland Brendel, Christos Constantinidis, Claudia E. Feierstein, Adam Kepecs, Zachary F. Mainen, Ranulfo Romo, Xue-Lian Qi, Naoshige Uchida, Christian K. Machens
Neurons in higher cortical areas, such as the prefrontal cortex, are known to be tuned to a variety of sensory and motor variables.
no code implementations • NeurIPS 2011 • Wieland Brendel, Ranulfo Romo, Christian K. Machens
Standard dimensionality reduction techniques such as principal component analysis (PCA) can provide a succinct and complete description of the data, but the description is constructed independent of the relevant task variables and is often hard to interpret.
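The PCA baseline the abstract contrasts against fits in a few lines via an SVD of the centered data. A minimal sketch (illustrative; the paper's contribution is the task-aware alternative, not this baseline):

```python
import numpy as np

def pca(X, n_components):
    """PCA via SVD of the centered data matrix. Returns the projected
    scores and the fraction of variance explained -- the 'succinct but
    task-agnostic' description the abstract refers to."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T                     # project onto top PCs
    explained = (S[:n_components] ** 2).sum() / (S ** 2).sum()
    return scores, explained

rng = np.random.default_rng(0)
# 2 latent factors linearly embedded in 10 dimensions plus small noise
latents = rng.normal(size=(500, 2))
X = latents @ rng.normal(size=(2, 10)) + 0.01 * rng.normal(size=(500, 10))
scores, explained = pca(X, n_components=2)
# two components recover essentially all the variance here, but nothing
# in the projection says which axis corresponds to which task variable
```

That last point is exactly the interpretability gap the abstract raises: PCA is complete and succinct, yet its components are constructed without reference to the task variables.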