no code implementations • ECCV 2020 • Matthias Tangemann, Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge
Where people look when watching videos is believed to be heavily influenced by temporal patterns.
1 code implementation • 9 Apr 2025 • Alexander Rubinstein, Ameya Prabhu, Matthias Bethge, Seong Joon Oh
However, with recent sample-efficient segmentation models, we can separate objects in the pixel space and encode them independently.
no code implementations • 7 Mar 2025 • Lukas Thede, Karsten Roth, Matthias Bethge, Zeynep Akata, Tom Hartvigsen
Keeping large language models factually up-to-date is crucial for deployment, yet costly retraining remains a challenge.
no code implementations • 26 Feb 2025 • Christoph Schuhmann, Gollam Rabby, Ameya Prabhu, Tawsif Ahmed, Andreas Hochlehnert, Huu Nguyen, Nick Akinci, Ludwig Schmidt, Robert Kaczmarczyk, Sören Auer, Jenia Jitsev, Matthias Bethge
Paywalls, licenses and copyright rules often restrict the broad dissemination and reuse of scientific knowledge.
1 code implementation • 26 Feb 2025 • Shiven Sinha, Shashwat Goel, Ponnurangam Kumaraguru, Jonas Geiping, Matthias Bethge, Ameya Prabhu
There is growing excitement about the potential of Language Models (LMs) to accelerate scientific discovery.
no code implementations • 21 Feb 2025 • Luca M. Schulze Buschoff, Konstantinos Voudouris, Elif Akata, Matthias Bethge, Joshua B. Tenenbaum, Eric Schulz
In an effort to improve visual cognition and align models with human behavior, we introduce visual stimuli and human judgments on visual cognition tasks, allowing us to systematically evaluate performance across cognitive domains in a consistent environment.
no code implementations • 17 Feb 2025 • Thaddäus Wiedemer, Yash Sharma, Ameya Prabhu, Matthias Bethge, Wieland Brendel
We show that CLIP's performance on these samples can be accurately predicted from the pretraining frequencies of individual objects.
no code implementations • 17 Feb 2025 • Prasanna Mayilvahanan, Thaddäus Wiedemer, Sayak Mallick, Matthias Bethge, Wieland Brendel
Scaling laws guide the development of large language models (LLMs) by offering estimates for the optimal balance of model size, tokens, and compute.
2 code implementations • 6 Feb 2025 • Shashwat Goel, Joschka Struber, Ilze Amanda Auzina, Karuna K Chandra, Ponnurangam Kumaraguru, Douwe Kiela, Ameya Prabhu, Matthias Bethge, Jonas Geiping
We study how model similarity affects both aspects of AI oversight by proposing a probabilistic metric for LM similarity based on overlap in model mistakes.
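The abstract does not spell out the metric's exact form, but the idea can be conveyed by a chance-adjusted agreement over model mistakes. A minimal sketch, assuming a Cohen's-kappa-style correction as a simplified stand-in for the paper's probabilistic metric (the function name and details are illustrative, not the paper's definition):

```python
import numpy as np

def error_overlap_similarity(pred_a, pred_b, labels):
    """Chance-adjusted agreement between two models' mistakes:
    1 = identical error patterns, ~0 = no more overlap than chance."""
    wrong_a = pred_a != labels
    wrong_b = pred_b != labels
    observed = np.mean(wrong_a == wrong_b)          # raw agreement on (in)correctness
    p_a, p_b = wrong_a.mean(), wrong_b.mean()
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)    # agreement expected by chance
    return (observed - expected) / (1 - expected + 1e-12)
```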
1 code implementation • 9 Dec 2024 • Sebastian Dziadzio, Vishaal Udandarao, Karsten Roth, Ameya Prabhu, Zeynep Akata, Samuel Albanie, Matthias Bethge
In reality, new tasks and domains emerge progressively over time, requiring strategies to integrate the knowledge of expert models as they become available: a process we call temporal model merging.
no code implementations • 9 Dec 2024 • Adhiraj Ghosh, Sebastian Dziadzio, Ameya Prabhu, Vishaal Udandarao, Samuel Albanie, Matthias Bethge
Overall, we present a technique for open-ended evaluation that aggregates incomplete, heterogeneous sample-level measurements to continually grow a benchmark alongside rapidly developing foundation models.
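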
1 code implementation • 3 Nov 2024 • Matthias Tangemann, Matthias Kümmerer, Matthias Bethge
In this work, we seek to better understand the computational basis for this capability by evaluating a broad range of optical flow models and a neuroscience-inspired motion energy model for zero-shot figure-ground segmentation of random dot stimuli.
1 code implementation • 26 Oct 2024 • Marcel Binz, Elif Akata, Matthias Bethge, Franziska Brändle, Fred Callaway, Julian Coda-Forno, Peter Dayan, Can Demircan, Maria K. Eckstein, Noémi Éltető, Thomas L. Griffiths, Susanne Haridi, Akshay K. Jagadish, Li Ji-An, Alexander Kipnis, Sreejan Kumar, Tobias Ludwig, Marvin Mathony, Marcelo Mattar, Alireza Modirshanechi, Surabhi S. Nath, Joshua C. Peterson, Milena Rmus, Evan M. Russek, Tankred Saanum, Natalia Scharfenberg, Johannes A. Schubert, Luca M. Schulze Buschoff, Nishad Singhi, Xin Sui, Mirko Thalmann, Fabian Theis, Vuong Truong, Vishaal Udandarao, Konstantinos Voudouris, Robert Wilson, Kristin Witte, Shuchen Wu, Dirk Wulff, Huadong Xiong, Eric Schulz
Establishing a unified theory of cognition has been a major goal of psychology.
no code implementations • 10 Oct 2024 • Prasanna Mayilvahanan, Roland S. Zimmermann, Thaddäus Wiedemer, Evgenia Rusak, Attila Juhos, Matthias Bethge, Wieland Brendel
In the ImageNet era of computer vision, evaluation sets for measuring a model's OOD performance were designed to be strictly OOD with respect to style.
no code implementations • 8 Oct 2024 • Fırat Öncel, Matthias Bethge, Beyza Ermis, Mirco Ravanelli, Cem Subakan, Çağatay Yıldız
Further token-level perplexity analysis reveals that the degradation is driven by a handful of tokens that are not informative about the domain.
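Per-token perplexities of the kind described can be inspected with any causal LM. A minimal sketch using Hugging Face transformers ("gpt2" is a placeholder model, not necessarily the one studied in the paper):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("Some domain-specific text.", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits
# Token t is predicted from tokens < t, so shift logits against targets.
nll = F.cross_entropy(logits[0, :-1], ids[0, 1:], reduction="none")
for token, loss in zip(tok.convert_ids_to_tokens(ids[0, 1:].tolist()), nll):
    print(f"{token:>15s}  perplexity = {loss.exp().item():8.1f}")
```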
1 code implementation • 26 Aug 2024 • Karsten Roth, Vishaal Udandarao, Sebastian Dziadzio, Ameya Prabhu, Mehdi Cherti, Oriol Vinyals, Olivier Hénaff, Samuel Albanie, Matthias Bethge, Zeynep Akata
In this work, we complement current perspectives on continual pretraining through a research test bed as well as provide comprehensive guidance for effective continual model updates in such scenarios.
1 code implementation • 10 Jul 2024 • Ori Press, Andreas Hochlehnert, Ameya Prabhu, Vishaal Udandarao, Ofir Press, Matthias Bethge
We pose the following research question: Given a text excerpt referencing a paper, could an LM act as a research assistant to correctly identify the referenced paper?
no code implementations • 13 Jun 2024 • Lukas Thede, Karsten Roth, Olivier J. Hénaff, Matthias Bethge, Zeynep Akata
(2) Indeed, we show that P-RFCL techniques can most often be matched by a simple and lightweight PEFT baseline.
no code implementations • 5 Jun 2024 • Çağlar Hızlı, Çağatay Yıldız, Matthias Bethge, ST John, Pekka Marttinen
This work aims to improve generalization and interpretability of dynamical systems by recovering the underlying lower-dimensional latent states and their time evolutions.
1 code implementation • 8 May 2024 • Ori Press, Ravid Shwartz-Ziv, Yann LeCun, Matthias Bethge
After many steps of optimization, EM makes the model embed test images far away from the embeddings of training images, which results in a degradation of accuracy.
no code implementations • 9 Apr 2024 • Shiven Sinha, Ameya Prabhu, Ponnurangam Kumaraguru, Siddharth Bhat, Matthias Bethge
In this note, we revisit the IMO-AG-30 Challenge introduced with AlphaGeometry, and find that Wu's method is surprisingly strong.
1 code implementation • 4 Apr 2024 • Vishaal Udandarao, Ameya Prabhu, Adhiraj Ghosh, Yash Sharma, Philip H. S. Torr, Adel Bibi, Samuel Albanie, Matthias Bethge
Web-crawled pretraining datasets underlie the impressive "zero-shot" evaluation performance of multimodal models, such as CLIP for classification/retrieval and Stable-Diffusion for image generation.
1 code implementation • 29 Feb 2024 • Ameya Prabhu, Vishaal Udandarao, Philip Torr, Matthias Bethge, Adel Bibi, Samuel Albanie
To address this challenge, we introduce an efficient framework for model evaluation, Sort & Search (S&S), which reuses previously evaluated models by leveraging dynamic programming algorithms to selectively rank and sub-select test samples.
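A deliberately simplified sketch of the two stages (the actual S&S algorithm handles noisy, non-monotone accuracies via dynamic programming; this assumes a clean difficulty ordering):

```python
import numpy as np

def sort_samples(acc_matrix):
    """acc_matrix[i, j] = 1 if model i answered sample j correctly.
    Order samples from easiest to hardest using existing models."""
    difficulty = acc_matrix.mean(axis=0)   # fraction of models that succeed
    return np.argsort(-difficulty)         # easiest first

def search_ability(evaluate, order):
    """Binary-search where a new model starts failing, evaluating only
    O(log n) samples; accuracy is estimated as the failure point."""
    lo, hi = 0, len(order)
    while lo < hi:
        mid = (lo + hi) // 2
        if evaluate(order[mid]):           # model is correct on this sample
            lo = mid + 1
        else:
            hi = mid
    return lo / len(order)                 # estimated accuracy
```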
no code implementations • 27 Feb 2024 • Çağatay Yıldız, Nishaanth Kanna Ravichandran, Prishruit Punia, Matthias Bethge, Beyza Ermis
This paper studies the evolving domain of Continual Learning (CL) in large language models (LLMs), with a focus on developing strategies for efficient and sustainable training.
2 code implementations • 27 Dec 2023 • Sebastian Dziadzio, Çağatay Yıldız, Gido M. van de Ven, Tomasz Trzciński, Tinne Tuytelaars, Matthias Bethge
In a simple setting with direct supervision on the generative factors, we show how learning class-agnostic transformations offers a way to circumvent catastrophic forgetting and improve classification accuracy over time.
1 code implementation • 29 Nov 2023 • Max F. Burg, Thomas Zenkel, Michaela Vystrčilová, Jonathan Oesterle, Larissa Höfling, Konstantin F. Willeke, Jan Lause, Sarah Müller, Paul G. Fahey, Zhiwei Ding, Kelli Restivo, Shashwat Sridhar, Tim Gollisch, Philipp Berens, Andreas S. Tolias, Thomas Euler, Matthias Bethge, Alexander S. Ecker
Thus, for unbiased identification of the functional cell types in retina and visual cortex, new approaches are needed.
1 code implementation • 27 Nov 2023 • Luca M. Schulze Buschoff, Elif Akata, Matthias Bethge, Eric Schulz
A chief goal of artificial intelligence is to build machines that think like people.
no code implementations • 20 Nov 2023 • Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, Alexander Gepperth, Tyler L. Hayes, Eyke Hüllermeier, Christopher Kanan, Dhireesha Kudithipudi, Christoph H. Lampert, Martin Mundt, Razvan Pascanu, Adrian Popescu, Andreas S. Tolias, Joost Van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven
Continual learning is a subfield of machine learning that aims to allow models to learn continuously on new data, accumulating knowledge without forgetting what was learned in the past.
1 code implementation • 14 Oct 2023 • Prasanna Mayilvahanan, Thaddäus Wiedemer, Evgenia Rusak, Matthias Bethge, Wieland Brendel
Foundation models like CLIP are trained on hundreds of millions of samples and effortlessly generalize to new tasks and inputs.
1 code implementation • 12 Oct 2023 • Vishaal Udandarao, Max F. Burg, Samuel Albanie, Matthias Bethge
This finding points to a blind spot in current frontier VLMs: they excel in recognizing semantic content but fail to acquire an understanding of visual data-types through scaling.
1 code implementation • 9 Oct 2023 • Thaddäus Wiedemer, Jack Brady, Alexander Panfilov, Attila Juhos, Matthias Bethge, Wieland Brendel
Learning representations that generalize to novel compositions of known concepts is crucial for bridging the gap between human and machine perception.
1 code implementation • NeurIPS 2023 • Ori Press, Steffen Schneider, Matthias Kümmerer, Matthias Bethge
Test-Time Adaptation (TTA) allows updating pre-trained models to changing data distributions at deployment time.
no code implementations • 26 May 2023 • Elif Akata, Lion Schulz, Julian Coda-Forno, Seong Joon Oh, Matthias Bethge, Eric Schulz
In a large set of two-player, two-strategy games, we find that LLMs are particularly good at games where valuing their own self-interest pays off, like the iterated Prisoner's Dilemma family.
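For context, the game family in question is easy to reproduce. A minimal iterated Prisoner's Dilemma with the standard payoff values (the two strategies shown are illustrative, not the LLM policies studied):

```python
# Per-round payoffs (row player, column player); C = cooperate, D = defect.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strategy_a(hist_a, hist_b), strategy_b(hist_b, hist_a)
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a); hist_b.append(b)
    return score_a, score_b

always_defect = lambda own, other: "D"
tit_for_tat = lambda own, other: other[-1] if other else "C"
print(play(always_defect, tit_for_tat))  # (14, 9): defection pays off here
```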
1 code implementation • NeurIPS 2023 • Ilze Amanda Auzina, Çağatay Yıldız, Sara Magliacane, Matthias Bethge, Efstratios Gavves
Neural ordinary differential equations (NODEs) have proven useful for learning the non-linear dynamics of arbitrary trajectories.
1 code implementation • 29 Dec 2021 • Christina M. Funke, Paul Vicol, Kuan-Chieh Wang, Matthias Kümmerer, Richard Zemel, Matthias Bethge
Exploiting such correlations may increase predictive performance on noisy data; however, correlations are often not robust (e.g., they may change between domains, datasets, or applications), and models that exploit them do not generalize when correlations shift.
1 code implementation • 13 Oct 2021 • Matthias Tangemann, Steffen Schneider, Julius von Kügelgen, Francesco Locatello, Peter Gehler, Thomas Brox, Matthias Kümmerer, Matthias Bethge, Bernhard Schölkopf
Learning generative object models from unlabelled videos is a long-standing problem and is required for causal scene modeling.
no code implementations • 29 Sep 2021 • Dylan M. Paiton, David Schultheiss, Matthias Kuemmerer, Zac Cranko, Matthias Bethge
We characterize the geometry of the decision boundary, which is more curved within the adversarial subspace than within a random subspace of equal dimensionality.
1 code implementation • ICLR 2022 • Lukas Schott, Julius von Kügelgen, Frederik Träuble, Peter Gehler, Chris Russell, Matthias Bethge, Bernhard Schölkopf, Francesco Locatello, Wieland Brendel
An important component for generalization in machine learning is to uncover underlying latent factors of variation as well as the mechanism through which each factor acts in the world.
1 code implementation • NeurIPS 2021 • Roland S. Zimmermann, Judy Borowski, Robert Geirhos, Matthias Bethge, Thomas S. A. Wallis, Wieland Brendel
A precise understanding of why units in an artificial network respond to certain stimuli would constitute a big step towards explainable artificial intelligence.
1 code implementation • NeurIPS 2021 • Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
The longstanding distortion robustness gap between humans and CNNs is closing, with the best models now exceeding human feedforward performance on most of the investigated OOD datasets.
2 code implementations • ICCV 2021 • Akis Linardos, Matthias Kümmerer, Ori Press, Matthias Bethge
Since 2014, transfer learning has been the key driver of improvements in spatial saliency prediction; however, progress has stagnated over the last 3-5 years.
1 code implementation • 27 Apr 2021 • Evgenia Rusak, Steffen Schneider, George Pachitariu, Luisa Eck, Peter Gehler, Oliver Bringmann, Wieland Brendel, Matthias Bethge
We demonstrate that self-learning techniques like entropy minimization and pseudo-labeling are simple and effective at improving performance of a deployed computer vision model under systematic domain shifts.
Ranked #1 on Unsupervised Domain Adaptation on ImageNet-A (using extra training data)
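A minimal sketch of one of the named techniques, entropy minimization, in the TENT style of adapting only the BatchNorm affine parameters (a generic illustration under those assumptions, not the paper's exact recipe):

```python
import torch
import torch.nn as nn

def bn_affine_parameters(model):
    """Collect only the BatchNorm scale/shift parameters for adaptation."""
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            params += [p for p in (m.weight, m.bias) if p is not None]
    return params

def entropy_minimization_step(model, batch, optimizer):
    """One self-learning update: make the model more confident on
    unlabeled test data by minimizing its own prediction entropy."""
    probs = model(batch).softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
```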
no code implementations • ICLR Workshop SSL-RL 2021 • Khushdeep Singh Mann, Steffen Schneider, Alberto Chiappa, Jin Hwa Lee, Matthias Bethge, Alexander Mathis, Mackenzie W Mathis
We investigate the behavior of reinforcement learning (RL) agents under morphological distribution shifts.
Out-of-Distribution Generalization, Reinforcement Learning
no code implementations • 24 Feb 2021 • Matthias Kümmerer, Matthias Bethge
Recent years have seen a surge in models predicting the scanpaths of fixations made by humans when viewing images.
1 code implementation • 17 Feb 2021 • Roland S. Zimmermann, Yash Sharma, Steffen Schneider, Matthias Bethge, Wieland Brendel
Contrastive learning has recently seen tremendous success in self-supervised learning.
Ranked #1 on Disentanglement on KITTI-Masks
no code implementations • ICLR 2021 • Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images (Olah et al., 2017) with a simple baseline visualization, namely exemplary natural images that also strongly activate a specific feature map.
1 code implementation • NeurIPS 2020 • Cornelius Schröder, David Klindt, Sarah Strauss, Katrin Franke, Matthias Bethge, Thomas Euler, Philipp Berens
Here, we present a computational model of temporal processing in the inner retina, including inhibitory feedback circuits and realistic synaptic release mechanisms.
no code implementations • 9 Nov 2020 • Claudio Michaelis, Matthias Bethge, Alexander S. Ecker
We here show that this generalization gap can be nearly closed by increasing the number of object categories used during training.
1 code implementation • 23 Oct 2020 • Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
Even if only a single reference image is given, synthetic images provide less information than natural images ($65\pm5\%$ vs. $73\pm4\%$).
no code implementations • NeurIPS Workshop SVRHM 2020 • Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
In the light of this recent breakthrough, we here compare self-supervised networks to supervised models and human behaviour.
no code implementations • NeurIPS Workshop SVRHM 2020 • Judy Borowski, Roland Simon Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
Using a well-controlled psychophysical paradigm, we compare the informativeness of synthetic images by Olah et al. [45] with a simple baseline visualization, namely natural images that also strongly activate a specific feature map.
1 code implementation • 10 Aug 2020 • Jonas Rauber, Matthias Bethge, Wieland Brendel
EagerPy is a Python framework that lets you write code that automatically works natively with PyTorch, TensorFlow, JAX, and NumPy.
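A short example in the spirit of the EagerPy documentation: the same function runs unchanged on tensors from any of the four frameworks.

```python
import eagerpy as ep
import numpy as np

def l2_norm(x):
    x = ep.astensor(x)                  # wrap the native tensor
    return x.square().sum().sqrt().raw  # compute generically, return native type

print(l2_norm(np.array([3.0, 4.0])))    # 5.0; a torch/tf/jax tensor works identically
```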
1 code implementation • ICLR 2021 • David Klindt, Lukas Schott, Yash Sharma, Ivan Ustyuzhaninov, Wieland Brendel, Matthias Bethge, Dylan Paiton
We construct an unsupervised learning model that achieves nonlinear disentanglement of underlying factors of variation in naturalistic videos.
Ranked #1 on Disentanglement on Natural Sprites
1 code implementation • ICML UDL 2020 • Alexander Mathis, Thomas Biasi, Mert Yüksekgönül, Byron Rogers, Matthias Bethge, Mackenzie Weygandt Mathis
Neural networks are highly effective tools for pose estimation.
1 code implementation • 15 Jul 2020 • Jonas Rauber, Matthias Bethge
When the rescaled perturbation $\eta \vec{\delta}$ is added to a starting point $\vec{x} \in D$ (where $D$ is the data domain, e.g., $D = [0, 1]^n$), the resulting vector $\vec{v} = \vec{x} + \eta \vec{\delta}$ will in general not be in $D$.
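The issue is easy to see numerically. A minimal NumPy illustration of the problem the paper addresses: naively clipping back into $D$ shrinks the perturbation below the requested size.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=1000)   # starting point in D = [0, 1]^n
delta = rng.normal(size=x.shape)
delta /= np.linalg.norm(delta)         # unit-norm perturbation direction
eta = 0.5                              # requested L2 size

v = x + eta * delta                    # generally leaves the data domain
v_clipped = np.clip(v, 0.0, 1.0)       # naive projection back into D
print(np.linalg.norm(v_clipped - x))   # < eta: the perturbation shrank
```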
2 code implementations • NeurIPS 2020 • Steffen Schneider, Evgenia Rusak, Luisa Eck, Oliver Bringmann, Wieland Brendel, Matthias Bethge
With the more robust DeepAugment+AugMix model, we improve the current state of the art for a ResNet50 model from 53.6% mCE to 45.4% mCE.
Ranked #4 on Unsupervised Domain Adaptation on ImageNet-R
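The core covariate-shift correction can be approximated in a few lines: let BatchNorm layers normalize with statistics estimated from the shifted test data instead of the training-time running averages. A simplified sketch (the paper's method additionally combines source and test statistics):

```python
import torch.nn as nn

def use_test_time_bn_statistics(model):
    """Switch BatchNorm layers to batch statistics so normalization
    reflects the corrupted test distribution rather than clean training data."""
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.train()                  # normalize with current-batch statistics
            m.reset_running_stats()    # re-estimate running averages from test data
    return model
```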
1 code implementation • 12 Jun 2020 • Marissa A. Weis, Kashyap Chitta, Yash Sharma, Wieland Brendel, Matthias Bethge, Andreas Geiger, Alexander S. Ecker
Perceiving the world in terms of objects and tracking them through time is a crucial prerequisite for reasoning and scene understanding.
no code implementations • ICLR 2020 • Ivan Ustyuzhaninov, Santiago A. Cadena, Emmanouil Froudarakis, Paul G. Fahey, Edgar Y. Walker, Erick Cobos, Jacob Reimer, Fabian H. Sinz, Andreas S. Tolias, Matthias Bethge, Alexander S. Ecker
Similar to a convolutional neural network (CNN), the mammalian retina encodes visual information into several dozen nonlinear feature maps, each formed by one ganglion cell type that tiles the visual space in an approximately shift-equivariant manner.
no code implementations • 27 Apr 2020 • Julius von Kügelgen, Ivan Ustyuzhaninov, Peter Gehler, Matthias Bethge, Bernhard Schölkopf
Learning how to model complex scenes in a modular way with recombinable components is a prerequisite for higher-order reasoning and acting in the physical world.
1 code implementation • 20 Apr 2020 • Christina M. Funke, Judy Borowski, Karolina Stosio, Wieland Brendel, Thomas S. A. Wallis, Matthias Bethge
In the second case study, we highlight the difference between necessary and sufficient mechanisms in visual reasoning tasks.
2 code implementations • 16 Apr 2020 • Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann
Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence.
3 code implementations • ECCV 2020 • Evgenia Rusak, Lukas Schott, Roland S. Zimmermann, Julian Bitterwolf, Oliver Bringmann, Matthias Bethge, Wieland Brendel
The human visual system is remarkably robust against a wide range of naturally occurring variations and corruptions like rain or snow.
1 code implementation • NeurIPS 2019 • Zhe Li, Wieland Brendel, Edgar Y. Walker, Erick Cobos, Taliah Muhammad, Jacob Reimer, Matthias Bethge, Fabian H. Sinz, Xaq Pitkow, Andreas S. Tolias
We propose to regularize CNNs using large-scale neuroscience data to learn more robust neural features in terms of representational similarity.
1 code implementation • 24 Sep 2019 • Alexander Mathis, Thomas Biasi, Steffen Schneider, Mert Yüksekgönül, Byron Rogers, Matthias Bethge, Mackenzie W. Mathis
Neural networks are highly effective tools for pose estimation.
Ranked #1 on Animal Pose Estimation on Horse-10
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Santiago A. Cadena, Fabian H. Sinz, Taliah Muhammad, Emmanouil Froudarakis, Erick Cobos, Edgar Y. Walker, Jake Reimer, Matthias Bethge, Andreas Tolias, Alexander S. Ecker
Recent work on modeling neural responses in the primate visual system has benefited from deep neural networks trained on large-scale object recognition, and found a hierarchical correspondence between layers of the artificial neural network and brain areas along the ventral visual stream.
4 code implementations • 17 Jul 2019 • Claudio Michaelis, Benjamin Mitzkus, Robert Geirhos, Evgenia Rusak, Oliver Bringmann, Alexander S. Ecker, Matthias Bethge, Wieland Brendel
The ability to detect objects regardless of image distortions or weather conditions is crucial for real-world applications of deep learning like autonomous driving.
Ranked #1 on Robust Object Detection on MS COCO
1 code implementation • NeurIPS 2019 • Wieland Brendel, Jonas Rauber, Matthias Kümmerer, Ivan Ustyuzhaninov, Matthias Bethge
We here develop a new set of gradient-based adversarial attacks which (a) are more reliable in the face of gradient-masking than other gradient-based attacks, (b) perform better and are more query efficient than current state-of-the-art gradient-based attacks, (c) can be flexibly adapted to a wide range of adversarial criteria and (d) require virtually no hyperparameter tuning.
4 code implementations • ICLR 2019 • Wieland Brendel, Matthias Bethge
Deep Neural Networks (DNNs) excel on many complex perceptual tasks but it has proven notoriously difficult to understand how they reach their decisions.
7 code implementations • ICLR 2019 • Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning increasingly complex representations of object shapes.
Ranked #1 on Out-of-Distribution Generalization on ImageNet-W
3 code implementations • 28 Nov 2018 • Claudio Michaelis, Ivan Ustyuzhaninov, Matthias Bethge, Alexander S. Ecker
We demonstrate empirical results on MS COCO highlighting challenges of the one-shot setting: while transferring knowledge about instance segmentation to novel object categories works very well, targeting the detection network towards the reference category appears to be more difficult.
Ranked #1 on One-Shot Instance Segmentation on MS COCO
no code implementations • ICLR 2019 • Jörn-Henrik Jacobsen, Jens Behrmann, Richard Zemel, Matthias Bethge
Despite their impressive performance, deep neural networks exhibit striking failures on out-of-distribution inputs.
1 code implementation • ICLR 2019 • Alexander S. Ecker, Fabian H. Sinz, Emmanouil Froudarakis, Paul G. Fahey, Santiago A. Cadena, Edgar Y. Walker, Erick Cobos, Jacob Reimer, Andreas S. Tolias, Matthias Bethge
We present a framework to identify common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations.
2 code implementations • NeurIPS 2018 • Robert Geirhos, Carlos R. Medina Temme, Jonas Rauber, Heiko H. Schütt, Matthias Bethge, Felix A. Wichmann
We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations.
2 code implementations • 6 Aug 2018 • Wieland Brendel, Jonas Rauber, Alexey Kurakin, Nicolas Papernot, Behar Veliqi, Marcel Salathé, Sharada P. Mohanty, Matthias Bethge
The NIPS 2018 Adversarial Vision Challenge is a competition to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks.
1 code implementation • ECCV 2018 • Santiago A. Cadena, Marissa A. Weis, Leon A. Gatys, Matthias Bethge, Alexander S. Ecker
Here we propose a method to discover invariances in the responses of hidden layer units of deep neural networks.
4 code implementations • 7 Jul 2018 • Ivan Ustyuzhaninov, Claudio Michaelis, Wieland Brendel, Matthias Bethge
We introduce one-shot texture segmentation: the task of segmenting an input image containing multiple textures given a patch of a reference texture.
3 code implementations • ICLR 2019 • Lukas Schott, Jonas Rauber, Matthias Bethge, Wieland Brendel
Despite much effort, deep neural networks remain highly susceptible to tiny input perturbations; even for MNIST, one of the most common toy datasets in computer vision, no neural network model exists for which adversarial perturbations are large and make semantic sense to humans.
1 code implementation • 9 Apr 2018 • Alexander Mathis, Pranav Mamidanna, Taiga Abe, Kevin M. Cury, Venkatesh N. Murthy, Mackenzie W. Mathis, Matthias Bethge
Quantifying behavior is crucial for many applications in neuroscience.
1 code implementation • ICML 2018 • Claudio Michaelis, Matthias Bethge, Alexander S. Ecker
We tackle the problem of one-shot segmentation: finding and segmenting a previously unseen object in a cluttered scene based on a single instruction example.
Ranked #1 on One-Shot Segmentation on Cluttered Omniglot
1 code implementation • 23 Mar 2018 • Alexander Böttcher, Wieland Brendel, Bernhard Englitz, Matthias Bethge
An important preprocessing step in most data analysis pipelines aims to extract a small set of sources that explain most of the data.
no code implementations • 18 Dec 2017 • Leon A. Gatys, Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge
Thus, manipulating fixation patterns to guide human attention is an exciting challenge in digital image processing.
6 code implementations • ICLR 2018 • Wieland Brendel, Jonas Rauber, Matthias Bethge
Such decision-based attacks are (1) applicable to real-world black-box models such as autonomous cars, (2) need less knowledge and are easier to apply than transfer-based attacks and (3) are more robust to simple defences than gradient- or score-based attacks.
1 code implementation • NeurIPS 2017 • David Klindt, Alexander S. Ecker, Thomas Euler, Matthias Bethge
Traditional methods for neural system identification do not capitalize on this separation of “what” and “where”.
no code implementations • NeurIPS 2017 • David A. Klindt, Alexander S. Ecker, Thomas Euler, Matthias Bethge
Our network scales well to thousands of neurons and short recordings and can be trained end-to-end.
no code implementations • ICCV 2017 • Matthias Kümmerer, Thomas S. A. Wallis, Leon A. Gatys, Matthias Bethge
This model achieves better performance than all models not using features pre-trained on object recognition, making it a strong baseline to assess the utility of high-level features.
7 code implementations • 13 Jul 2017 • Jonas Rauber, Wieland Brendel, Matthias Bethge
Foolbox is a new Python package to generate such adversarial perturbations and to quantify and compare the robustness of machine learning models.
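Typical usage, assuming Foolbox 3's PyTorch interface (the model choice and epsilon below are placeholders):

```python
import torchvision.models as models
import foolbox as fb

model = models.resnet18(pretrained=True).eval()
preprocessing = dict(mean=[0.485, 0.456, 0.406],
                     std=[0.229, 0.224, 0.225], axis=-3)
fmodel = fb.PyTorchModel(model, bounds=(0, 1), preprocessing=preprocessing)

images, labels = fb.utils.samples(fmodel, dataset="imagenet", batchsize=8)
attack = fb.attacks.LinfPGD()
raw, clipped, is_adv = attack(fmodel, images, labels, epsilons=8 / 255)
print("robust accuracy:", 1 - is_adv.float().mean().item())
```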
1 code implementation • 21 Jun 2017 • Robert Geirhos, David H. J. Janssen, Heiko H. Schütt, Jonas Rauber, Matthias Bethge, Felix A. Wichmann
In addition, we find progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker, indicating that there may still be marked differences in the way humans and current DNNs perform visual object recognition.
no code implementations • ECCV 2018 • Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge
Here we show that no single saliency map can perform well under all metrics.
no code implementations • 5 Apr 2017 • Wieland Brendel, Matthias Bethge
A recent paper suggests that Deep Neural Networks can be protected from gradient-based adversarial perturbations by driving the network activations into a highly saturated regime.
no code implementations • 22 Feb 2017 • Christina M. Funke, Leon A. Gatys, Alexander S. Ecker, Matthias Bethge
Here we present a parametric model for dynamic textures.
6 code implementations • CVPR 2017 • Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, Aaron Hertzmann, Eli Shechtman
Neural Style Transfer has shown very exciting results enabling new forms of image manipulation.
no code implementations • 5 Oct 2016 • Matthias Kümmerer, Thomas S. A. Wallis, Matthias Bethge
Here we present DeepGaze II, a model that predicts where people look in images.
7 code implementations • 19 Jun 2016 • Leon A. Gatys, Matthias Bethge, Aaron Hertzmann, Eli Shechtman
This note presents an extension to the neural artistic style transfer algorithm (Gatys et al.).
2 code implementations • CVPR 2016 • Leon A. Gatys, Alexander S. Ecker, Matthias Bethge
Rendering the semantic content of an image in different styles is a difficult image processing task.
no code implementations • 31 May 2016 • Ivan Ustyuzhaninov, Wieland Brendel, Leon A. Gatys, Matthias Bethge
The current state of the art in parametric texture synthesis relies on the multi-layer feature space of deep CNNs that were trained on natural images.
1 code implementation • 29 Feb 2016 • Marcel Nonnenmacher, Christian Behrens, Philipp Berens, Matthias Bethge, Jakob H. Macke
Support for this notion has come from a series of studies which identified statistical signatures of criticality in the ensemble activity of retinal ganglion cells.
Neurons and Cognition
1 code implementation • 5 Nov 2015 • Lucas Theis, Aäron van den Oord, Matthias Bethge
In particular, we show that three of the currently most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional.
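One of the named criteria, the Parzen window estimate, can be written compactly. A minimal sketch (the bandwidth `sigma` is a free parameter, which is one source of the criterion's unreliability):

```python
import numpy as np
from scipy.special import logsumexp

def parzen_log_likelihood(model_samples, test_points, sigma):
    """Average log-likelihood of test_points under a Gaussian Parzen
    window estimate centered on samples drawn from the model."""
    m, d = model_samples.shape
    sq_dists = ((test_points[:, None, :] - model_samples[None, :, :]) ** 2).sum(-1)
    log_kernel = -0.5 * sq_dists / sigma**2 - 0.5 * d * np.log(2 * np.pi * sigma**2)
    return (logsumexp(log_kernel, axis=1) - np.log(m)).mean()
```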
283 code implementations • 26 Aug 2015 • Leon A. Gatys, Alexander S. Ecker, Matthias Bethge
In fine art, especially painting, humans have mastered the skill to create unique visual experiences through composing a complex interplay between the content and style of an image.
no code implementations • NeurIPS 2015 • Lucas Theis, Matthias Bethge
Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels.
no code implementations • 28 May 2015 • Niklas Ludtke, Debapriya Das, Lucas Theis, Matthias Bethge
In order to model this variability, we first applied the parametric texture algorithm of Portilla and Simoncelli to 64×64-pixel image patches from a large database of natural images, so that each patch is described by 655 texture parameters specifying statistics such as variances and covariances of wavelet coefficients or coefficient magnitudes within that patch.
16 code implementations • NeurIPS 2015 • Leon A. Gatys, Alexander S. Ecker, Matthias Bethge
Here we introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition.
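The texture statistic at the heart of this line of work is the Gram matrix of CNN feature maps; matching these statistics across several layers of an object-recognition network (VGG in the paper) by gradient descent on the image synthesizes a new texture. A minimal sketch (normalization conventions vary between implementations):

```python
import torch

def gram_matrix(features):
    """Spatially averaged channel-by-channel correlations of one
    CNN feature map of shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.t()) / (h * w)
```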
no code implementations • 28 Feb 2015 • Lucas Theis, Philipp Berens, Emmanouil Froudarakis, Jacob Reimer, Miroslav Román Rosón, Tom Baden, Thomas Euler, Andreas Tolias, Matthias Bethge
A fundamental challenge in calcium imaging has been to infer the timing of action potentials from the measured noisy calcium fluorescence traces.
1 code implementation • 4 Nov 2014 • Matthias Kümmerer, Lucas Theis, Matthias Bethge
Recent results suggest that state-of-the-art saliency models perform far from optimal in predicting fixations.
no code implementations • 17 Oct 2014 • Reshad Hosseini, Suvrit Sra, Lucas Theis, Matthias Bethge
We study modeling and inference with the Elliptical Gamma Distribution (EGD).
no code implementations • 26 Sep 2014 • Matthias Kümmerer, Thomas Wallis, Matthias Bethge
Within the set of the many complex factors driving gaze placement, the properties of an image that are associated with fixations under free viewing conditions have been studied extensively.
no code implementations • NeurIPS 2012 • Lucas Theis, Jascha Sohl-Dickstein, Matthias Bethge
We present a new learning strategy based on an efficient blocked Gibbs sampler for sparse overcomplete linear models.
no code implementations • NeurIPS 2010 • Ralf Haefner, Matthias Bethge
We characterize the response distribution for the binocular energy model in response to random dot stereograms and find it to be very different from the Poisson-like noise usually assumed.
no code implementations • NeurIPS 2009 • Philipp Berens, Sebastian Gerwinn, Alexander Ecker, Matthias Bethge
In this way, we provide a new rigorous framework for assessing the functional consequences of noise correlation structures for the representational accuracy of neural population codes, one that is particularly applicable to short-time population coding.
no code implementations • NeurIPS 2009 • Sebastian Gerwinn, Leonard White, Matthias Kaschube, Matthias Bethge, Jakob H. Macke
Imaging techniques such as optical imaging of intrinsic signals, 2-photon calcium imaging and voltage sensitive dye imaging can be used to measure the functional organization of visual cortex across different spatial scales.
no code implementations • NeurIPS 2009 • Sebastian Gerwinn, Philipp Berens, Matthias Bethge
Second-order maximum-entropy models have recently gained much interest for describing the statistics of binary spike trains.
no code implementations • NeurIPS 2009 • Matthias Bethge, Eero P. Simoncelli, Fabian H. Sinz
We introduce a new family of distributions, called $L_p$-nested symmetric distributions, whose densities access the data exclusively through a hierarchical cascade of $L_p$-norms.
no code implementations • NeurIPS 2008 • Fabian H. Sinz, Matthias Bethge
Bandpass filtering, orientation selectivity, and contrast gain control are prominent features of sensory coding at the level of V1 simple cells.
no code implementations • NeurIPS 2007 • Guenther Zeck, Matthias Bethge, Jakob H. Macke
Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons?