no code implementations • 14 Feb 2024 • Lukas S. Huber, Fred W. Mast, Felix A. Wichmann
Recent research has seen many behavioral comparisons between humans and deep neural networks (DNNs) in the domain of image classification.
no code implementations • 8 Dec 2023 • Felix A. Wichmann, Simon Kornblith, Robert Geirhos
Neither the hype exemplified in some exaggerated claims about deep neural networks (DNNs), nor the gloom expressed by Bowers et al., does justice to DNNs as models in vision science: DNNs rapidly evolve, and today's limitations are often tomorrow's successes.
no code implementations • 26 May 2023 • Felix A. Wichmann, Robert Geirhos
Deep neural networks (DNNs) are machine learning algorithms that have revolutionised computer vision due to their remarkable successes in tasks like object classification and segmentation.
1 code implementation • 20 May 2022 • Lukas S. Huber, Robert Geirhos, Felix A. Wichmann
Unlike adult humans, whose object recognition performance is robust against a wide range of image distortions, DNNs trained on standard ImageNet (1.3M images) perform poorly on distorted images.
1 code implementation • 12 Oct 2021 • Kristof Meding, Luca M. Schulze Buschoff, Robert Geirhos, Felix A. Wichmann
We find that the ImageNet validation set, among others, suffers from dichotomous data difficulty (DDD): For the range of investigated models and their accuracies, it is dominated by 46.0% "trivial" and 11.5% "impossible" images (beyond label errors).
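The "trivial"/"impossible" split described above is straightforward to compute once each model's per-image correctness is known. The sketch below is illustrative only (the function name and interface are my own, not from the paper): given a boolean matrix of model-by-image correctness, an image is "trivial" if every investigated model classifies it correctly and "impossible" if every model gets it wrong.

```python
import numpy as np

def difficulty_fractions(correct):
    """Illustrative sketch of dichotomous data difficulty (DDD).

    correct: (n_models, n_images) boolean array, True where a model
    classified that image correctly.

    Returns (trivial, impossible): the fraction of images every model
    gets right, and the fraction every model gets wrong.
    """
    correct = np.asarray(correct, dtype=bool)
    trivial = correct.all(axis=0).mean()        # all models correct
    impossible = (~correct).all(axis=0).mean()  # all models wrong
    return trivial, impossible
```

Images in neither group are the "in-between" ones that actually differentiate models of a given accuracy range.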
no code implementations • NeurIPS Workshop SVRHM 2021 • Lukas Sebastian Huber, Robert Geirhos, Felix A. Wichmann
Recent gains in model robustness towards out-of-distribution images are predominantly achieved through ever-larger training datasets.
no code implementations • ICLR 2022 • Kristof Meding, Luca M. Schulze Buschoff, Robert Geirhos, Felix A. Wichmann
We find that the ImageNet validation set, among others, suffers from dichotomous data difficulty (DDD): For the range of investigated models and their accuracies, it is dominated by 46.0% "trivial" and 11.5% "impossible" images (beyond label errors).
no code implementations • NeurIPS Workshop ImageNet_PPF 2021 • Kristof Meding, Luca M. Schulze Buschoff, Robert Geirhos, Felix A. Wichmann
We find that the ImageNet validation set suffers from dichotomous data difficulty (DDD): For the range of investigated models and their accuracies, it is dominated by 46.3% "trivial" and 11.3% "impossible" images.
1 code implementation • NeurIPS 2021 • Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
The longstanding distortion robustness gap between humans and CNNs is closing, with the best models now exceeding human feedforward performance on most of the investigated OOD datasets.
1 code implementation • 28 Dec 2020 • Alban Flachot, Arash Akbarinia, Heiko H. Schütt, Roland W. Fleming, Felix A. Wichmann, Karl R. Gegenfurtner
High levels of color constancy were achieved with different DNN architectures.
no code implementations • 11 Dec 2020 • Jose Juan Esteve-Taboada, Guillermo Aguilar, Marianne Maertens, Felix A. Wichmann, Jesus Malo
Moreover, it suggests that the use of external noise in the experiments may be helpful as an extra reference.
no code implementations • NeurIPS Workshop SVRHM 2020 • Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
In the light of this recent breakthrough, we here compare self-supervised networks to supervised models and human behaviour.
1 code implementation • NeurIPS 2020 • Robert Geirhos, Kristof Meding, Felix A. Wichmann
Here we introduce trial-by-trial error consistency, a quantitative analysis for measuring whether two decision-making systems systematically make errors on the same inputs.
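Error consistency is a Cohen's-kappa-style measure: observed trial-by-trial agreement between two observers is compared against the agreement expected by chance from their accuracies alone. The sketch below is a minimal, hedged rendering of that idea (the function name and exact interface are my own), assuming binary correct/incorrect responses on a shared set of trials.

```python
import numpy as np

def error_consistency(a, b):
    """Illustrative kappa-style error consistency between two observers.

    a, b: boolean arrays over the same trials, True where the
    observer responded correctly.
    """
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    c_obs = np.mean(a == b)                      # observed agreement
    p_a, p_b = a.mean(), b.mean()                # per-observer accuracies
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)    # agreement expected by chance
    if c_exp == 1.0:
        # Undefined when both observers are perfect (or both always wrong).
        return float("nan")
    return (c_obs - c_exp) / (1 - c_exp)
```

A value of 1 means the two systems err on exactly the same trials, 0 means their agreement is no better than chance given their accuracies, and negative values mean systematically disjoint errors.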
2 code implementations • 16 Apr 2020 • Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann
Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence.
no code implementations • NeurIPS 2019 • Kristof Meding, Dominik Janzing, Bernhard Schölkopf, Felix A. Wichmann
We employ a so-called frozen noise paradigm enabling us to compare human performance with four different algorithms on a trial-by-trial basis: A causal inference algorithm exploiting the dependence structure of additive noise terms, a neurally inspired network, a Bayesian ideal observer model as well as a simple heuristic.
7 code implementations • ICLR 2019 • Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel
Convolutional Neural Networks (CNNs) are commonly thought to recognise objects by learning increasingly complex representations of object shapes.
Ranked #1 on Out-of-Distribution Generalization on ImageNet-W
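This paper's shape-vs-texture analysis rests on cue-conflict images (e.g. a cat shape with elephant texture), where an observer's decision can be scored as following the shape cue, the texture cue, or neither. A common summary, sketched here illustratively (the function name is my own), is the fraction of shape decisions among all shape-or-texture decisions:

```python
def shape_bias(decisions):
    """Illustrative shape-bias summary for cue-conflict trials.

    decisions: list of labels per trial, 'shape' if the response matched
    the shape cue, 'texture' if it matched the texture cue, anything
    else otherwise.
    """
    shape = decisions.count('shape')
    texture = decisions.count('texture')
    # Trials matching neither cue are excluded from the denominator.
    return shape / (shape + texture)
```

A value near 1 indicates shape-driven recognition (typical of humans), a value near 0 texture-driven recognition.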
2 code implementations • NeurIPS 2018 • Robert Geirhos, Carlos R. Medina Temme, Jonas Rauber, Heiko H. Schütt, Matthias Bethge, Felix A. Wichmann
We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations.
1 code implementation • 21 Jun 2017 • Robert Geirhos, David H. J. Janssen, Heiko H. Schütt, Jonas Rauber, Matthias Bethge, Felix A. Wichmann
In addition, we find progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker, indicating that there may still be marked differences in the way humans and current DNNs perform visual object recognition.