Search Results for author: Felix A. Wichmann

Found 18 papers, 9 papers with code

Immediate generalisation in humans but a generalisation lag in deep neural networks -- evidence for representational divergence?

no code implementations 14 Feb 2024 Lukas S. Huber, Fred W. Mast, Felix A. Wichmann

Recent research has seen many behavioral comparisons between humans and deep neural networks (DNNs) in the domain of image classification.

Image Classification

Neither hype nor gloom do DNNs justice

no code implementations 8 Dec 2023 Felix A. Wichmann, Simon Kornblith, Robert Geirhos

Neither the hype exemplified in some exaggerated claims about deep neural networks (DNNs), nor the gloom expressed by Bowers et al. do DNNs as models in vision science justice: DNNs rapidly evolve, and today's limitations are often tomorrow's successes.

Are Deep Neural Networks Adequate Behavioural Models of Human Visual Perception?

no code implementations 26 May 2023 Felix A. Wichmann, Robert Geirhos

Deep neural networks (DNNs) are machine learning algorithms that have revolutionised computer vision due to their remarkable successes in tasks like object classification and segmentation.

Object · Object Recognition

The developmental trajectory of object recognition robustness: children are like small adults but unlike big deep neural networks

1 code implementation 20 May 2022 Lukas S. Huber, Robert Geirhos, Felix A. Wichmann

Unlike adults, whose object recognition performance is robust against a wide range of image distortions, DNNs trained on standard ImageNet (1.3M images) perform poorly on distorted images.

Object · Object Recognition +1

Trivial or impossible -- dichotomous data difficulty masks model differences (on ImageNet and beyond)

1 code implementation 12 Oct 2021 Kristof Meding, Luca M. Schulze Buschoff, Robert Geirhos, Felix A. Wichmann

We find that the ImageNet validation set, among others, suffers from dichotomous data difficulty (DDD): For the range of investigated models and their accuracies, it is dominated by 46.0% "trivial" and 11.5% "impossible" images (beyond label errors).

Inductive Bias
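
A minimal sketch of how such fractions could be computed, assuming "trivial" means an image that every investigated model classifies correctly and "impossible" means one that none does; the correctness matrix here is placeholder data, and the paper's exact criteria may differ:

```python
import numpy as np

# Hypothetical correctness matrix: correct[m, i] is True if model m
# classifies validation image i correctly (n_models x n_images).
rng = np.random.default_rng(0)
correct = rng.random((20, 1000)) < 0.75  # placeholder data

# Fraction of models that get each image right.
per_image_accuracy = correct.mean(axis=0)

# "Trivial": solved by every investigated model; "impossible": by none.
trivial = (per_image_accuracy == 1.0).mean()
impossible = (per_image_accuracy == 0.0).mean()

print(f"trivial: {trivial:.1%}, impossible: {impossible:.1%}")
```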

Trivial or Impossible -- dichotomous data difficulty masks model differences (on ImageNet and beyond)

no code implementations ICLR 2022 Kristof Meding, Luca M. Schulze Buschoff, Robert Geirhos, Felix A. Wichmann

We find that the ImageNet validation set, among others, suffers from dichotomous data difficulty (DDD): For the range of investigated models and their accuracies, it is dominated by 46.0% "trivial" and 11.5% "impossible" images (beyond label errors).

Inductive Bias

ImageNet suffers from dichotomous data difficulty

no code implementations NeurIPS Workshop ImageNet_PPF 2021 Kristof Meding, Luca M. Schulze Buschoff, Robert Geirhos, Felix A. Wichmann

We find that the ImageNet validation set suffers from dichotomous data difficulty (DDD): For the range of investigated models and their accuracies, it is dominated by 46.3% "trivial" and 11.3% "impossible" images.

Inductive Bias

Partial success in closing the gap between human and machine vision

1 code implementation NeurIPS 2021 Robert Geirhos, Kantharaju Narayanappa, Benjamin Mitzkus, Tizian Thieringer, Matthias Bethge, Felix A. Wichmann, Wieland Brendel

The longstanding distortion robustness gap between humans and CNNs is closing, with the best models now exceeding human feedforward performance on most of the investigated OOD datasets.

Image Classification

Psychophysical Estimation of Early and Late Noise

no code implementations 11 Dec 2020 Jose Juan Esteve-Taboada, Guillermo Aguilar, Marianne Maertens, Felix A. Wichmann, Jesus Malo

Moreover, it suggests that the use of external noise in the experiments may be helpful as an extra reference.
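
The early-versus-late distinction refers to internal noise entering before or after the transduction stage of a psychophysical observer model; external noise of known variance added to the stimulus provides an extra anchor for separating the two contributions. A highly simplified linear-observer sketch, omitting any nonlinear stage a full model might include (all parameter values are placeholders, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def percent_correct(signal, sigma_early, sigma_late, sigma_ext, trials=20_000):
    """Toy 2AFC observer: external noise perturbs the stimulus, early noise
    enters before and late noise after the (here omitted) transduction stage."""
    def response(mean):
        stim = mean + rng.normal(0, sigma_ext, trials)        # external noise
        internal = stim + rng.normal(0, sigma_early, trials)  # early noise
        return internal + rng.normal(0, sigma_late, trials)   # late noise
    return np.mean(response(signal) > response(0.0))

# Sweeping the known external noise changes performance in a way that helps
# disentangle the early and late internal noise contributions.
for sigma_ext in (0.0, 0.5, 1.0):
    print(f"sigma_ext={sigma_ext}: {percent_correct(1.0, 0.3, 0.3, sigma_ext):.3f}")
```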

Beyond accuracy: quantifying trial-by-trial behaviour of CNNs and humans by measuring error consistency

1 code implementation NeurIPS 2020 Robert Geirhos, Kristof Meding, Felix A. Wichmann

Here we introduce trial-by-trial error consistency, a quantitative analysis for measuring whether two decision making systems systematically make errors on the same inputs.

Decision Making · Object Recognition
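
Error consistency is Cohen's kappa computed on binary correctness vectors: observed agreement (both systems right or both wrong on a trial) is compared against the agreement expected by chance given the two systems' accuracies. A minimal sketch; the helper and toy vectors below are illustrative, not taken from the paper's code:

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Cohen's kappa on binary correctness vectors from the same trials."""
    correct_a = np.asarray(correct_a, dtype=bool)
    correct_b = np.asarray(correct_b, dtype=bool)
    # Observed consistency: both correct or both wrong on a trial.
    c_obs = np.mean(correct_a == correct_b)
    # Consistency expected by chance from the two accuracies alone.
    p_a, p_b = correct_a.mean(), correct_b.mean()
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (c_obs - c_exp) / (1 - c_exp)

# Toy data: two observers with similar accuracy and overlapping errors.
a = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1], dtype=bool)
b = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1], dtype=bool)
print(error_consistency(a, b))
```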

Shortcut Learning in Deep Neural Networks

2 code implementations 16 Apr 2020 Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, Felix A. Wichmann

Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence.

Benchmarking
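
Shortcut learning describes models latching onto decision rules that work on the training data but fail to transfer. A toy sketch of the phenomenon, with invented feature names and synthetic data rather than anything from the paper: a classifier given a weakly predictive "shape" cue and a spuriously near-perfect "background" cue learns the shortcut, then collapses when the shortcut is decorrelated at test time.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)

# Intended cue: weakly predictive of the label.
shape = y + rng.normal(0, 1.0, n)
# Shortcut cue: spuriously near-perfect during training
# (think: object class correlates with background texture).
background = y + rng.normal(0, 0.1, n)

clf = LogisticRegression().fit(np.c_[shape, background], y)
print("train accuracy:", clf.score(np.c_[shape, background], y))

# At test time the shortcut no longer carries label information,
# and performance collapses to what the weak cue alone supports.
y_test = rng.integers(0, 2, n)
shape_test = y_test + rng.normal(0, 1.0, n)
background_test = rng.normal(0.5, 0.1, n)
print("test accuracy:", clf.score(np.c_[shape_test, background_test], y_test))
```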

Perceiving the arrow of time in autoregressive motion

no code implementations NeurIPS 2019 Kristof Meding, Dominik Janzing, Bernhard Schölkopf, Felix A. Wichmann

We employ a so-called frozen noise paradigm enabling us to compare human performance with four different algorithms on a trial-by-trial basis: A causal inference algorithm exploiting the dependence structure of additive noise terms, a neurally inspired network, a Bayesian ideal observer model as well as a simple heuristic.

Causal Inference · Time Series Analysis
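
The causal inference algorithm mentioned exploits the fact that, for non-Gaussian additive noise, regression residuals are independent of the past only in the true temporal direction. A rough sketch of that idea under those assumptions; the `dependence_score` proxy below is a crude stand-in for a proper independence test and not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) process with non-Gaussian (uniform) innovations:
# x[t] = a * x[t-1] + noise[t].  Only in the true temporal direction is
# there an additive-noise model whose residuals are independent of the past.
a, n = 0.9, 100_000
noise = rng.uniform(-1.0, 1.0, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + noise[t]

def dependence_score(regressor, target):
    """Least-squares fit of target on regressor, then a crude dependence
    proxy between residuals and regressor: |corr(residual^2, regressor^2)|.
    A real analysis would use a proper statistical independence test."""
    slope = np.cov(regressor, target)[0, 1] / np.var(regressor)
    resid = target - slope * regressor
    return abs(np.corrcoef(resid**2, regressor**2)[0, 1])

print("forward :", dependence_score(x[:-1], x[1:]))   # true direction: near 0
print("backward:", dependence_score(x[1:], x[:-1]))   # reversed: clearly larger
```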

Generalisation in humans and deep neural networks

2 code implementations NeurIPS 2018 Robert Geirhos, Carlos R. Medina Temme, Jonas Rauber, Heiko H. Schütt, Matthias Bethge, Felix A. Wichmann

We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations.

Object Recognition
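
A minimal sketch of how parametric degradations of this kind can be generated for such comparisons; the two degradations shown (contrast reduction, additive uniform noise) are merely examples, and the parameter values are placeholders rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def reduce_contrast(img, c):
    """Blend toward mid-grey: c=1 leaves the image intact, c=0 removes all contrast."""
    return c * img + (1 - c) * 0.5

def add_uniform_noise(img, width):
    """Add uniform pixel noise of the given width and clip back to [0, 1]."""
    return np.clip(img + rng.uniform(-width / 2, width / 2, img.shape), 0.0, 1.0)

# Placeholder greyscale image in [0, 1]; in practice, ImageNet-style stimuli.
img = rng.random((224, 224))

# Sweep the signal-strength parameter to produce increasingly degraded stimuli;
# human and DNN accuracies would then be measured at each level.
for c in (1.0, 0.5, 0.1):
    degraded = add_uniform_noise(reduce_contrast(img, c), width=0.35)
    print(f"contrast={c}: pixel std={degraded.std():.3f}")
```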

Comparing deep neural networks against humans: object recognition when the signal gets weaker

1 code implementation 21 Jun 2017 Robert Geirhos, David H. J. Janssen, Heiko H. Schütt, Jonas Rauber, Matthias Bethge, Felix A. Wichmann

In addition, we find progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker, indicating that there may still be marked differences in the way humans and current DNNs perform visual object recognition.

General Classification · Object +1
