A case for robust translation tolerance in humans and CNNs. A commentary on Han et al

10 Dec 2020 · Ryan Blything, Valerio Biscione, Jeffrey Bowers

Han et al. (2020) reported a behavioral experiment that assessed the extent to which the human visual system can identify novel images at unseen retinal locations (what the authors call "intrinsic translation invariance") and developed a novel convolutional neural network model (an Eccentricity Dependent Network, or ENN) to capture key aspects of the behavioral results. Here we show that their analysis of the behavioral data used inappropriate baseline conditions, leading them to underestimate intrinsic translation invariance. When the data are correctly interpreted, they show near-complete translation tolerance extending to 14° in some conditions, consistent with earlier work (Bowers et al., 2016) and more recent work by Blything et al. (in press). We describe a simpler model that provides a better account of translation invariance.
