Search Results for author: Jeffrey Bowers

Found 7 papers, 1 paper with code

Convolutional Neural Networks Trained to Identify Words Provide a Surprisingly Good Account of Visual Form Priming Effects

no code implementations • 8 Feb 2023 • Dong Yin, Valerio Biscione, Jeffrey Bowers

A wide variety of orthographic coding schemes and models of visual word identification have been developed to account for masked priming data that provide a measure of orthographic similarity between letter strings.

Object Recognition

The role of Disentanglement in Generalisation

1 code implementation • ICLR 2021 • Milton Llera Montero, Casimir JH Ludwig, Rui Ponte Costa, Gaurav Malhotra, Jeffrey Bowers

It is claimed that such representations should be able to capture the compositional structure of the world which can then be combined to produce novel representations.

Disentanglement • Out-of-Distribution Generalization • +1

A case for robust translation tolerance in humans and CNNs. A commentary on Han et al.

no code implementations • 10 Dec 2020 • Ryan Blything, Valerio Biscione, Jeffrey Bowers

Han et al. (2020) reported a behavioral experiment that assessed the extent to which the human visual system can identify novel images at unseen retinal locations (what the authors call "intrinsic translation invariance") and developed a novel convolutional neural network model (an Eccentricity Dependent Network or ENN) to capture key aspects of the behavioral results.

Translation

Priorless Recurrent Networks Learn Curiously

no code implementations • COLING 2020 • Jeff Mitchell, Jeffrey Bowers

Recently, domain-general recurrent neural networks, without explicit linguistic inductive biases, have been shown to successfully reproduce a range of human language behaviours, such as accurately predicting number agreement between nouns and verbs.

Language Acquisition • Sentence

Learning Translation Invariance in CNNs

no code implementations • NeurIPS Workshop SVRHM 2020 • Valerio Biscione, Jeffrey Bowers

In this work we show how, even though CNNs are not 'architecturally invariant' to translation, they can indeed 'learn' to be invariant to translation.

Translation

What a difference a pixel makes: An empirical examination of features used by CNNs for categorisation

no code implementations • ICLR 2019 • Gaurav Malhotra, Jeffrey Bowers

Convolutional neural networks (CNNs) were inspired by human vision and, in some settings, achieve a performance comparable to human object recognition.

Object Recognition

Training neural networks to encode symbols enables combinatorial generalization

no code implementations • 29 Mar 2019 • Ivan Vankov, Jeffrey Bowers

Combinatorial generalization - the ability to understand and produce novel combinations of already familiar elements - is considered to be a core capacity of the human mind and a major challenge to neural network models.
