Search Results for author: Paul Tupper

Found 4 papers, 1 paper with code

Generalizing Outside the Training Set: When Can Neural Networks Learn Identity Effects?

1 code implementation • 9 May 2020 • Simone Brugiapaglia, Matthew Liu, Paul Tupper

Finally, we demonstrate our theory with computational experiments in which we explore the effect of different input encodings on the ability of algorithms to generalize to novel inputs.
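As a minimal, hypothetical sketch of the kind of effect the abstract describes (not the paper's actual experiments), the snippet below trains a linear classifier on an identity-effect task: an input of two symbols is well-formed exactly when the symbols match. With a localist (one-hot outer-product) encoding, the model masters the rule on seen symbols but is at exact chance on a symbol absent from training, because the feature dimensions for the novel symbol were never updated.

```python
import numpy as np

n_seen, n_symbols = 5, 6          # symbols 0..4 appear in training; symbol 5 is novel

def encode(pair):
    """Encode a two-symbol input as the flattened outer product of one-hots."""
    a = np.eye(n_symbols)[pair[0]]
    b = np.eye(n_symbols)[pair[1]]
    return np.outer(a, b).ravel()

# Training data: all pairs over the seen symbols, labeled 1 iff identical.
train_pairs = [(i, j) for i in range(n_seen) for j in range(n_seen)]
X = np.array([encode(p) for p in train_pairs])
y = np.array([1.0 if i == j else 0.0 for (i, j) in train_pairs])

# Logistic regression (no bias term) trained by full-batch gradient descent.
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / len(y)

def predict(pair):
    """Probability that the pair is judged well-formed (identical)."""
    return float(1.0 / (1.0 + np.exp(-encode(pair) @ w)))

print(predict((2, 2)))   # seen identical pair: high
print(predict((2, 3)))   # seen non-identical pair: low
print(predict((5, 5)))   # novel symbol: exactly 0.5, pure chance
print(predict((5, 2)))   # novel symbol: also exactly 0.5 -- the identity rule does not transfer
```

The failure is structural rather than a matter of training budget: every weight touching symbol 5 receives zero gradient, so under this encoding no amount of training data over the seen symbols constrains the model's behavior on the novel one.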

Discrete symbolic optimization and Boltzmann sampling by continuous neural dynamics: Gradient Symbolic Computation

no code implementations • 4 Jan 2018 • Paul Tupper, Paul Smolensky, Pyeong Whan Cho

Gradient Symbolic Computation is proposed as a means of solving discrete global optimization problems using a neurally plausible continuous stochastic dynamical system.
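A hedged sketch of the general idea, not the paper's actual GSC model: a tiny discrete optimization problem is solved by continuous stochastic dynamics. The energy below combines a linear cost q·x with a quartic penalty x²(1−x)² that pushes each coordinate toward the discrete values {0, 1}; annealed Langevin dynamics samples from a Boltzmann-like distribution over the relaxed states, then a noise-free descent settles into a minimum. The specific energy, costs, and schedule are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

q = np.array([0.5, -0.5, 0.8])   # linear costs; the discrete optimum is x = (0, 1, 0)

def grad_energy(x):
    """Gradient of E(x) = sum_i x_i^2 (1 - x_i)^2 + q . x."""
    return 2 * x * (1 - x) * (1 - 2 * x) + q

lr = 0.05
x = np.full(3, 0.5)              # start at the maximally ambiguous point

# Phase 1: Langevin dynamics with an annealed temperature T, which
# (approximately) samples a Boltzmann distribution exp(-E(x)/T).
T = 0.5
for _ in range(1000):
    noise = rng.standard_normal(3) * np.sqrt(2 * lr * T)
    x = np.clip(x - lr * grad_energy(x) + noise, -0.5, 1.5)
    T *= 0.99                    # cool toward the zero-temperature limit

# Phase 2: noise-free gradient descent to settle into the nearest minimum.
for _ in range(500):
    x = np.clip(x - lr * grad_energy(x), -0.5, 1.5)

solution = np.round(np.clip(x, 0, 1))
print(solution)                  # continuous dynamics recovers the discrete optimum (0, 1, 0)
```

Cooling the temperature shifts the stationary distribution's mass onto the lowest-energy states, which is how a continuous, neurally plausible process can end up committing to a discrete symbolic answer.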

Stability and Fluctuations in a Simple Model of Phonetic Category Change

no code implementations • 20 Apr 2017 • Benjamin Goodman, Paul Tupper

In spoken languages, speakers divide up the space of phonetic possibilities into different regions, corresponding to different phonemes.

Which Learning Algorithms Can Generalize Identity-Based Rules to Novel Inputs?

no code implementations • 12 May 2016 • Paul Tupper, Bobak Shahriari

We propose a novel framework for the analysis of learning algorithms that allows us to say when such algorithms can and cannot generalize certain patterns from training data to test data.
