no code implementations • 9 Feb 2022 • Jacob Russin, Maryam Zolfaghar, Seongmin A. Park, Erie Boorman, Randall C. O'Reilly
Here, we build on previous work and show that neural networks equipped with a mechanism for cognitive control do not exhibit catastrophic forgetting when trials are blocked.
1 code implementation • 18 Nov 2021 • Kevin L. McKee, Ian C. Crandell, Rishidev Chaudhuri, Randall C. O'Reilly
The random failure of presynaptic vesicles to release neurotransmitters may allow the brain to sample from posterior distributions of network parameters, interpreted as epistemic uncertainty.
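The idea above parallels dropout-based Bayesian approximation in artificial networks: stochastic synaptic failure yields repeated samples of effective parameters, and the spread of the resulting outputs serves as an epistemic-uncertainty proxy. The sketch below is a toy illustration of that analogy, not the paper's model; the linear network, the `release_prob` value, and the mean-preserving rescaling are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "network": one weight matrix from 4 inputs to 2 outputs.
W = rng.normal(size=(4, 2))
x = rng.normal(size=4)

def forward(W, x, release_prob=0.5, rng=rng):
    # Each synapse independently "releases" with probability release_prob,
    # loosely analogous to stochastic presynaptic vesicle release.
    mask = rng.random(W.shape) < release_prob
    # Divide by release_prob so the expected output matches the
    # deterministic network x @ W.
    return x @ (W * mask) / release_prob

# Repeated stochastic forward passes behave like samples from a
# distribution over effective network parameters.
samples = np.stack([forward(W, x) for _ in range(1000)])
mean = samples.mean(axis=0)       # predictive mean
epistemic = samples.std(axis=0)   # spread across samples ~ uncertainty proxy
```

Because each mask is an unbiased, rescaled sample of the weights, `mean` approximates the deterministic output `x @ W`, while `epistemic` grows with how strongly each output depends on individual (unreliable) synapses.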
1 code implementation • 13 Aug 2021 • John Rohrlich, Randall C. O'Reilly
Infants, adults, non-human primates, and non-primates all learn patterns implicitly, and they do so across modalities.
no code implementations • 7 Aug 2021 • Randall C. O'Reilly, Charan Ranganath, Jacob L. Russin
A hallmark of human intelligence is the ability to adapt to new situations by applying learned rules to new content (systematicity), thereby enabling an open-ended number of inferences and actions (generativity).
1 code implementation • 19 May 2021 • Jacob Russin, Maryam Zolfaghar, Seongmin A. Park, Erie Boorman, Randall C. O'Reilly
The neural mechanisms supporting flexible relational inferences, especially in novel situations, are a major focus of current research.
1 code implementation • ICML 2020 • Taylor W. Webb, Zachary Dulberg, Steven M. Frankland, Alexander A. Petrov, Randall C. O'Reilly, Jonathan D. Cohen
Extrapolation -- the ability to make inferences that go beyond the scope of one's experiences -- is a hallmark of human intelligence.
2 code implementations • 26 Jun 2020 • Randall C. O'Reilly, Jacob L. Russin, Maryam Zolfaghar, John Rohrlich
How do humans learn from raw sensory experience?
1 code implementation • 22 Apr 2019 • Jake Russin, Jason Jo, Randall C. O'Reilly, Yoshua Bengio
Standard methods in deep learning for natural language processing fail to capture the compositional structure of human language that allows for systematic generalization outside of the training distribution.