Search Results for author: Taylor Webb

Found 7 papers, 6 papers with code

Evidence from counterfactual tasks supports emergent analogical reasoning in large language models

no code implementations · 14 Apr 2024 · Taylor Webb, Keith J. Holyoak, Hongjing Lu

We recently reported evidence that large language models are capable of solving a wide range of text-based analogy problems in a zero-shot manner, indicating the presence of an emergent capacity for analogical reasoning.

counterfactual

A Prefrontal Cortex-inspired Architecture for Planning in Large Language Models

2 code implementations · 30 Sep 2023 · Taylor Webb, Shanka Subhra Mondal, Chi Wang, Brian Krabach, Ida Momennejad

To address this, we take inspiration from the human brain, in which planning is accomplished via the recurrent interaction of specialized modules in the prefrontal cortex (PFC).

In-Context Learning

Determinantal Point Process Attention Over Grid Cell Code Supports Out of Distribution Generalization

1 code implementation · 28 May 2023 · Shanka Subhra Mondal, Steven Frankland, Taylor Webb, Jonathan D. Cohen

Deep neural networks have made tremendous gains in emulating human-like intelligence, and have been used increasingly as ways of understanding how the brain may solve the complex computational problems on which this relies.

Out-of-Distribution Generalization

Abstractors and relational cross-attention: An inductive bias for explicit relational reasoning in Transformers

1 code implementation · 1 Apr 2023 · Awni Altabaa, Taylor Webb, Jonathan Cohen, John Lafferty

An extension of Transformers is proposed that enables explicit relational reasoning through a novel module called the Abstractor.

Inductive Bias · Relational Reasoning

Learning to reason over visual objects

1 code implementation · 3 Mar 2023 · Shanka Subhra Mondal, Taylor Webb, Jonathan D. Cohen

These results suggest that an inductive bias for object-centric processing may be a key component of abstract visual reasoning, obviating the need for problem-specific inductive biases.

Inductive Bias · Visual Reasoning

Modelling the development of counting with memory-augmented neural networks

1 code implementation · 21 May 2021 · Zack Dulberg, Taylor Webb, Jonathan Cohen

Learning to count is an important example of the broader human capacity for systematic generalization, and the development of counting is often characterized by an inflection point when children rapidly acquire proficiency with the procedures that support this ability.

Systematic Generalization