no code implementations • 14 Apr 2024 • Taylor Webb, Keith J. Holyoak, Hongjing Lu
We recently reported evidence that large language models are capable of solving a wide range of text-based analogy problems in a zero-shot manner, indicating the presence of an emergent capacity for analogical reasoning.
2 code implementations • 30 Sep 2023 • Taylor Webb, Shanka Subhra Mondal, Chi Wang, Brian Krabach, Ida Momennejad
To address this, we take inspiration from the human brain, in which planning is accomplished via the recurrent interaction of specialized modules in the prefrontal cortex (PFC).
1 code implementation • 28 May 2023 • Shanka Subhra Mondal, Steven Frankland, Taylor Webb, Jonathan D. Cohen
Deep neural networks have made tremendous gains in emulating human-like intelligence, and are increasingly used to understand how the brain may solve the complex computational problems on which such intelligence relies.
1 code implementation • 1 Apr 2023 • Awni Altabaa, Taylor Webb, Jonathan Cohen, John Lafferty
An extension of Transformers is proposed that enables explicit relational reasoning through a novel module called the Abstractor.
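The core idea behind the Abstractor is a form of relational cross-attention: attention scores are computed from the inputs (queries and keys), but the values are learned "symbol" vectors rather than input-derived features, so the output encodes relations between objects decoupled from the objects' own features. A minimal NumPy sketch of that mechanism, with illustrative names and dimensions that are assumptions rather than the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def relational_cross_attention(x, w_q, w_k, symbols):
    """Single-head relational cross-attention (illustrative sketch).

    x:       (n_objects, d)   input object embeddings
    w_q/w_k: (d, d)           learned query/key projections
    symbols: (n_objects, d_s) learned symbol vectors, independent of x
    """
    q = x @ w_q
    k = x @ w_k
    # Attention scores encode pairwise relations between objects.
    scores = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    # Values are learned symbols, not object features, so the output
    # carries relational structure rather than object identity.
    return scores @ symbols

n, d, d_s = 4, 8, 6
x = rng.normal(size=(n, d))
w_q = rng.normal(size=(d, d))
w_k = rng.normal(size=(d, d))
symbols = rng.normal(size=(n, d_s))
out = relational_cross_attention(x, w_q, w_k, symbols)
print(out.shape)
```

The contrast with standard self-attention is that values there are projections of the inputs, so object features and relations remain entangled; substituting learned symbols is what makes the relational representation explicit.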
1 code implementation • 3 Mar 2023 • Shanka Subhra Mondal, Taylor Webb, Jonathan D. Cohen
These results suggest that an inductive bias for object-centric processing may be a key component of abstract visual reasoning, obviating the need for problem-specific inductive biases.
2 code implementations • 19 Dec 2022 • Taylor Webb, Keith J. Holyoak, Hongjing Lu
In human cognition, this capacity is closely tied to an ability to reason by analogy.
1 code implementation • 21 May 2021 • Zack Dulberg, Taylor Webb, Jonathan Cohen
Learning to count is an important example of the broader human capacity for systematic generalization, and the development of counting is often characterized by an inflection point when children rapidly acquire proficiency with the procedures that support this ability.