1 code implementation • 26 May 2024 • Awni Altabaa, John Lafferty
In this paper, we present an extension of Transformers where multi-head attention is augmented with two distinct types of attention heads, each routing information of a different type.
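As a minimal sketch of the idea (illustrative, not the paper's exact formulation): one way to obtain two head types is to run a group of standard self-attention heads, whose values come from the input, alongside a second group whose values are learned symbol vectors, and merge the two streams. The class name, the learned-symbol values, the assumed maximum sequence length, and the concatenation-based merge are all assumptions of this sketch.

```python
import torch
import torch.nn as nn

class DualAttentionSketch(nn.Module):
    """Hypothetical layer: a 'sensory' head group (values from the input)
    concatenated with a 'relational' head group (values are learned symbols)."""

    def __init__(self, d_model: int, n_sensory_heads: int, n_relational_heads: int,
                 max_len: int = 512):
        super().__init__()
        self.sensory = nn.MultiheadAttention(d_model, n_sensory_heads, batch_first=True)
        self.relational = nn.MultiheadAttention(d_model, n_relational_heads, batch_first=True)
        # Learned symbols stand in for input-derived values in the relational heads (assumption).
        self.symbols = nn.Parameter(0.02 * torch.randn(1, max_len, d_model))
        self.out = nn.Linear(2 * d_model, d_model)

    def forward(self, x):  # x: (batch, seq, d_model), with seq <= max_len
        h_sens, _ = self.sensory(x, x, x)                      # routes sensory information
        sym = self.symbols[:, : x.size(1)].expand(x.size(0), -1, -1)
        h_rel, _ = self.relational(x, x, sym)                  # routes relational information
        return self.out(torch.cat([h_sens, h_rel], dim=-1))    # merge the two streams
```

For example, `DualAttentionSketch(d_model=64, n_sensory_heads=4, n_relational_heads=4)` could replace the multi-head attention block of an otherwise standard Transformer layer.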
no code implementations • 1 Mar 2024 • Awni Altabaa, Zhuoran Yang
In a sequential decision-making problem, the information structure describes how events in the system occurring at different points in time affect one another.
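Read concretely (a hedged formalization, not necessarily the paper's notation), an information structure can be written as a directed graph over the system's variables, with an edge wherever one event can influence another:

```latex
% Hedged formalization; notation is illustrative, not taken from the paper.
\[
  \mathcal{I} = (\mathcal{V}, \mathcal{E}), \qquad
  (v \to v') \in \mathcal{E} \iff \text{event } v \text{ can affect event } v'.
\]
```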
no code implementations • 13 Feb 2024 • Awni Altabaa, John Lafferty
Inner products of neural network feature maps arise in a wide variety of machine learning frameworks as a method of modeling relations between inputs.
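For concreteness, here is a minimal sketch of this modeling pattern; the two-MLP asymmetric form and the dimensions are illustrative choices rather than the paper's specific construction.

```python
import torch
import torch.nn as nn

class InnerProductRelation(nn.Module):
    """Sketch: models a relation r(x, y) ~ <phi(x), psi(y)> via learned feature maps."""

    def __init__(self, d_in: int, d_feat: int):
        super().__init__()
        # Illustrative feature maps; a symmetric relation would reuse phi for both arguments.
        self.phi = nn.Sequential(nn.Linear(d_in, d_feat), nn.ReLU(), nn.Linear(d_feat, d_feat))
        self.psi = nn.Sequential(nn.Linear(d_in, d_feat), nn.ReLU(), nn.Linear(d_feat, d_feat))

    def forward(self, x, y):
        # One scalar relation score per (x, y) pair.
        return (self.phi(x) * self.psi(y)).sum(dim=-1)
```

Transformer attention scores fit the same pattern, with the query and key projections playing the roles of the two feature maps.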
2 code implementations • 5 Oct 2023 • Awni Altabaa, John Lafferty
A maturing area of research in deep learning is the study of architectures and inductive biases for learning representations of relational features.
no code implementations • 12 Sep 2023 • Taylor W. Webb, Steven M. Frankland, Awni Altabaa, Simon Segert, Kamesh Krishnamurthy, Declan Campbell, Jacob Russin, Tyler Giallanza, Zack Dulberg, Randall O'Reilly, John Lafferty, Jonathan D. Cohen
A central challenge for cognitive science is to explain how abstract concepts are acquired from limited experience.
1 code implementation • 1 Apr 2023 • Awni Altabaa, Taylor Webb, Jonathan Cohen, John Lafferty
An extension of Transformers is proposed that enables explicit relational reasoning through a novel module called the Abstractor.
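One hedged reading of the mechanism (notation illustrative; the details are in the paper): attention scores are computed between input elements, while the values are learned symbol vectors rather than projections of the input.

```latex
% Hedged sketch of relational cross-attention; notation is illustrative.
\[
  \mathrm{RelCrossAttn}(X)
  = \mathrm{softmax}\!\left(\frac{\phi_q(X)\,\phi_k(X)^{\top}}{\sqrt{d}}\right) S,
\]
```

where $\phi_q, \phi_k$ are query and key maps over the input $X$, and the rows of $S$ are learned symbols.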
1 code implementation • 16 Mar 2023 • Awni Altabaa, Bora Yongacoglu, Serdar Yüksel
Stochastic games are a popular framework for studying multi-agent reinforcement learning (MARL).
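For reference, the standard definition of a discounted stochastic game, in generic notation rather than the paper's:

```latex
% Standard definition of a finite discounted stochastic game; notation is generic.
\[
  \mathcal{G} = \bigl(\mathcal{N}, \mathcal{S}, \{\mathcal{A}^i\}_{i \in \mathcal{N}},
  P, \{r^i\}_{i \in \mathcal{N}}, \gamma\bigr),
\]
```

where $\mathcal{N}$ is the set of players, $\mathcal{S}$ the state space, $\mathcal{A}^i$ player $i$'s action set, $P(s' \mid s, a)$ the transition kernel under the joint action $a = (a^1, \dots, a^{|\mathcal{N}|})$, $r^i(s, a)$ player $i$'s reward, and $\gamma \in [0, 1)$ the discount factor.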