no code implementations • 13 Dec 2024 • Andy Yang, Lena Strobl, David Chiang, Dana Angluin
Second, we demonstrate how temperature scaling allows softmax transformers to simulate a large subclass of average-hard attention transformers, namely those with what we call the uniform-tieless property.
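As a rough numerical illustration (not the paper's construction), the sketch below shows how lowering the softmax temperature drives attention weights toward average-hard attention, which spreads weight uniformly over the tied maximum-scoring positions; the function names and score values are hypothetical.

```python
import numpy as np

def softmax_attention(scores, temperature=1.0):
    """Softmax attention weights with a temperature parameter."""
    z = scores / temperature
    z = z - z.max()          # subtract the max for numerical stability
    w = np.exp(z)
    return w / w.sum()

def average_hard_attention(scores):
    """Average-hard attention: uniform weight over the maximum-scoring positions."""
    mask = scores == scores.max()
    return mask / mask.sum()

scores = np.array([2.0, 5.0, 5.0, 1.0])   # two tied maxima
for t in [1.0, 0.1, 0.01]:
    print(t, softmax_attention(scores, t))  # approaches [0, 0.5, 0.5, 0]
print("hard", average_hard_attention(scores))
```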
1 code implementation • 3 Oct 2024 • Xinting Huang, Andy Yang, Satwik Bhattamishra, Yash Sarrof, Andreas Krebs, Hattie Zhou, Preetum Nakkiran, Michael Hahn
A major challenge for transformers is generalizing to sequences longer than those observed during training.
no code implementations • 5 Apr 2024 • Andy Yang, David Chiang
Deriving formal bounds on the expressivity of transformers and studying transformers constructed to implement known algorithms are both effective methods for better understanding their computational power.
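For a flavor of the second approach, here is a minimal hand-built attention head, an illustrative construction rather than one from the paper, whose uniform attention weights implement a known algorithm: counting occurrences of a target symbol, returned as a fraction of the sequence length.

```python
import numpy as np

def counting_head(tokens, target="a"):
    """A single attention head hard-wired to count a target symbol.

    With uniform attention over all positions and a value of 1 for the
    target symbol (0 otherwise), every output equals count(target) / n.
    """
    values = np.array([1.0 if t == target else 0.0 for t in tokens])
    n = len(tokens)
    attn = np.full((n, n), 1.0 / n)   # uniform attention weights
    return attn @ values              # each entry = count(target) / n

print(counting_head(list("abaab")))   # [0.6 0.6 0.6 0.6 0.6], since 3 of 5 positions are 'a'
```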
no code implementations • 21 Oct 2023 • Andy Yang, David Chiang, Dana Angluin
The expressive power of transformers over inputs of unbounded size can be studied through their ability to recognize classes of formal languages.
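Concretely, recognition here means mapping strings over a finite alphabet to accept or reject. The hypothetical sketch below fixes a reference star-free language (strings over {a, b} with no factor "aa") and exhaustively compares a candidate recognizer, such as a trained or hand-built transformer wrapped as a boolean function, against it up to a bounded length.

```python
from itertools import product

def in_language(w):
    """Reference membership test: no two consecutive a's (a star-free language)."""
    return "aa" not in w

def exhaustive_check(model, alphabet="ab", max_len=8):
    """Return the first string where the model disagrees with the reference, else None."""
    for n in range(max_len + 1):
        for w in map("".join, product(alphabet, repeat=n)):
            if model(w) != in_language(w):
                return w       # counterexample string
    return None                # agreement on all strings up to max_len

print(exhaustive_check(in_language))   # None: the reference trivially agrees with itself
```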