1 code implementation • 23 Jul 2024 • Noah Amsel, Gilad Yehudai, Joan Bruna
Attention-based mechanisms are widely used in machine learning, most prominently in transformers.
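For reference, a minimal sketch of scaled dot-product attention, the core of the mechanisms this paper studies. This is an illustrative NumPy toy, not the paper's construction:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scaled dot-product attention: softmax(Q K^T / sqrt(d)) V
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 queries of dimension 8
K = rng.normal(size=(6, 8))   # 6 keys
V = rng.normal(size=(6, 8))   # 6 values
print(attention(Q, K, V).shape)  # (4, 8): one output per query
```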
1 code implementation • 26 Feb 2021 • Yariv Aizenbud, Ariel Jaffe, Meng Wang, Amber Hu, Noah Amsel, Boaz Nadler, Joseph T. Chang, Yuval Kluger
For large trees, a common approach, termed divide-and-conquer, is to recover the tree structure in two steps: first, separately recover the structure of multiple (possibly random) subsets of the terminal nodes; second, merge the resulting subtrees into a single full tree.
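A toy sketch of that two-step pipeline (assumptions throughout: average-linkage clustering stands in for the actual subtree-reconstruction method, and the merge step, which is the crux that dedicated methods address, is only indicated):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
n = 10
points = rng.normal(size=(n, 3))
# pairwise distances between the n terminal nodes
dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)

# Step 1: recover a subtree on each (overlapping) subset of leaves.
subsets = [list(range(0, 6)), list(range(4, 10))]   # share leaves 4 and 5
subtrees = []
for s in subsets:
    sub = dist[np.ix_(s, s)]
    # average linkage stands in for any distance-based tree method
    subtrees.append((s, linkage(squareform(sub), method="average")))

# Step 2: merge the subtrees into one tree over all n leaves, using the
# leaves the subsets share to align them. A faithful merge is the hard
# part and is not implemented in this sketch.
for leaves, tree in subtrees:
    print(leaves, tree.shape)   # each subtree encodes len(leaves)-1 merges
```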
3 code implementations • 28 Feb 2020 • Ariel Jaffe, Noah Amsel, Yariv Aizenbud, Boaz Nadler, Joseph T. Chang, Yuval Kluger
A common assumption in multiple scientific applications is that the distribution of observed data can be modeled by a latent tree graphical model.
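A minimal example of such a model (an assumed toy setup: binary states evolve down a fixed tree, flipping on each edge with probability eps, and only the leaves are observed):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.1  # per-edge flip probability (illustrative)

# tree as parent pointers: node 0 is the root; 1, 2 internal; 3-6 leaves
parent = {1: 0, 2: 0, 3: 1, 4: 1, 5: 2, 6: 2}
leaves = [3, 4, 5, 6]

def sample():
    state = {0: int(rng.integers(2))}        # uniform root state
    for child, par in parent.items():
        flip = rng.random() < eps            # Markov transition on the edge
        state[child] = state[par] ^ int(flip)
    return {v: state[v] for v in leaves}     # only leaf states are observed

print(sample())  # e.g. {3: 0, 4: 0, 5: 1, 6: 1}
```

The latent-tree assumption is exactly that the joint distribution of the observed leaf variables factorizes according to such a hidden tree.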
1 code implementation • WS 2019 • William Merrill, Lenny Khazan, Noah Amsel, Yiding Hao, Simon Mendelsohn, Robert Frank
Neural network architectures have been augmented with differentiable stacks in order to introduce a bias toward learning hierarchy-sensitive regularities.
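A toy illustration of the idea, with assumed semantics loosely following superposition-style neural stacks rather than the specific architectures studied here: push and pop become continuous strengths in [0, 1], so stack contents are weighted mixtures and the whole operation is differentiable.

```python
import numpy as np

class SoftStack:
    def __init__(self, dim):
        self.values = np.zeros((0, dim))    # stack cells, top at index 0
        self.strengths = np.zeros(0)        # fractional presence per cell

    def step(self, push, pop, vec):
        # pop: remove `pop` units of strength from the top downward
        s = self.strengths.copy()
        remaining = pop
        for i in range(len(s)):
            take = min(s[i], remaining)
            s[i] -= take
            remaining -= take
        # push: place a new cell on top with strength `push`
        self.values = np.vstack([vec[None, :], self.values])
        self.strengths = np.concatenate([[push], s])

    def read(self):
        # read the top unit of strength as a convex combination of cells
        out, budget = np.zeros(self.values.shape[1]), 1.0
        for v, s in zip(self.values, self.strengths):
            w = min(s, budget)
            out += w * v
            budget -= w
        return out

stack = SoftStack(dim=3)
stack.step(push=1.0, pop=0.0, vec=np.array([1.0, 0.0, 0.0]))
stack.step(push=0.5, pop=0.3, vec=np.array([0.0, 1.0, 0.0]))
print(stack.read())  # blend of the two pushed vectors: [0.5, 0.5, 0.0]
```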
2 code implementations • WS 2018 • Yiding Hao, William Merrill, Dana Angluin, Robert Frank, Noah Amsel, Andrew Benz, Simon Mendelsohn
This paper analyzes the behavior of stack-augmented recurrent neural network (RNN) models.