no code implementations • 28 Mar 2022 • Santiago Ontanon, Joshua Ainslie, Vaclav Cvicek, Zachary Fisher
Machine learning models such as Transformers and LSTMs struggle with tasks that are compositional in nature, such as those involving reasoning or inference.
1 code implementation • ACL 2022 • Santiago Ontañón, Joshua Ainslie, Vaclav Cvicek, Zachary Fisher
Several studies have reported the inability of Transformer models to generalize compositionally, a key type of generalization in many NLP tasks such as semantic parsing.
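As a rough illustration of the kind of compositional split these studies probe (a hypothetical SCAN-style command-to-action task, not the authors' exact data or models), the test set can hold out combinations of primitives and modifiers that never co-occur in training:

```python
# Hypothetical SCAN-style compositional split: commands combine a primitive
# with an optional modifier; the test set holds out combinations that never
# appear in training, so solving it requires compositional generalization.
from itertools import product

PRIMITIVES = {"jump": "JUMP", "walk": "WALK", "run": "RUN"}
MODIFIERS = {"twice": 2, "thrice": 3}

def interpret(command):
    """Map a command like 'jump twice' to its ground-truth action sequence."""
    tokens = command.split()
    actions = [PRIMITIVES[tokens[0]]]
    if len(tokens) > 1:
        actions = actions * MODIFIERS[tokens[1]]
    return actions

# All primitive/modifier combinations.
all_commands = [f"{p} {m}" for p, m in product(PRIMITIVES, MODIFIERS)]

# Compositional split: 'jump' is seen in training only in isolation, so a
# model must generalize at test time to novel combinations like 'jump twice'.
train = ["jump"] + [c for c in all_commands if not c.startswith("jump")]
test = [c for c in all_commands if c.startswith("jump")]
```

A model that memorizes surface patterns can fit `train` perfectly yet still fail on `test`, which is the failure mode these studies report for standard Transformers.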
2 code implementations • EMNLP 2020 • Joshua Ainslie, Santiago Ontanon, Chris Alberti, Vaclav Cvicek, Zachary Fisher, Philip Pham, Anirudh Ravula, Sumit Sanghai, Qifan Wang, Li Yang
Transformer models have advanced the state of the art in many Natural Language Processing (NLP) tasks.
Ranked #3 on the ConditionalQA question-answering benchmark