no code implementations • 14 Jan 2022 • Alon Talmor, Ori Yoran, Ronan Le Bras, Chandra Bhagavatula, Yoav Goldberg, Yejin Choi, Jonathan Berant
Constructing benchmarks that test the abilities of modern natural language understanding models is difficult: pre-trained language models exploit artifacts in benchmarks to achieve human parity, yet still fail on adversarial examples and make errors that demonstrate a lack of common sense.
1 code implementation • ACL 2022 • Ori Yoran, Alon Talmor, Jonathan Berant
Models pre-trained with a language modeling objective possess ample world knowledge and language skills, but are known to struggle in tasks that require reasoning.
no code implementations • ICLR 2021 • Alon Talmor, Ori Yoran, Amnon Catav, Dan Lahav, Yizhong Wang, Akari Asai, Gabriel Ilharco, Hannaneh Hajishirzi, Jonathan Berant
When answering complex questions, people can seamlessly combine information from visual, textual and tabular sources.
1 code implementation • NeurIPS 2020 • Alon Talmor, Oyvind Tafjord, Peter Clark, Yoav Goldberg, Jonathan Berant
In this work, we provide a first demonstration that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements.
2 code implementations • 31 Dec 2019 • Alon Talmor, Yanai Elazar, Yoav Goldberg, Jonathan Berant
A fundamental challenge is to understand whether the performance of an LM on a task should be attributed to the pre-trained representations or to the process of fine-tuning on the task data.
no code implementations • 29 Dec 2019 • Dheeru Dua, Ananth Gottumukkala, Alon Talmor, Sameer Singh, Matt Gardner
Many diverse reading comprehension datasets have recently been introduced to study various phenomena in natural language, ranging from simple paraphrase matching and entity typing to entity tracking and understanding the implications of the context.
no code implementations • WS 2019 • Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, Sewon Min
In this work, we justify a question answering approach to reading comprehension and describe the various kinds of questions one might use to more fully test a system's comprehension of a passage, moving beyond questions that only probe local predicate-argument structures.
no code implementations • WS 2019 • Dheeru Dua, Ananth Gottumukkala, Alon Talmor, Sameer Singh, Matt Gardner
Many diverse reading comprehension datasets have recently been introduced to study various phenomena in natural language, ranging from simple paraphrase matching and entity typing to entity tracking and understanding the implications of the context.
1 code implementation • WS 2019 • Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eunsol Choi, Danqi Chen
We present the results of the Machine Reading for Question Answering (MRQA) 2019 shared task on evaluating the generalization capabilities of reading comprehension systems.
no code implementations • 25 Sep 2019 • Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, Sewon Min
In this opinion piece, we argue that question answering should be considered a format which is sometimes useful for studying particular phenomena, not a phenomenon or task in itself.
1 code implementation • ACL 2019 • Alon Talmor, Jonathan Berant
A large number of reading comprehension (RC) datasets have been created recently, but little analysis has been done on whether they generalize to one another, or on the extent to which existing datasets can be leveraged for improving performance on new ones.
4 code implementations • NAACL 2019 • Alon Talmor, Jonathan Herzig, Nicholas Lourie, Jonathan Berant
To investigate question answering with prior knowledge, we present CommonsenseQA: a challenging new dataset for commonsense question answering.
Ranked #31 on Common Sense Reasoning on CommonsenseQA (using extra training data)
1 code implementation • 25 Jul 2018 • Alon Talmor, Jonathan Berant
Recently, Talmor and Berant (2018) introduced ComplexWebQuestions, a dataset focused on answering complex questions by decomposing them into a sequence of simpler questions and extracting the answer from retrieved web snippets.
2 code implementations • NAACL 2018 • Alon Talmor, Jonathan Berant
In this paper, we present a novel framework for answering broad and complex questions, assuming that simple questions can be answered using a search engine and a reading comprehension model.
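The decomposition idea in this abstract can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not the authors' system: the lookup table plays the role of the search engine plus reading comprehension model, and the two-hop composition is hard-coded rather than learned.

```python
# Toy sketch of answering a complex (two-hop) question by decomposition:
# answer the first simple question, substitute its answer into the second,
# and answer that. The lookup table below is a stand-in for retrieving web
# snippets and extracting an answer span with an RC model.
SNIPPETS = {
    "Where was Marie Curie born?": "Warsaw",
    "What country is Warsaw in?": "Poland",
}

def answer_simple(question: str) -> str:
    """Hypothetical stand-in for search + reading comprehension."""
    return SNIPPETS[question]

def answer_complex(first_q: str, second_q_template: str) -> str:
    """Two-hop composition: feed the first answer into the second question."""
    intermediate = answer_simple(first_q)
    return answer_simple(second_q_template.format(intermediate))

result = answer_complex(
    "Where was Marie Curie born?",
    "What country is {} in?",
)
print(result)  # Poland
```

A real decomposition model must also decide how to split the question and how to recompose partial answers (e.g. for conjunctions or comparisons), which this sketch elides.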
1 code implementation • SEMEVAL 2017 • Alon Talmor, Mor Geva, Jonathan Berant
Semantic parsing shines at analyzing complex natural language that involves composition and computation over multiple pieces of evidence.
Ranked #1 on Question Answering on COMPLEXQUESTIONS