Search Results for author: Jonathan Berant

Found 85 papers, 52 papers with code

COVR: A test-bed for Visually Grounded Compositional Generalization with real images

no code implementations 22 Sep 2021 Ben Bogin, Shivanshu Gupta, Matt Gardner, Jonathan Berant

Due to the automatic generation process, COVR facilitates the creation of compositional splits, where models at test time need to generalize to new concepts and compositions in a zero- or few-shot setting.

Finding needles in a haystack: Sampling Structurally-diverse Training Sets from Synthetic Data for Compositional Generalization

1 code implementation 6 Sep 2021 Inbar Oren, Jonathan Herzig, Jonathan Berant

We evaluate our approach on a new split of the Schema2QA dataset, and show that it leads to dramatic improvements in compositional generalization as well as moderate improvements in the traditional i.i.d. setup.

Semantic Parsing

Break, Perturb, Build: Automatic Perturbation of Reasoning Paths Through Question Decomposition

1 code implementation 29 Jul 2021 Mor Geva, Tomer Wolfson, Jonathan Berant

We evaluate a range of RC models on our evaluation sets, which reveals large performance gaps on generated examples compared to the original data.

Natural Language Understanding Reading Comprehension

Turning Tables: Generating Examples from Semi-structured Tables for Endowing Language Models with Reasoning Skills

1 code implementation 15 Jul 2021 Ori Yoran, Alon Talmor, Jonathan Berant

Models pre-trained with a language modeling objective possess ample world knowledge and language skills, but are known to struggle in tasks that require reasoning.

Language Modelling Reading Comprehension

Memory-efficient Transformers via Top-$k$ Attention

1 code implementation 13 Jun 2021 Ankit Gupta, Guy Dar, Shaya Goodman, David Ciprut, Jonathan Berant

Following the success of dot-product attention in Transformers, numerous approximations have been recently proposed to address its quadratic complexity with respect to the input length.
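The snippet above only names the quadratic-complexity problem; as a rough illustration of the general top-$k$ attention idea named in the title (a minimal dense NumPy sketch, not the authors' memory-efficient implementation), each query keeps only its k highest-scoring keys before the softmax:

```python
import numpy as np

def topk_attention(Q, K, V, k):
    """Toy dense sketch of top-k attention: for each query, keep only its
    k largest query-key scores before the softmax and drop the rest, so
    each output mixes at most k value vectors. Function and argument
    names are illustrative, not from the paper's code."""
    scores = Q @ K.T                                   # (n_q, n_k) dot-product scores
    kth = np.partition(scores, -k, axis=-1)[:, -k:].min(axis=-1, keepdims=True)
    scores = np.where(scores >= kth, scores, -np.inf)  # mask all but the top k per query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over surviving scores
    return weights @ V
```

A dense sketch like this still materializes the full score matrix; the point of the paper is to get the same top-k behavior without that quadratic memory cost.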

Question Decomposition with Dependency Graphs

1 code implementation 17 Apr 2021 Matan Hasson, Jonathan Berant

In this work, we present a QDMR parser that is based on dependency graphs (DGs), where nodes in the graph are words and edges describe logical relations that correspond to the different computation steps.

What's in your Head? Emergent Behaviour in Multi-Task Transformer Models

no code implementations 13 Apr 2021 Mor Geva, Uri Katz, Aviv Ben-Arie, Jonathan Berant

In this work, we examine the behaviour of non-target heads, that is, the output of heads when given input that belongs to a different task than the one they were trained for.

Language Modelling Question Answering

Achieving Model Robustness through Discrete Adversarial Training

1 code implementation 11 Apr 2021 Maor Ivgi, Jonathan Berant

Discrete adversarial attacks are symbolic perturbations to a language input that preserve the output label but lead to a prediction error.

Value-aware Approximate Attention

1 code implementation 17 Mar 2021 Ankit Gupta, Jonathan Berant

Following the success of dot-product attention in Transformers, numerous approximations have been recently proposed to address its quadratic complexity with respect to the input length.

Language Modelling

BERTese: Learning to Speak to BERT

no code implementations EACL 2021 Adi Haviv, Jonathan Berant, Amir Globerson

In this work, we propose a method for automatically rewriting queries into "BERTese", a paraphrase query that is directly optimized towards better knowledge extraction.

Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies

1 code implementation 6 Jan 2021 Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, Jonathan Berant

A key limitation in current datasets for multi-hop reasoning is that the steps required for answering the question are mentioned in the question explicitly.

Question Answering

Few-Shot Question Answering by Pretraining Span Selection

2 code implementations ACL 2021 Ori Ram, Yuval Kirstain, Jonathan Berant, Amir Globerson, Omer Levy

Given a passage with multiple sets of recurring spans, we mask in each set all recurring spans but one, and ask the model to select the correct span in the passage for each masked span.

Question Answering
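The abstract describes a concrete pretraining recipe, so here is a minimal sketch of that recurring-span masking scheme; the helper and field names are hypothetical, and the real method's handling of joint masking differs:

```python
from collections import defaultdict

def recurring_span_examples(passage, span_len=2, mask="[MASK]"):
    """Toy sketch of recurring-span masking (not the authors' code):
    for each word n-gram that recurs in the passage, keep one occurrence
    and mask the others; the model must then select the surviving span
    in the passage as the answer for the masked positions."""
    words = passage.split()
    occurrences = defaultdict(list)
    for i in range(len(words) - span_len + 1):
        occurrences[tuple(words[i:i + span_len])].append(i)
    examples = []
    for span, starts in occurrences.items():
        if len(starts) < 2:
            continue                      # only recurring spans are used
        masked = list(words)
        answer_start = starts[0]          # the one occurrence left intact
        for s in starts[1:]:              # mask every other occurrence
            masked[s:s + span_len] = [mask] * span_len
        examples.append({"input": " ".join(masked),
                         "answer": " ".join(span),
                         "answer_start": answer_start})
    return examples
```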

Transformer Feed-Forward Layers Are Key-Value Memories

1 code implementation 29 Dec 2020 Mor Geva, Roei Schuster, Jonathan Berant, Omer Levy

Feed-forward layers constitute two-thirds of a transformer model's parameters, yet their role in the network remains under-explored.
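A minimal sketch of the key-value reading suggested by the title, with assumed shapes and ReLU standing in for the usual GeLU (illustrative, not the authors' code):

```python
import numpy as np

def ffn_as_memory(x, W1, W2):
    """Reading a transformer feed-forward layer as a key-value memory:
    rows of W1 act as keys matched against the input, the nonlinearity
    gates how strongly each key fires, and rows of W2 are the values
    mixed into the output. Assumed shapes: x (d,), W1 (m, d), W2 (m, d)
    for m key-value memory slots."""
    key_scores = W1 @ x                  # match the input against m keys
    activations = np.maximum(key_scores, 0)
    return activations @ W2              # weighted sum of m value vectors
```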

SmBoP: Semi-autoregressive Bottom-up Semantic Parsing

1 code implementation NAACL 2021 Ohad Rubin, Jonathan Berant

We apply SmBoP on Spider, a challenging zero-shot semantic parsing benchmark, and show that SmBoP leads to a 2.2x speed-up in decoding time and a ~5x speed-up in training time, compared to a semantic parser that uses autoregressive decoding.

Semantic Parsing

Evaluating NLP Models via Contrast Sets

no code implementations 1 Oct 2020 Matt Gardner, Yoav Artzi, Victoria Basmova, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hanna Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, A. Zhang, Ben Zhou

Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities.

Reading Comprehension Sentiment Analysis

Learning Object Detection from Captions via Textual Scene Attributes

no code implementations 30 Sep 2020 Achiya Jerbi, Roei Herzig, Jonathan Berant, Gal Chechik, Amir Globerson

In this work, we argue that captions contain much richer information about the image, including attributes of objects and their relations.

Image Captioning Object Detection

Scene Graph to Image Generation with Contextualized Object Layout Refinement

no code implementations 23 Sep 2020 Maor Ivgi, Yaniv Benny, Avichai Ben-David, Jonathan Berant, Lior Wolf

We empirically show on the COCO-STUFF dataset that our approach improves the quality of both the intermediate layout and the final image.

Image Generation

Span-based Semantic Parsing for Compositional Generalization

1 code implementation ACL 2021 Jonathan Herzig, Jonathan Berant

Despite the success of sequence-to-sequence (seq2seq) models in semantic parsing, recent work has shown that they fail in compositional generalization, i.e., the ability to generalize to new structures built of components observed during training.

Semantic Parsing

A Simple Global Neural Discourse Parser

no code implementations 2 Sep 2020 Yichu Zhou, Omri Koshorek, Vivek Srikumar, Jonathan Berant

Discourse parsing is largely dominated by greedy parsers with manually-designed features, while global parsing is rare due to its computational expense.

Discourse Parsing

Latent Compositional Representations Improve Systematic Generalization in Grounded Question Answering

1 code implementation 1 Jul 2020 Ben Bogin, Sanjay Subramanian, Matt Gardner, Jonathan Berant

However, state-of-the-art models in grounded question answering often do not explicitly perform decomposition, leading to difficulties in generalization to out-of-distribution examples.

Question Answering Systematic Generalization

Leap-Of-Thought: Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge

1 code implementation NeurIPS 2020 Alon Talmor, Oyvind Tafjord, Peter Clark, Yoav Goldberg, Jonathan Berant

In this work, we provide a first demonstration that LMs can be trained to reliably perform systematic reasoning combining both implicit, pre-trained knowledge and explicit natural language statements.

GMAT: Global Memory Augmentation for Transformers

1 code implementation 5 Jun 2020 Ankit Gupta, Jonathan Berant

Moreover, global memory can also be used for sequence compression, by representing a long input sequence with the memory representations only.

Language Modelling Reading Comprehension

Obtaining Faithful Interpretations from Compositional Neural Networks

1 code implementation ACL 2020 Sanjay Subramanian, Ben Bogin, Nitish Gupta, Tomer Wolfson, Sameer Singh, Jonathan Berant, Matt Gardner

Neural module networks (NMNs) are a popular approach for modeling compositionality: they achieve high accuracy when applied to problems in language and vision, while reflecting the compositional structure of the problem in the network architecture.

Explaining Question Answering Models through Text Generation

1 code implementation 12 Apr 2020 Veronica Latcinnik, Jonathan Berant

Large pre-trained language models (LMs) have been shown to perform surprisingly well when fine-tuned on tasks that require commonsense and world knowledge.

Question Answering Text Generation

Injecting Numerical Reasoning Skills into Language Models

2 code implementations ACL 2020 Mor Geva, Ankit Gupta, Jonathan Berant

In this work, we show that numerical reasoning is amenable to automatic data generation: one can inject this skill into pre-trained LMs by generating large amounts of data and training in a multi-task setup.

Data Augmentation Language Modelling +1

Evaluating the Evaluation of Diversity in Natural Language Generation

1 code implementation EACL 2021 Guy Tevet, Jonathan Berant

Despite growing interest in natural language generation (NLG) models that produce diverse outputs, there is currently no principled method for evaluating the diversity of an NLG system.

Text Generation

oLMpics -- On what Language Model Pre-training Captures

1 code implementation 31 Dec 2019 Alon Talmor, Yanai Elazar, Yoav Goldberg, Jonathan Berant

A fundamental challenge is to understand whether the performance of an LM on a task should be attributed to the pre-trained representations or to the process of fine-tuning on the task data.

Language Modelling

On Making Reading Comprehension More Comprehensive

no code implementations WS 2019 Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, Sewon Min

In this work, we justify a question answering approach to reading comprehension and describe the various kinds of questions one might use to more fully test a system's comprehension of a passage, moving beyond questions that only probe local predicate-argument structures.

Machine Reading Comprehension Question Answering

A Simple and Effective Model for Answering Multi-span Questions

4 code implementations EMNLP 2020 Elad Segal, Avia Efrat, Mor Shoham, Amir Globerson, Jonathan Berant

Models for reading comprehension (RC) commonly restrict their output space to the set of all single contiguous spans from the input, in order to alleviate the learning problem and avoid the need for a model that generates text explicitly.

Question Answering Reading Comprehension

Question Answering is a Format; When is it Useful?

no code implementations 25 Sep 2019 Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi, Alon Talmor, Sewon Min

In this opinion piece, we argue that question answering should be considered a format which is sometimes useful for studying particular phenomena, not a phenomenon or task in itself.

Machine Translation Question Answering +4

Don't paraphrase, detect! Rapid and Effective Data Collection for Semantic Parsing

1 code implementation IJCNLP 2019 Jonathan Herzig, Jonathan Berant

Assuming access to unlabeled utterances from the true distribution, we combine crowdsourcing with a paraphrase model to detect correct logical forms for the unlabeled utterances.

Semantic Parsing

MultiQA: An Empirical Investigation of Generalization and Transfer in Reading Comprehension

1 code implementation ACL 2019 Alon Talmor, Jonathan Berant

A large number of reading comprehension (RC) datasets have been created recently, but little analysis has been done on whether they generalize to one another, and the extent to which existing datasets can be leveraged for improving performance on new ones.

Reading Comprehension

Grammar-based Neural Text-to-SQL Generation

no code implementations 30 May 2019 Kevin Lin, Ben Bogin, Mark Neumann, Jonathan Berant, Matt Gardner

The sequence-to-sequence paradigm employed by neural text-to-SQL models typically performs token-level decoding and does not consider generating SQL hierarchically from a grammar.

Semantic Parsing Text-To-Sql

Representing Schema Structure with Graph Neural Networks for Text-to-SQL Parsing

1 code implementation ACL 2019 Ben Bogin, Matt Gardner, Jonathan Berant

Research on parsing language to SQL has largely ignored the structure of the database (DB) schema, either because the DB was very simple, or because it was observed at both training and test time.

SQL Parsing Text-To-Sql

White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks

1 code implementation NAACL 2019 Yotam Gil, Yoav Chai, Or Gorodissky, Jonathan Berant

Adversarial examples are important for understanding the behavior of neural models, and can improve their robustness through adversarial training.

DiscoFuse: A Large-Scale Dataset for Discourse-Based Sentence Fusion

1 code implementation NAACL 2019 Mor Geva, Eric Malmi, Idan Szpektor, Jonathan Berant

We author a set of rules for identifying a diverse set of discourse phenomena in raw text, and decomposing the text into two independent sentences.

Sentence Fusion Text Simplification +1

Differentiable Scene Graphs

1 code implementation 26 Feb 2019 Moshiko Raboh, Roei Herzig, Gal Chechik, Jonathan Berant, Amir Globerson

In many domains, it is preferable to train systems jointly in an end-to-end manner, but SGs are not commonly used as intermediate components in visual reasoning systems because, being discrete and sparse, scene-graph representations are non-differentiable and difficult to optimize.

Visual Reasoning

Neural network gradient-based learning of black-box function interfaces

no code implementations ICLR 2019 Alon Jacovi, Guy Hadash, Einat Kermany, Boaz Carmeli, Ofer Lavi, George Kour, Jonathan Berant

We propose a method for end-to-end training of a base neural network that integrates calls to existing black-box functions.

CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge

1 code implementation NAACL 2019 Alon Talmor, Jonathan Herzig, Nicholas Lourie, Jonathan Berant

To investigate question answering with prior knowledge, we present CommonsenseQA: a challenging new dataset for commonsense question answering.

Ranked #11 on Common Sense Reasoning on CommonsenseQA (using extra training data)

Common Sense Reasoning Question Answering

Value-based Search in Execution Space for Mapping Instructions to Programs

1 code implementation NAACL 2019 Dor Muhlgay, Jonathan Herzig, Jonathan Berant

Training models to map natural language instructions to programs, given target world supervision only, requires searching for good programs at training time.

Evaluating Text GANs as Language Models

1 code implementation NAACL 2019 Guy Tevet, Gavriel Habib, Vered Shwartz, Jonathan Berant

Generative Adversarial Networks (GANs) are a promising approach for text generation that, unlike traditional language models (LMs), does not suffer from the problem of "exposure bias".

Text Generation

Emergence of Communication in an Interactive World with Consistent Speakers

1 code implementation 3 Sep 2018 Ben Bogin, Mor Geva, Jonathan Berant

Training agents to communicate with one another given task-based supervision only has attracted considerable attention recently, due to the growing interest in developing models for human-agent interaction.

Explaining Queries over Web Tables to Non-Experts

no code implementations 14 Aug 2018 Jonathan Berant, Daniel Deutch, Amir Globerson, Tova Milo, Tomer Wolfson

Designing a reliable natural language (NL) interface for querying tables has been a longtime goal of researchers in both the data management and natural language processing (NLP) communities.

Translation

Repartitioning of the ComplexWebQuestions Dataset

no code implementations 25 Jul 2018 Alon Talmor, Jonathan Berant

Recently, Talmor and Berant (2018) introduced ComplexWebQuestions, a dataset focused on answering complex questions by decomposing them into a sequence of simpler questions and extracting the answer from retrieved web snippets.

Reading Comprehension

Memory Augmented Policy Optimization for Program Synthesis and Semantic Parsing

4 code implementations NeurIPS 2018 Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc Le, Ni Lao

We present Memory Augmented Policy Optimization (MAPO), a simple and novel way to leverage a memory buffer of promising trajectories to reduce the variance of policy gradient estimates.

Combinatorial Optimization Program Synthesis +2
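As a rough illustration of the buffer-based estimator described above, here is a hedged sketch based on my reading of the abstract, not the authors' code; `buffer`, `sample_outside`, `grad_logp`, and `reward` are all hypothetical stand-ins:

```python
def mapo_gradient(buffer, sample_outside, grad_logp, reward, n_samples=8):
    """Illustrative sketch of a memory-augmented policy gradient: the
    expectation splits into (1) an exact sum over the buffer of promising
    trajectories, each weighted by its own probability under the policy,
    and (2) a Monte-Carlo term over trajectories sampled outside the
    buffer, weighted by the remaining mass 1 - pi_B. Enumerating the
    buffer exactly is what lowers the estimator's variance."""
    pi_b = sum(t.prob for t in buffer)          # buffer's total probability mass
    grad = sum(t.prob * reward(t) * grad_logp(t) for t in buffer)
    for _ in range(n_samples):                  # sample trajectories outside the buffer
        t = sample_outside(buffer)
        grad += (1.0 - pi_b) * reward(t) * grad_logp(t) / n_samples
    return grad
```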

Weakly Supervised Semantic Parsing with Abstract Examples

no code implementations ACL 2018 Omer Goldman, Veronica Latcinnik, Ehud Nave, Amir Globerson, Jonathan Berant

Training semantic parsers from weak supervision (denotations) rather than strong supervision (programs) complicates training in two ways.

Semantic Parsing Visual Reasoning

Learning to Search in Long Documents Using Document Structure

1 code implementation COLING 2018 Mor Geva, Jonathan Berant

Reading comprehension models are based on recurrent neural networks that sequentially process the document tokens.

Information Retrieval Question Answering +1

Decoupling Structure and Lexicon for Zero-Shot Semantic Parsing

1 code implementation EMNLP 2018 Jonathan Herzig, Jonathan Berant

Building a semantic parser quickly in a new domain is a fundamental challenge for conversational interfaces, as current semantic parsers require expensive supervision and lack the ability to generalize to new domains.

Semantic Parsing

Text Segmentation as a Supervised Learning Task

2 code implementations NAACL 2018 Omri Koshorek, Adir Cohen, Noam Mor, Michael Rotman, Jonathan Berant

Text segmentation, the task of dividing a document into contiguous segments based on its semantic structure, is a longstanding challenge in language understanding.

Text Segmentation

Polyglot Semantic Parsing in APIs

2 code implementations NAACL 2018 Kyle Richardson, Jonathan Berant, Jonas Kuhn

Traditional approaches to semantic parsing (SP) work by training individual models for each available parallel dataset of text-meaning pairs.

Semantic Parsing Translation

The Web as a Knowledge-base for Answering Complex Questions

1 code implementation NAACL 2018 Alon Talmor, Jonathan Berant

In this paper, we present a novel framework for answering broad and complex questions, assuming answering simple questions is possible using a search engine and a reading comprehension model.

Reading Comprehension

Contextualized Word Representations for Reading Comprehension

1 code implementation NAACL 2018 Shimi Salant, Jonathan Berant

Reading a document and extracting an answer to a question about its content has attracted substantial attention recently.

Language Modelling Question Answering +1

Weakly-supervised Semantic Parsing with Abstract Examples

1 code implementation 14 Nov 2017 Omer Goldman, Veronica Latcinnik, Udi Naveh, Amir Globerson, Jonathan Berant

Training semantic parsers from weak supervision (denotations) rather than strong supervision (programs) complicates training in two ways.

Semantic Parsing Visual Reasoning

Inducing Regular Grammars Using Recurrent Neural Networks

1 code implementation 28 Oct 2017 Mor Cohen, Avi Caciularu, Idan Rejwan, Jonathan Berant

Grammar induction is the task of learning a grammar from a set of examples.

Evaluating Semantic Parsing against a Simple Web-based Question Answering Model

1 code implementation SEMEVAL 2017 Alon Talmor, Mor Geva, Jonathan Berant

Semantic parsing shines at analyzing complex natural language that involves composition and computation over multiple pieces of evidence.

Question Answering Semantic Parsing

Coarse-to-Fine Question Answering for Long Documents

no code implementations ACL 2017 Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, Jonathan Berant

We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance of state-of-the-art models.

Question Answering Reading Comprehension

Neural Semantic Parsing over Multiple Knowledge-bases

1 code implementation ACL 2017 Jonathan Herzig, Jonathan Berant

A fundamental challenge in developing semantic parsers is the paucity of strong supervision in the form of language utterances annotated with logical form.

Semantic Parsing

Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision (Short Version)

no code implementations 4 Dec 2016 Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, Ni Lao

In this work, we propose the Manager-Programmer-Computer framework, which integrates neural networks with non-differentiable memory to support abstract, scalable and precise operations through a friendly neural computer interface.

Feature Engineering Natural Language Understanding +2

Hierarchical Question Answering for Long Documents

no code implementations 6 Nov 2016 Eunsol Choi, Daniel Hewlett, Alexandre Lacoste, Illia Polosukhin, Jakob Uszkoreit, Jonathan Berant

We present a framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance of state-of-the-art models.

Question Answering Reading Comprehension

Learning Recurrent Span Representations for Extractive Question Answering

2 code implementations 4 Nov 2016 Kenton Lee, Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, Jonathan Berant

In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed length representations of all spans in the evidence document with a recurrent network.

Answer Selection Natural Language Understanding +1

Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision

2 code implementations ACL 2017 Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, Ni Lao

Harnessing the statistical power of neural networks to perform language understanding and symbolic reasoning is difficult, when it requires executing efficient discrete operations against a large knowledge-base.

Feature Engineering Structured Prediction

Imitation Learning of Agenda-based Semantic Parsers

1 code implementation TACL 2015 Jonathan Berant, Percy Liang

Semantic parsers conventionally construct logical forms bottom-up in a fixed order, resulting in the generation of many extraneous partial logical forms.

Imitation Learning Question Answering +1
