Search Results for author: Shay B. Cohen

Found 91 papers, 46 papers with code

Universal Discourse Representation Structure Parsing

no code implementations CL (ACL) 2021 Jiangming Liu, Shay B. Cohen, Mirella Lapata, Johan Bos

We consider the task of cross-lingual semantic parsing in the style of Discourse Representation Theory (DRT), where knowledge from annotated corpora in a resource-rich language is transferred via bitext to guide learning in other languages.

Semantic Parsing

Open-Domain Contextual Link Prediction and its Complementarity with Entailment Graphs

1 code implementation Findings (EMNLP) 2021 Mohammad Javad Hosseini, Shay B. Cohen, Mark Johnson, Mark Steedman

In this paper, we introduce the new task of open-domain contextual link prediction which has access to both the textual context and the KG structure to perform link prediction.

Link Prediction

A Root of a Problem: Optimizing Single-Root Dependency Parsing

1 code implementation EMNLP 2021 Miloš Stanojević, Shay B. Cohen

We describe two approaches to single-root dependency parsing that yield significant speed ups in such parsing.

Dependency Parsing

Think While You Write: Hypothesis Verification Promotes Faithful Knowledge-to-Text Generation

no code implementations16 Nov 2023 Yifu Qiu, Varun Embar, Shay B. Cohen, Benjamin Han

Neural knowledge-to-text generation models often struggle to faithfully generate descriptions for the input facts: they may produce hallucinations that contradict the given facts, or describe facts not present in the input.

Natural Language Inference Text Generation

Can Large Language Models Follow Concept Annotation Guidelines? A Case Study on Scientific and Financial Domains

no code implementations15 Nov 2023 Marcio Fonseca, Shay B. Cohen

Although large language models (LLMs) exhibit remarkable capacity to leverage in-context demonstrations, it is still unclear to what extent they can learn new concepts or facts from ground-truth labels.

counterfactual Sentence Classification

Are Large Language Models Temporally Grounded?

1 code implementation14 Nov 2023 Yifu Qiu, Zheng Zhao, Yftah Ziser, Anna Korhonen, Edoardo M. Ponti, Shay B. Cohen

Instead, we provide LLMs with textual narratives and probe them with respect to their common-sense knowledge of the structure and duration of events, their ability to order events along a timeline, and self-consistency within their temporal model (e.g., temporal relations such as after and before are mutually exclusive for any pair of events).

Common Sense Reasoning Sentence Ordering

A Joint Matrix Factorization Analysis of Multilingual Representations

1 code implementation24 Oct 2023 Zheng Zhao, Yftah Ziser, Bonnie Webber, Shay B. Cohen

Using this tool, we study to what extent and how morphosyntactic features are reflected in the representations learned by multilingual pre-trained models.

Knowledge Base Question Answering for Space Debris Queries

1 code implementation31 May 2023 Paul Darm, Antonio Valerio Miceli-Barone, Shay B. Cohen, Annalisa Riccardi

In this work we present a system, developed for the European Space Agency (ESA), that can answer complex natural language queries, to support engineers in accessing the information contained in a KB that models the orbital space debris environment.

Knowledge Base Question Answering Natural Language Queries

Sentence-Incremental Neural Coreference Resolution

1 code implementation26 May 2023 Matt Grenander, Shay B. Cohen, Mark Steedman

We propose a sentence-incremental neural coreference resolution system which incrementally builds clusters after marking mention boundaries in a shift-reduce method.


The Larger They Are, the Harder They Fail: Language Models do not Recognize Identifier Swaps in Python

1 code implementation24 May 2023 Antonio Valerio Miceli-Barone, Fazl Barez, Ioannis Konstas, Shay B. Cohen

Large Language Models (LLMs) have successfully been applied to code generation tasks, raising the question of how well these models understand programming.

Code Generation

Detecting and Mitigating Hallucinations in Multilingual Summarisation

1 code implementation23 May 2023 Yifu Qiu, Yftah Ziser, Anna Korhonen, Edoardo M. Ponti, Shay B. Cohen

With existing faithfulness metrics focusing on English, even measuring the extent of this phenomenon in cross-lingual settings is hard.

Cross-Lingual Transfer

Causal Explanations for Sequential Decision-Making in Multi-Agent Systems

1 code implementation21 Feb 2023 Balint Gyevnar, Cheng Wang, Christopher G. Lucas, Shay B. Cohen, Stefano V. Albrecht

We present CEMA (Causal Explanations in Multi-Agent systems), a general framework to create causal explanations for an agent's decisions in sequential multi-agent systems.

Autonomous Driving counterfactual +2

BERT is not The Count: Learning to Match Mathematical Statements with Proofs

1 code implementation18 Feb 2023 Weixian Waylon Li, Yftah Ziser, Maximin Coavoux, Shay B. Cohen

While the first decoding method matches a proof to a statement without being aware of other statements or proofs, the second method treats the task as a global matching problem.

Information Retrieval Retrieval

Abstractive Summarization Guided by Latent Hierarchical Document Structure

1 code implementation17 Nov 2022 Yifu Qiu, Shay B. Cohen

Sequential abstractive neural summarizers often do not use the underlying structure in the input article or dependencies between the input sentences.

Abstractive Text Summarization

Understanding Domain Learning in Language Models Through Subpopulation Analysis

1 code implementation22 Oct 2022 Zheng Zhao, Yftah Ziser, Shay B. Cohen

We investigate how different domains are encoded in modern neural network architectures.

Language Modelling

Factorizing Content and Budget Decisions in Abstractive Summarization of Long Documents

1 code implementation25 May 2022 Marcio Fonseca, Yftah Ziser, Shay B. Cohen

We argue that disentangling content selection from the budget used to cover salient content improves the performance and applicability of abstractive summarizers.

Abstractive Text Summarization Disentanglement +2

On the Trade-off between Redundancy and Local Coherence in Summarization

1 code implementation20 May 2022 Ronald Cardenas, Matthias Galle, Shay B. Cohen

Extractive summarization systems are known to produce poorly coherent and, if not accounted for, highly redundant text.

Extractive Summarization Reading Comprehension +1

Gold Doesn't Always Glitter: Spectral Removal of Linear and Nonlinear Guarded Attribute Information

1 code implementation15 Mar 2022 Shun Shao, Yftah Ziser, Shay B. Cohen

We describe a simple and effective method (Spectral Attribute removaL; SAL) to remove private or guarded information from neural representations.
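As a rough illustration of the general idea behind spectral removal (a minimal sketch, not the paper's exact SAL procedure; the synthetic data, dimensions, and single-direction removal below are made-up assumptions), one can project representations away from their most attribute-covariant direction:

```python
import numpy as np

# Illustrative sketch only: find the direction of the representations
# most covariant with a guarded attribute z, and project it out.
rng = np.random.default_rng(0)
n, d = 1000, 8
z = rng.integers(0, 2, size=n).astype(float)   # guarded binary attribute
X = rng.standard_normal((n, d))
X[:, 0] += 3.0 * z                             # leak z into one direction

def spectral_removal(X, z, k=1):
    Xc = X - X.mean(0)
    zc = (z - z.mean()).reshape(-1, 1)
    cov = Xc.T @ zc / len(z)                   # d x 1 cross-covariance
    U, _, _ = np.linalg.svd(cov, full_matrices=False)
    top = U[:, :k]                             # most z-covariant directions
    return Xc - Xc @ top @ top.T               # project them out

X_clean = spectral_removal(X, z)
# after removal, no dimension has linear correlation with z
corr = np.abs([np.corrcoef(X_clean[:, j], z)[0, 1] for j in range(d)])
print(corr.max())  # ~0: sample covariance with z is removed exactly
```

With a one-dimensional attribute, projecting out the normalized cross-covariance direction zeroes the sample covariance between every cleaned dimension and z, which is why the residual correlations vanish up to floating point.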

Co-training an Unsupervised Constituency Parser with Weak Supervision

1 code implementation Findings (ACL) 2022 Nickil Maveli, Shay B. Cohen

We introduce a method for unsupervised parsing that relies on bootstrapping classifiers to identify if a node dominates a specific span in a sentence.

Constituency Grammar Induction Inductive Bias

Text Generation from Discourse Representation Structures

no code implementations NAACL 2021 Jiangming Liu, Shay B. Cohen, Mirella Lapata

We propose neural models to generate text from formal meaning representations based on Discourse Representation Structures (DRSs).

Text Generation

Unsupervised Extractive Summarization by Human Memory Simulation

no code implementations16 Apr 2021 Ronald Cardenas, Matthias Galle, Shay B. Cohen

We introduce a wide range of heuristics that leverage cognitive representations of content units and how these are retained or forgotten in human memory.

Extractive Summarization Unsupervised Extractive Summarization

Narration Generation for Cartoon Videos

no code implementations17 Jan 2021 Nikos Papasarantopoulos, Shay B. Cohen

Research on text generation from multimodal inputs has largely focused on static images, and less on video data.

Text Generation

A Differentiable Relaxation of Graph Segmentation and Alignment for AMR Parsing

no code implementations EMNLP 2021 Chunchuan Lyu, Shay B. Cohen, Ivan Titov

In contrast, we treat both alignment and segmentation as latent variables in our model and induce them as part of end-to-end training.

AMR Parsing Segmentation

Nonparametric Learning of Two-Layer ReLU Residual Units

1 code implementation17 Aug 2020 Zhunxuan Wang, Linyun He, Chunchuan Lyu, Shay B. Cohen

We describe an algorithm that learns two-layer residual units using rectified linear unit (ReLU) activation: suppose the input $\mathbf{x}$ is from a distribution with support space $\mathbb{R}^d$ and the ground-truth generative model is a residual unit of this type, given by $\mathbf{y} = \boldsymbol{B}^\ast\left[\left(\boldsymbol{A}^\ast\mathbf{x}\right)^+ + \mathbf{x}\right]$, where ground-truth network parameters $\boldsymbol{A}^\ast \in \mathbb{R}^{d\times d}$ represent a full-rank matrix with nonnegative entries and $\boldsymbol{B}^\ast \in \mathbb{R}^{m\times d}$ is full-rank with $m \geq d$ and for $\boldsymbol{c} \in \mathbb{R}^d$, $[\boldsymbol{c}^{+}]_i = \max\{0, c_i\}$.

Vocal Bursts Valence Prediction
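The generative model in the abstract can be written out directly. The sketch below just instantiates $\mathbf{y} = \boldsymbol{B}^\ast\left[\left(\boldsymbol{A}^\ast\mathbf{x}\right)^+ + \mathbf{x}\right]$ with made-up dimensions and random matrices; it is illustrative, not the paper's experimental setup or learning algorithm:

```python
import numpy as np

# Forward pass of a two-layer ReLU residual unit:
#   y = B* [ (A* x)^+ + x ],  where (.)^+ is elementwise ReLU.
rng = np.random.default_rng(0)
d, m = 3, 4                                    # toy dimensions, m >= d
A_star = np.abs(rng.standard_normal((d, d)))   # full-rank, nonnegative entries
B_star = rng.standard_normal((m, d))           # full-rank, m >= d

def residual_unit(x):
    relu = np.maximum(A_star @ x, 0.0)         # (A* x)^+
    return B_star @ (relu + x)                 # B*[(A* x)^+ + x]

x = rng.standard_normal(d)
y = residual_unit(x)
print(y.shape)  # (4,)
```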

Dscorer: A Fast Evaluation Metric for Discourse Representation Structure Parsing

no code implementations ACL 2020 Jiangming Liu, Shay B. Cohen, Mirella Lapata

Discourse representation structures (DRSs) are scoped semantic representations for texts of arbitrary length.

Learning Dialog Policies from Weak Demonstrations

no code implementations ACL 2020 Gabriel Gordon-Hall, Philip John Gorinski, Shay B. Cohen

Deep reinforcement learning is a promising approach to training a dialog manager, but current methods struggle with the large state and action spaces of multi-domain dialog systems.

Atari Games Q-Learning +2

Multi-Step Inference for Reasoning Over Paragraphs

no code implementations EMNLP 2020 Jiangming Liu, Matt Gardner, Shay B. Cohen, Mirella Lapata

Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives.

Logical Reasoning

Compositional Languages Emerge in a Neural Iterated Learning Model

1 code implementation ICLR 2020 Yi Ren, Shangmin Guo, Matthieu Labeau, Shay B. Cohen, Simon Kirby

The principle of compositionality, which enables natural language to represent complex concepts via a structured combination of simpler ones, allows us to convey an open-ended set of messages using a limited vocabulary.

Experimenting with Power Divergences for Language Modeling

no code implementations IJCNLP 2019 Matthieu Labeau, Shay B. Cohen

In this paper, we experiment with several families (alpha, beta and gamma) of power divergences, generalized from the KL divergence, for learning language models with an objective different than standard MLE.

Language Modelling

Semantic Role Labeling with Iterative Structure Refinement

1 code implementation IJCNLP 2019 Chunchuan Lyu, Shay B. Cohen, Ivan Titov

Modern state-of-the-art Semantic Role Labeling (SRL) methods rely on expressive sentence encoders (e.g., multi-layer LSTMs) but tend to model only local (if any) interactions between individual argument labeling decisions.

Semantic Role Labeling

What is this Article about? Extreme Summarization with Topic-aware Convolutional Neural Networks

1 code implementation19 Jul 2019 Shashi Narayan, Shay B. Cohen, Mirella Lapata

We introduce 'extreme summarization', a new single-document summarization task which aims at creating a short, one-sentence news summary answering the question "What is the article about?".

Document Summarization Extreme Summarization

Discourse Representation Parsing for Sentences and Documents

no code implementations ACL 2019 Jiangming Liu, Shay B. Cohen, Mirella Lapata

We introduce a novel semantic parsing task based on Discourse Representation Theory (DRT; Kamp and Reyle 1993).

Semantic Parsing

Wide-Coverage Neural A* Parsing for Minimalist Grammars

no code implementations ACL 2019 John Torr, Milos Stanojevic, Mark Steedman, Shay B. Cohen

Minimalist Grammars (Stabler, 1997) are a computationally oriented and rigorous formalisation of many aspects of Chomsky's (1995) Minimalist Program.

Duality of Link Prediction and Entailment Graph Induction

1 code implementation ACL 2019 Mohammad Javad Hosseini, Shay B. Cohen, Mark Johnson, Mark Steedman

The new entailment score outperforms prior state-of-the-art results on a standard entailment dataset, and the new link prediction scores show improvements over the raw link prediction scores.

Link Prediction

Obfuscation for Privacy-preserving Syntactic Parsing

1 code implementation WS 2020 Zhifeng Hu, Serhii Havrylov, Ivan Titov, Shay B. Cohen

We introduce an idea for a privacy-preserving transformation on natural language data, inspired by homomorphic encryption.

Privacy Preserving

Structural Neural Encoders for AMR-to-text Generation

2 code implementations NAACL 2019 Marco Damonte, Shay B. Cohen

AMR-to-text generation is a problem recently introduced to the NLP community, in which the goal is to generate sentences from Abstract Meaning Representation (AMR) graphs.

AMR-to-Text Generation Graph-to-Sequence +1

Unlexicalized Transition-based Discontinuous Constituency Parsing

1 code implementation TACL 2019 Maximin Coavoux, Benoît Crabbé, Shay B. Cohen

Lexicalized parsing models are based on the assumptions that (i) constituents are organized around a lexical head, and (ii) bilexical statistics are crucial to solve ambiguities.

Constituency Parsing

Multilingual Clustering of Streaming News

2 code implementations EMNLP 2018 Sebastião Miranda, Artūrs Znotiņš, Shay B. Cohen, Guntis Barzdins

Clustering news across languages enables efficient media monitoring by aggregating articles from multilingual sources into coherent stories.


Privacy-preserving Neural Representations of Text

1 code implementation EMNLP 2018 Maximin Coavoux, Shashi Narayan, Shay B. Cohen

This article deals with adversarial attacks towards deep learning systems for Natural Language Processing (NLP), in the context of privacy protection.

Privacy Preserving

Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization

3 code implementations EMNLP 2018 Shashi Narayan, Shay B. Cohen, Mirella Lapata

We introduce extreme summarization, a new single-document summarization task which does not favor extractive strategies and calls for an abstractive modeling approach.

Document Summarization Extreme Summarization

Discourse Representation Structure Parsing

1 code implementation ACL 2018 Jiangming Liu, Shay B. Cohen, Mirella Lapata

We introduce an open-domain neural semantic parser which generates formal meaning representations in the style of Discourse Representation Theory (DRT; Kamp and Reyle 1993).

Question Answering Semantic Parsing

Stock Movement Prediction from Tweets and Historical Prices

1 code implementation ACL 2018 Yumo Xu, Shay B. Cohen

Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data.

Ranked #2 on Stock Market Prediction on stocknet (using extra training data)

Feature Engineering Time Series Analysis +1

Abstract Meaning Representation for Paraphrase Detection

no code implementations NAACL 2018 Fuad Issa, Marco Damonte, Shay B. Cohen, Xiaohui Yan, Yi Chang

Abstract Meaning Representation (AMR) parsing aims at abstracting away from the syntactic realization of a sentence, and denoting only its meaning in a canonical form.

AMR Parsing

Ranking Sentences for Extractive Summarization with Reinforcement Learning

1 code implementation NAACL 2018 Shashi Narayan, Shay B. Cohen, Mirella Lapata

In this paper we conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm which globally optimizes the ROUGE evaluation metric through a reinforcement learning objective.

Document Summarization Extractive Summarization +3
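The sentence-ranking-with-RL idea can be caricatured in a few lines. The following toy is not the paper's model: it scores each sentence independently, samples an extract, and uses REINFORCE with a unigram-F1 reward as a crude stand-in for ROUGE (the document, reference, and hyperparameters are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
doc = ["the cat sat", "stocks fell sharply", "the cat purred"]
reference = "the cat sat and purred"

def unigram_f1(summary, reference):
    # crude ROUGE-like reward: F1 over unigram sets
    s, r = set(summary.split()), set(reference.split())
    if not s or not r:
        return 0.0
    overlap = len(s & r)
    if overlap == 0:
        return 0.0
    p, rec = overlap / len(s), overlap / len(r)
    return 2 * p * rec / (p + rec)

theta = np.zeros(len(doc))                      # one score per sentence
lr = 0.5
for _ in range(300):
    probs = 1 / (1 + np.exp(-theta))            # independent inclusion probs
    picks = rng.random(len(doc)) < probs        # sample an extract
    summary = " ".join(s for s, k in zip(doc, picks) if k)
    reward = unigram_f1(summary, reference)
    # REINFORCE gradient for Bernoulli inclusion decisions
    theta += lr * reward * (picks.astype(float) - probs)

# the two on-topic sentences should now outscore the off-topic one
print(theta)
```

Because the reward is computed on the whole sampled extract, the gradient couples the per-sentence decisions globally, which is the point of optimizing the summary-level metric rather than per-sentence labels.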

Learning Typed Entailment Graphs with Global Soft Constraints

1 code implementation TACL 2018 Mohammad Javad Hosseini, Nathanael Chambers, Siva Reddy, Xavier R. Holt, Shay B. Cohen, Mark Johnson, Mark Steedman

We instead propose a scalable method that learns globally consistent similarity scores based on new soft constraints that consider both the structures across typed entailment graphs and inside each graph.

Graph Learning

Whodunnit? Crime Drama as a Case for Natural Language Understanding

1 code implementation TACL 2018 Lea Frermann, Shay B. Cohen, Mirella Lapata

In this paper we argue that crime drama exemplified in television programs such as CSI:Crime Scene Investigation is an ideal testbed for approximating real-world natural language understanding and the complex inferences associated with it.

Natural Language Understanding

Split and Rephrase

2 code implementations EMNLP 2017 Shashi Narayan, Claire Gardent, Shay B. Cohen, Anastasia Shimorina

We propose a new sentence simplification task (Split-and-Rephrase) where the aim is to split a complex sentence into a meaning preserving sequence of shorter sentences.

Machine Translation Split and Rephrase +1

Neural Extractive Summarization with Side Information

1 code implementation14 Apr 2017 Shashi Narayan, Nikos Papasarantopoulos, Shay B. Cohen, Mirella Lapata

Most extractive summarization methods focus on the main body of the document from which sentences need to be extracted.

Document Summarization Extractive Summarization +2

Cross-lingual Abstract Meaning Representation Parsing

1 code implementation NAACL 2018 Marco Damonte, Shay B. Cohen

Abstract Meaning Representation (AMR) annotation efforts have mostly focused on English.

Optimizing Spectral Learning for Parsing

no code implementations ACL 2016 Shashi Narayan, Shay B. Cohen

We describe a search algorithm for optimizing the number of latent states when estimating latent-variable PCFGs with spectral methods.

Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing

no code implementations WS 2016 Shashi Narayan, Siva Reddy, Shay B. Cohen

One of the limitations of semantic parsing approaches to open-domain question answering is the lexicosyntactic gap between natural language questions and knowledge base entries -- there are many ways to ask a question, all with the same answer.

Open-Domain Question Answering Paraphrase Generation +1

Low-Rank Approximation of Weighted Tree Automata

no code implementations4 Nov 2015 Guillaume Rabusseau, Borja Balle, Shay B. Cohen

We describe a technique to minimize weighted tree automata (WTA), a powerful formalism that subsumes probabilistic context-free grammars (PCFGs) and latent-variable PCFGs.

Encoding Prior Knowledge with Eigenword Embeddings

no code implementations TACL 2016 Dominique Osborne, Shashi Narayan, Shay B. Cohen

Canonical correlation analysis (CCA) is a method for reducing the dimension of data represented using two views.

Test Word Embeddings
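As background on the two-view setup, the canonical correlations are the singular values of the whitened cross-covariance between the views. The toy data below (a shared latent signal plus independent noise dimensions) is an illustrative assumption, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
z = rng.standard_normal((n, 1))                 # shared latent signal
X = np.hstack([z + 0.1 * rng.standard_normal((n, 1)),
               rng.standard_normal((n, 2))])    # view 1
Y = np.hstack([z + 0.1 * rng.standard_normal((n, 1)),
               rng.standard_normal((n, 2))])    # view 2

def first_canonical_corr(X, Y):
    n = len(X)
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx, Cyy = Xc.T @ Xc / n, Yc.T @ Yc / n
    Cxy = Xc.T @ Yc / n
    # whiten each view, then take singular values of the cross-covariance
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    s = np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)
    return s[0]

rho = first_canonical_corr(X, Y)
print(round(rho, 2))  # close to 1: the shared dimension is recovered
```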

Diversity in Spectral Learning for Natural Language Parsing

no code implementations EMNLP 2015 Shashi Narayan, Shay B. Cohen

We describe an approach to create a diverse set of predictions with spectral learning of latent-variable PCFGs (L-PCFGs).

Parsing Linear Context-Free Rewriting Systems with Fast Matrix Multiplication

no code implementations CL 2016 Shay B. Cohen, Daniel Gildea

Our result provides another proof for the best known result for parsing mildly context-sensitive formalisms such as combinatory categorial grammars, head grammars, linear indexed grammars, and tree adjoining grammars, which can be parsed in time $O(n^{4.76})$.

The Visualization of Change in Word Meaning over Time using Temporal Word Embeddings

no code implementations18 Oct 2014 Chiraag Lala, Shay B. Cohen

We describe a visualization tool that can be used to view the change in meaning of words over time.

Word Embeddings

Online Adaptor Grammars with Hybrid Inference

no code implementations TACL 2014 Ke Zhai, Jordan Boyd-Graber, Shay B. Cohen

Adaptor grammars are a flexible, powerful formalism for defining nonparametric, unsupervised models of grammar productions.

Topic Models Variational Inference

Tensor Decomposition for Fast Parsing with Latent-Variable PCFGs

no code implementations NeurIPS 2012 Michael Collins, Shay B. Cohen

We describe an approach to speed-up inference with latent variable PCFGs, which have been shown to be highly effective for natural language parsing.

Tensor Decomposition

Empirical Risk Minimization with Approximations of Probabilistic Grammars

no code implementations NeurIPS 2010 Noah A. Smith, Shay B. Cohen

Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures.
