Search Results for author: Shay B. Cohen

Found 70 papers, 26 papers with code

Text Generation from Discourse Representation Structures

no code implementations NAACL 2021 Jiangming Liu, Shay B. Cohen, Mirella Lapata

We propose neural models to generate text from formal meaning representations based on Discourse Representation Structures (DRSs).

Document-level Text Generation

Unsupervised Extractive Summarization by Human Memory Simulation

no code implementations 16 Apr 2021 Ronald Cardenas, Matthias Galle, Shay B. Cohen

We introduce a wide range of heuristics that leverage cognitive representations of content units and how these are retained or forgotten in human memory.

Extractive Summarization

Narration Generation for Cartoon Videos

no code implementations 17 Jan 2021 Nikos Papasarantopoulos, Shay B. Cohen

Research on text generation from multimodal inputs has largely focused on static images, and less on video data.

Text Generation

A Differentiable Relaxation of Graph Segmentation and Alignment for AMR Parsing

no code implementations 23 Oct 2020 Chunchuan Lyu, Shay B. Cohen, Ivan Titov

In contrast, we treat both alignment and segmentation as latent variables in our model and induce them as part of end-to-end training.

AMR Parsing

Nonparametric Learning of Two-Layer ReLU Residual Units

no code implementations 17 Aug 2020 Zhunxuan Wang, Linyun He, Chunchuan Lyu, Shay B. Cohen

We describe an algorithm that learns two-layer residual units with rectified linear unit (ReLU) activation: suppose the input $\mathbf{x}$ is drawn from a distribution supported on $\mathbb{R}^d$ and the ground-truth generative model is such a residual unit, given by \[\mathbf{y}= \boldsymbol{B}^\ast\left[\left(\boldsymbol{A}^\ast\mathbf{x}\right)^+ + \mathbf{x}\right]\text{,}\] where the ground-truth parameter $\boldsymbol{A}^\ast \in \mathbb{R}^{d\times d}$ is a nonnegative full-rank matrix, $\boldsymbol{B}^\ast \in \mathbb{R}^{m\times d}$ is full-rank with $m \geq d$, and $[\mathbf{c}^{+}]_i = \max\{0, c_i\}$ for $\mathbf{c} \in \mathbb{R}^d$.
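The generative model above can be sketched in a few lines of NumPy; the dimensions and random parameters below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 4, 6  # input dimension d and output dimension m >= d

A = np.abs(rng.standard_normal((d, d)))  # stands in for nonnegative full-rank A*
B = rng.standard_normal((m, d))          # stands in for full-rank B*

def residual_unit(x: np.ndarray) -> np.ndarray:
    """Compute y = B [ (A x)^+ + x ], where (c)^+ is elementwise max(0, c)."""
    return B @ (np.maximum(A @ x, 0.0) + x)

x = rng.standard_normal(d)
y = residual_unit(x)
print(y.shape)  # (6,)
```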

Dscorer: A Fast Evaluation Metric for Discourse Representation Structure Parsing

no code implementations ACL 2020 Jiangming Liu, Shay B. Cohen, Mirella Lapata

Discourse representation structures (DRSs) are scoped semantic representations for texts of arbitrary length.

Learning Dialog Policies from Weak Demonstrations

no code implementations ACL 2020 Gabriel Gordon-Hall, Philip John Gorinski, Shay B. Cohen

Deep reinforcement learning is a promising approach to training a dialog manager, but current methods struggle with the large state and action spaces of multi-domain dialog systems.

Atari Games Q-Learning

Multi-Step Inference for Reasoning Over Paragraphs

no code implementations EMNLP 2020 Jiangming Liu, Matt Gardner, Shay B. Cohen, Mirella Lapata

Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives.

Compositional Languages Emerge in a Neural Iterated Learning Model

1 code implementation ICLR 2020 Yi Ren, Shangmin Guo, Matthieu Labeau, Shay B. Cohen, Simon Kirby

The principle of compositionality, which enables natural language to represent complex concepts via a structured combination of simpler ones, allows us to convey an open-ended set of messages using a limited vocabulary.

Experimenting with Power Divergences for Language Modeling

no code implementations IJCNLP 2019 Matthieu Labeau, Shay B. Cohen

In this paper, we experiment with several families (alpha, beta and gamma) of power divergences, generalized from the KL divergence, for learning language models with an objective different than standard MLE.

Language Modelling
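To make the divergence families concrete, here is one standard parameterisation of the alpha-divergence (Amari's), which generalizes KL divergence; the exact parameterisations and training objectives used in the paper may differ:

```python
import numpy as np

def alpha_divergence(p, q, alpha):
    """Amari alpha-divergence between strictly positive discrete
    distributions p and q. Recovers KL(p||q) as alpha -> 1 and
    KL(q||p) as alpha -> 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isclose(alpha, 1.0):
        return float(np.sum(p * np.log(p / q)))
    if np.isclose(alpha, 0.0):
        return float(np.sum(q * np.log(q / p)))
    s = np.sum(p**alpha * q**(1.0 - alpha))
    return float((1.0 - s) / (alpha * (1.0 - alpha)))
```

For identical distributions the divergence is zero at every alpha, and it is nonnegative otherwise; varying alpha changes how heavily the objective penalizes over- versus under-estimating probability mass.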

Semantic Role Labeling with Iterative Structure Refinement

1 code implementation IJCNLP 2019 Chunchuan Lyu, Shay B. Cohen, Ivan Titov

Modern state-of-the-art Semantic Role Labeling (SRL) methods rely on expressive sentence encoders (e.g., multi-layer LSTMs) but tend to model only local (if any) interactions between individual argument labeling decisions.

Semantic Role Labeling

What is this Article about? Extreme Summarization with Topic-aware Convolutional Neural Networks

1 code implementation 19 Jul 2019 Shashi Narayan, Shay B. Cohen, Mirella Lapata

We introduce 'extreme summarization', a new single-document summarization task which aims at creating a short, one-sentence news summary answering the question "What is the article about?".

Document Summarization Extreme Summarization

Duality of Link Prediction and Entailment Graph Induction

1 code implementation ACL 2019 Mohammad Javad Hosseini, Shay B. Cohen, Mark Johnson, Mark Steedman

The new entailment score outperforms prior state-of-the-art results on a standard entailment dataset, and the new link prediction scores show improvements over the raw link prediction scores.

Link Prediction

Wide-Coverage Neural A* Parsing for Minimalist Grammars

no code implementations ACL 2019 John Torr, Milos Stanojevic, Mark Steedman, Shay B. Cohen

Minimalist Grammars (Stabler, 1997) are a computationally oriented and rigorous formalisation of many aspects of Chomsky's (1995) Minimalist Program.

Discourse Representation Parsing for Sentences and Documents

no code implementations ACL 2019 Jiangming Liu, Shay B. Cohen, Mirella Lapata

We introduce a novel semantic parsing task based on Discourse Representation Theory (DRT; Kamp and Reyle 1993).

Document-level Semantic Parsing

Obfuscation for Privacy-preserving Syntactic Parsing

1 code implementation WS 2020 Zhifeng Hu, Serhii Havrylov, Ivan Titov, Shay B. Cohen

We introduce an idea for a privacy-preserving transformation on natural language data, inspired by homomorphic encryption.

Structural Neural Encoders for AMR-to-text Generation

2 code implementations NAACL 2019 Marco Damonte, Shay B. Cohen

AMR-to-text generation is a problem recently introduced to the NLP community, in which the goal is to generate sentences from Abstract Meaning Representation (AMR) graphs.

AMR-to-Text Generation Graph-to-Sequence +1

Unlexicalized Transition-based Discontinuous Constituency Parsing

1 code implementation TACL 2019 Maximin Coavoux, Benoît Crabbé, Shay B. Cohen

Lexicalized parsing models are based on the assumptions that (i) constituents are organized around a lexical head, and (ii) bilexical statistics are crucial to solve ambiguities.

Constituency Parsing

Multilingual Clustering of Streaming News

2 code implementations EMNLP 2018 Sebastião Miranda, Artūrs Znotiņš, Shay B. Cohen, Guntis Barzdins

Clustering news across languages enables efficient media monitoring by aggregating articles from multilingual sources into coherent stories.

Privacy-preserving Neural Representations of Text

1 code implementation EMNLP 2018 Maximin Coavoux, Shashi Narayan, Shay B. Cohen

This article deals with adversarial attacks towards deep learning systems for Natural Language Processing (NLP), in the context of privacy protection.

Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization

2 code implementations EMNLP 2018 Shashi Narayan, Shay B. Cohen, Mirella Lapata

We introduce extreme summarization, a new single-document summarization task which does not favor extractive strategies and calls for an abstractive modeling approach.

Document Summarization Extreme Summarization

Stock Movement Prediction from Tweets and Historical Prices

1 code implementation ACL 2018 Yumo Xu, Shay B. Cohen

Stock movement prediction is a challenging problem: the market is highly stochastic, and we make temporally-dependent predictions from chaotic data.

Feature Engineering Stock Trend Prediction +2

Discourse Representation Structure Parsing

1 code implementation ACL 2018 Jiangming Liu, Shay B. Cohen, Mirella Lapata

We introduce an open-domain neural semantic parser which generates formal meaning representations in the style of Discourse Representation Theory (DRT; Kamp and Reyle 1993).

Question Answering Semantic Parsing

Abstract Meaning Representation for Paraphrase Detection

no code implementations NAACL 2018 Fuad Issa, Marco Damonte, Shay B. Cohen, Xiaohui Yan, Yi Chang

Abstract Meaning Representation (AMR) parsing aims at abstracting away from the syntactic realization of a sentence, denoting only its meaning in a canonical form.

AMR Parsing

Ranking Sentences for Extractive Summarization with Reinforcement Learning

1 code implementation NAACL 2018 Shashi Narayan, Shay B. Cohen, Mirella Lapata

In this paper we conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm which globally optimizes the ROUGE evaluation metric through a reinforcement learning objective.

Document Summarization Extractive Summarization +1
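The reinforcement-learning objective above can be illustrated with a minimal REINFORCE sketch; the reward below is a toy unigram-F1 stand-in for ROUGE, and the independent-sampling model of sentence selection is a simplifying assumption, not the paper's exact formulation:

```python
import numpy as np

def unigram_f1(pred: str, ref: str) -> float:
    """Toy stand-in for a ROUGE-style reward over a reference summary."""
    p, r = set(pred.split()), set(ref.split())
    overlap = len(p & r)
    if overlap == 0:
        return 0.0
    prec, rec = overlap / len(p), overlap / len(r)
    return 2 * prec * rec / (prec + rec)

def reinforce_grad(scores, sampled_idx, reward, baseline=0.0):
    """Gradient w.r.t. sentence scores of (reward - baseline) * log p(sample),
    assuming each selected sentence is drawn from a softmax over scores."""
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    grad = -probs * len(sampled_idx)
    for i in sampled_idx:
        grad[i] += 1.0
    return (reward - baseline) * grad

grad = reinforce_grad(np.array([1.0, 2.0, 3.0]), [0, 2],
                      reward=unigram_f1("a b c", "a b d"))
```

Sentences sampled into summaries that score well against the reference get their scores pushed up, which is how the ranking objective is optimized globally rather than per sentence.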

Learning Typed Entailment Graphs with Global Soft Constraints

1 code implementation TACL 2018 Mohammad Javad Hosseini, Nathanael Chambers, Siva Reddy, Xavier R. Holt, Shay B. Cohen, Mark Johnson, Mark Steedman

We instead propose a scalable method that learns globally consistent similarity scores based on new soft constraints that consider both the structures across typed entailment graphs and inside each graph.

Graph Learning

Whodunnit? Crime Drama as a Case for Natural Language Understanding

1 code implementation TACL 2018 Lea Frermann, Shay B. Cohen, Mirella Lapata

In this paper we argue that crime drama exemplified in television programs such as CSI:Crime Scene Investigation is an ideal testbed for approximating real-world natural language understanding and the complex inferences associated with it.

Natural Language Understanding

Split and Rephrase

1 code implementation EMNLP 2017 Shashi Narayan, Claire Gardent, Shay B. Cohen, Anastasia Shimorina

We propose a new sentence simplification task (Split-and-Rephrase) where the aim is to split a complex sentence into a meaning preserving sequence of shorter sentences.

Machine Translation Split and Rephrase

Cross-lingual Abstract Meaning Representation Parsing

1 code implementation NAACL 2018 Marco Damonte, Shay B. Cohen

Abstract Meaning Representation (AMR) annotation efforts have mostly focused on English.

Neural Extractive Summarization with Side Information

1 code implementation 14 Apr 2017 Shashi Narayan, Nikos Papasarantopoulos, Shay B. Cohen, Mirella Lapata

Most extractive summarization methods focus on the main body of the document from which sentences need to be extracted.

Document Summarization Extractive Summarization +1

Optimizing Spectral Learning for Parsing

no code implementations ACL 2016 Shashi Narayan, Shay B. Cohen

We describe a search algorithm for optimizing the number of latent states when estimating latent-variable PCFGs with spectral methods.

Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing

no code implementations WS 2016 Shashi Narayan, Siva Reddy, Shay B. Cohen

One of the limitations of semantic parsing approaches to open-domain question answering is the lexicosyntactic gap between natural language questions and knowledge base entries -- there are many ways to ask a question, all with the same answer.

Open-Domain Question Answering Paraphrase Generation +1

Low-Rank Approximation of Weighted Tree Automata

no code implementations 4 Nov 2015 Guillaume Rabusseau, Borja Balle, Shay B. Cohen

We describe a technique to minimize weighted tree automata (WTA), a powerful formalism that subsumes probabilistic context-free grammars (PCFGs) and latent-variable PCFGs.

Encoding Prior Knowledge with Eigenword Embeddings

no code implementations TACL 2016 Dominique Osborne, Shashi Narayan, Shay B. Cohen

Canonical correlation analysis (CCA) is a method for reducing the dimension of data represented using two views.

Word Embeddings
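A textbook CCA can be sketched as an SVD of the whitened cross-covariance between the two views; this is the standard formulation, not the paper's specific prior-knowledge extension, and the small ridge term is an illustrative numerical safeguard:

```python
import numpy as np

def cca(X, Y, k, reg=1e-8):
    """Top-k canonical directions and correlations for views X (n x dx)
    and Y (n x dy), via SVD of the whitened cross-covariance."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        w, V = np.linalg.eigh(C)  # C is symmetric positive definite
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return Wx @ U[:, :k], Wy @ Vt[:k].T, s[:k]

# Two views sharing one latent signal: first canonical correlation near 1.
rng = np.random.default_rng(0)
z = rng.standard_normal((500, 1))
X = np.hstack([z, rng.standard_normal((500, 2))])
Y = np.hstack([z, rng.standard_normal((500, 2))])
wx, wy, corr = cca(X, Y, 1)
```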

Diversity in Spectral Learning for Natural Language Parsing

no code implementations EMNLP 2015 Shashi Narayan, Shay B. Cohen

We describe an approach to create a diverse set of predictions with spectral learning of latent-variable PCFGs (L-PCFGs).

Parsing Linear Context-Free Rewriting Systems with Fast Matrix Multiplication

no code implementations CL 2016 Shay B. Cohen, Daniel Gildea

Our result provides another proof for the best known result for parsing mildly context-sensitive formalisms such as combinatory categorial grammars, head grammars, linear indexed grammars, and tree adjoining grammars, which can be parsed in time $O(n^{4.76})$.

The Visualization of Change in Word Meaning over Time using Temporal Word Embeddings

no code implementations 18 Oct 2014 Chiraag Lala, Shay B. Cohen

We describe a visualization tool that can be used to view the change in meaning of words over time.

Word Embeddings

Online Adaptor Grammars with Hybrid Inference

no code implementations TACL 2014 Ke Zhai, Jordan Boyd-Graber, Shay B. Cohen

Adaptor grammars are a flexible, powerful formalism for defining nonparametric, unsupervised models of grammar productions.

Topic Models Variational Inference

Tensor Decomposition for Fast Parsing with Latent-Variable PCFGs

no code implementations NeurIPS 2012 Michael Collins, Shay B. Cohen

We describe an approach to speed-up inference with latent variable PCFGs, which have been shown to be highly effective for natural language parsing.

Tensor Decomposition

Empirical Risk Minimization with Approximations of Probabilistic Grammars

no code implementations NeurIPS 2010 Noah A. Smith, Shay B. Cohen

Probabilistic grammars are generative statistical models that are useful for compositional and sequential structures.
