Search Results for author: Shashi Narayan

Found 37 papers, 15 papers with code

A Thorough Evaluation of Task-Specific Pretraining for Summarization

no code implementations EMNLP 2021 Sascha Rothe, Joshua Maynez, Shashi Narayan

Task-agnostic pretraining objectives like masked language models or corrupted span prediction are applicable to a wide range of NLP downstream tasks (Raffel et al., 2019), but are outperformed by task-specific pretraining objectives like predicting extracted gap sentences on summarization (Zhang et al., 2020).
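The gap-sentence objective referenced here can be sketched in a few lines: select the sentences that best "summarize" the rest of the document, mask them in the input, and train the model to generate them as the target. The following is a toy illustration only, not the actual implementation from Zhang et al. (2020); the unigram-overlap scorer and the function names are simplifying assumptions.

```python
def rouge1_f(candidate, reference):
    """Unigram-overlap F-score, a rough stand-in for ROUGE-1."""
    c, r = set(candidate.split()), set(reference.split())
    if not c or not r:
        return 0.0
    overlap = len(c & r)
    prec, rec = overlap / len(c), overlap / len(r)
    return 0.0 if overlap == 0 else 2 * prec * rec / (prec + rec)

def make_gap_sentence_example(sentences, n_gaps=1, mask="<mask>"):
    """Build one pretraining example: mask the most 'summary-like'
    sentences (highest overlap with the rest of the document) in the
    input and use them, in order, as the generation target."""
    scored = []
    for i, s in enumerate(sentences):
        rest = " ".join(sentences[:i] + sentences[i + 1:])
        scored.append((rouge1_f(s, rest), i))
    picked = sorted(i for _, i in sorted(scored, reverse=True)[:n_gaps])
    src = " ".join(mask if i in picked else s for i, s in enumerate(sentences))
    tgt = " ".join(sentences[i] for i in picked)
    return src, tgt
```

The selected sentences act as a pseudo-summary, so the pretraining task resembles abstractive summarization more closely than generic span corruption does.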

A Well-Composed Text is Half Done! Composition Sampling for Diverse Conditional Generation

1 code implementation ACL 2022 Shashi Narayan, Gonçalo Simões, Yao Zhao, Joshua Maynez, Dipanjan Das, Michael Collins, Mirella Lapata

We propose Composition Sampling, a simple but effective method for generating diverse outputs in conditional generation that are of higher quality than those produced by previous stochastic decoding strategies.

Question Generation

MiRANews: Dataset and Benchmarks for Multi-Resource-Assisted News Summarization

1 code implementation Findings (EMNLP) 2021 Xinnuo Xu, Ondřej Dušek, Shashi Narayan, Verena Rieser, Ioannis Konstas

We show via data analysis that it is not only the models that are to blame: more than 27% of the facts mentioned in the gold summaries of MiRANews are better grounded in the assisting documents than in the main source articles.

Document Summarization · Multi-Document Summarization +1

Planning with Learned Entity Prompts for Abstractive Summarization

no code implementations 15 Apr 2021 Shashi Narayan, Yao Zhao, Joshua Maynez, Gonçalo Simões, Vitaly Nikolaev, Ryan McDonald

Moreover, we demonstrate empirically that planning with entity chains provides a mechanism to control hallucinations in abstractive summaries.

Abstractive Text Summarization · Text Generation

On Faithfulness and Factuality in Abstractive Summarization

2 code implementations ACL 2020 Joshua Maynez, Shashi Narayan, Bernd Bohnet, Ryan McDonald

It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation.

Abstractive Text Summarization · Document Summarization +3

QURIOUS: Question Generation Pretraining for Text Generation

no code implementations 23 Apr 2020 Shashi Narayan, Gonçalo Simões, Ji Ma, Hannah Craighead, Ryan McDonald

Recent trends in natural language processing have shifted focus towards pretraining and fine-tuning approaches for text generation.

Abstractive Text Summarization · Language Modelling +2

Sticking to the Facts: Confident Decoding for Faithful Data-to-Text Generation

no code implementations 19 Oct 2019 Ran Tian, Shashi Narayan, Thibault Sellam, Ankur P. Parikh

We address the issue of hallucination in data-to-text generation, i.e., reducing the generation of text that is unsupported by the source.

Data-to-Text Generation

What is this Article about? Extreme Summarization with Topic-aware Convolutional Neural Networks

1 code implementation 19 Jul 2019 Shashi Narayan, Shay B. Cohen, Mirella Lapata

We introduce "extreme summarization", a new single-document summarization task which aims at creating a short, one-sentence news summary answering the question "What is the article about?".

Document Summarization · Extreme Summarization

HighRES: Highlight-based Reference-less Evaluation of Summarization

1 code implementation ACL 2019 Hardy, Shashi Narayan, Andreas Vlachos

There has been substantial progress in summarization research enabled by the availability of novel, often large-scale, datasets and recent advances on neural network-based approaches.

Privacy-preserving Neural Representations of Text

1 code implementation EMNLP 2018 Maximin Coavoux, Shashi Narayan, Shay B. Cohen

This article deals with adversarial attacks towards deep learning systems for Natural Language Processing (NLP), in the context of privacy protection.

Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization

3 code implementations EMNLP 2018 Shashi Narayan, Shay B. Cohen, Mirella Lapata

We introduce extreme summarization, a new single-document summarization task which does not favor extractive strategies and calls for an abstractive modeling approach.

Document Summarization · Extreme Summarization

Deep Learning Approaches to Text Production

no code implementations NAACL 2018 Claire Gardent, Shashi Narayan

Each text production task raises a slightly different communication goal (e.g., how to take the dialogue context into account when producing a dialogue turn; how to detect and merge relevant information when summarising a text; or how to produce a well-formed text that correctly captures the information contained in some input data, in the case of data-to-text generation).

Data-to-Text Generation · Machine Translation +3

Ranking Sentences for Extractive Summarization with Reinforcement Learning

1 code implementation NAACL 2018 Shashi Narayan, Shay B. Cohen, Mirella Lapata

In this paper we conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm which globally optimizes the ROUGE evaluation metric through a reinforcement learning objective.
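The objective described above can be illustrated with a self-contained toy sketch. This is not the paper's model, which scores sentences with a neural encoder and uses full ROUGE; here a unigram-recall proxy stands in for the reward, and the "policy" is just a softmax over raw per-sentence scores updated with a REINFORCE-style gradient.

```python
import numpy as np

def rouge1_recall(extract, reference):
    """Unigram recall against the gold summary (a rough ROUGE-1 proxy)."""
    e, r = set(extract.split()), set(reference.split())
    return len(e & r) / len(r) if r else 0.0

def reinforce_step(scores, sentences, gold, k=2, lr=0.5, rng=None):
    """One REINFORCE update on per-sentence scores.

    A softmax over `scores` defines a sampling policy; k sentences are
    drawn (with replacement, which keeps the log-prob gradient exact),
    the resulting extract is rewarded, and scores move along the
    grad-log-prob direction weighted by (reward - greedy baseline)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    picked = rng.choice(len(sentences), size=k, replace=True, p=probs)
    extract = " ".join(sentences[i] for i in sorted(set(picked)))
    reward = rouge1_recall(extract, gold)
    # Baseline: reward of the current greedy top-k extract.
    greedy = np.argsort(-scores)[:k]
    baseline = rouge1_recall(" ".join(sentences[i] for i in sorted(greedy)), gold)
    # Gradient of sum_i log softmax(scores)[picked_i] w.r.t. the scores.
    grad = -float(k) * probs
    np.add.at(grad, picked, 1.0)
    return scores + lr * (reward - baseline) * grad
```

Because the reward is computed on the whole sampled extract rather than on each sentence in isolation, the update optimizes the summary-level metric globally, which is the core idea of the paper's training objective.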

Document Summarization · Extractive Summarization +2

Split and Rephrase

2 code implementations EMNLP 2017 Shashi Narayan, Claire Gardent, Shay B. Cohen, Anastasia Shimorina

We propose a new sentence simplification task (Split-and-Rephrase) where the aim is to split a complex sentence into a meaning-preserving sequence of shorter sentences.

Machine Translation · Split and Rephrase +1

Creating Training Corpora for NLG Micro-Planners

no code implementations ACL 2017 Claire Gardent, Anastasia Shimorina, Shashi Narayan, Laura Perez-Beltrachini

In this paper, we present a novel framework for semi-automatically creating linguistically challenging micro-planning data-to-text corpora from existing Knowledge Bases.

Data-to-Text Generation · Referring Expression +2

Neural Extractive Summarization with Side Information

1 code implementation 14 Apr 2017 Shashi Narayan, Nikos Papasarantopoulos, Shay B. Cohen, Mirella Lapata

Most extractive summarization methods focus on the main body of the document from which sentences need to be extracted.

Document Summarization · Extractive Summarization +2

Optimizing Spectral Learning for Parsing

no code implementations ACL 2016 Shashi Narayan, Shay B. Cohen

We describe a search algorithm for optimizing the number of latent states when estimating latent-variable PCFGs with spectral methods.

Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing

no code implementations WS 2016 Shashi Narayan, Siva Reddy, Shay B. Cohen

One of the limitations of semantic parsing approaches to open-domain question answering is the lexicosyntactic gap between natural language questions and knowledge base entries: there are many ways to ask a question, all with the same answer.

Open-Domain Question Answering · Paraphrase Generation +1

Encoding Prior Knowledge with Eigenword Embeddings

no code implementations TACL 2016 Dominique Osborne, Shashi Narayan, Shay B. Cohen

Canonical correlation analysis (CCA) is a method for reducing the dimension of data represented using two views.
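For reference, the canonical directions in CCA can be computed from the SVD of the whitened cross-covariance matrix; the singular values are the canonical correlations. The sketch below is generic CCA in plain numpy, not the paper's eigenword pipeline, and the small ridge term `reg` is an assumption added for numerical stability.

```python
import numpy as np

def cca(X, Y, k, reg=1e-8):
    """Top-k canonical directions for two views X (n x p) and Y (n x q).

    Whitens each view's covariance, then takes the SVD of the whitened
    cross-covariance Cxx^{-1/2} Cxy Cyy^{-1/2}."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Symmetric inverse square root via the eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    A = Wx @ U[:, :k]    # projection directions for view X
    B = Wy @ Vt[:k].T    # projection directions for view Y
    return A, B, s[:k]
```

Projecting each view with `A` and `B` yields pairs of variates whose correlations are maximized, which is the property the paper exploits to fold prior knowledge into the embeddings.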

Word Embeddings

Unsupervised Sentence Simplification Using Deep Semantics

1 code implementation WS 2016 Shashi Narayan, Claire Gardent

We present a novel approach to sentence simplification which departs from previous work in two main ways.

Text Simplification

Diversity in Spectral Learning for Natural Language Parsing

no code implementations EMNLP 2015 Shashi Narayan, Shay B. Cohen

We describe an approach to create a diverse set of predictions with spectral learning of latent-variable PCFGs (L-PCFGs).
