Search Results for author: Shashi Narayan

Found 48 papers, 16 papers with code

A Thorough Evaluation of Task-Specific Pretraining for Summarization

no code implementations • EMNLP 2021 • Sascha Rothe, Joshua Maynez, Shashi Narayan

Task-agnostic pretraining objectives like masked language models or corrupted span prediction are applicable to a wide range of NLP downstream tasks (Raffel et al., 2019), but are outperformed by task-specific pretraining objectives like predicting extracted gap sentences on summarization (Zhang et al., 2020).
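
To make the gap-sentence objective concrete, here is a minimal sketch of building one pretraining example in the spirit of Zhang et al. (2020): the most "central" sentences are masked in the input and become the generation target. The unigram-overlap scorer and the <mask> token are simplifying assumptions, not the paper's exact recipe (which selects sentences by ROUGE).

```python
# Illustrative gap-sentence pretraining example. The overlap scorer and
# <mask> token are stand-ins for the paper's ROUGE-based selection.

def overlap_score(sentence, others):
    """Unigram overlap between one sentence and the rest of the document."""
    s = set(sentence.lower().split())
    rest = set(w for o in others for w in o.lower().split())
    return len(s & rest) / max(len(s), 1)

def gap_sentence_example(sentences, num_gaps=1):
    """Mask the highest-scoring sentences; the model learns to generate them."""
    scores = [overlap_score(s, sentences[:i] + sentences[i + 1:])
              for i, s in enumerate(sentences)]
    gaps = sorted(range(len(sentences)), key=lambda i: -scores[i])[:num_gaps]
    source = " ".join("<mask>" if i in gaps else s
                      for i, s in enumerate(sentences))
    target = " ".join(sentences[i] for i in sorted(gaps))
    return source, target

doc = ["An earthquake struck the city at dawn.",
       "The earthquake injured dozens in the city.",
       "Markets were unaffected."]
print(gap_sentence_example(doc))
```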

Calibrating Likelihoods towards Consistency in Summarization Models

no code implementations • 12 Oct 2023 • Polina Zablotskaia, Misha Khalman, Rishabh Joshi, Livio Baldini Soares, Shoshana Jakobovits, Joshua Maynez, Shashi Narayan

Despite the recent advances in abstractive text summarization, current summarization models still suffer from generating factually inconsistent summaries, reducing their utility for real-world applications.

Abstractive Text Summarization • Natural Language Inference

μPLAN: Summarizing using a Content Plan as Cross-Lingual Bridge

no code implementations • 23 May 2023 • Fantine Huot, Joshua Maynez, Chris Alberti, Reinald Kim Amplayo, Priyanka Agrawal, Constanza Fierro, Shashi Narayan, Mirella Lapata

Cross-lingual summarization consists of generating a summary in one language given an input document in a different language, allowing for the dissemination of relevant content across speakers of other languages.

Text-Blueprint: An Interactive Platform for Plan-based Conditional Generation

no code implementations • 28 Apr 2023 • Fantine Huot, Joshua Maynez, Shashi Narayan, Reinald Kim Amplayo, Kuzman Ganchev, Annie Louis, Anders Sandholm, Dipanjan Das, Mirella Lapata

While conditional generation models can now generate natural language well enough to create fluent text, it is still difficult to control the generation process, leading to irrelevant, repetitive, and hallucinated content.

Text Generation

On Uncertainty Calibration and Selective Generation in Probabilistic Neural Summarization: A Benchmark Study

no code implementations • 17 Apr 2023 • Polina Zablotskaia, Du Phan, Joshua Maynez, Shashi Narayan, Jie Ren, Jeremiah Liu

Modern deep models for summarization attain impressive benchmark performance, but they are prone to generating miscalibrated predictive uncertainty.

Probabilistic Deep Learning

Little Red Riding Hood Goes Around the Globe: Crosslingual Story Planning and Generation with Large Language Models

no code implementations • 20 Dec 2022 • Evgeniia Razumovskaia, Joshua Maynez, Annie Louis, Mirella Lapata, Shashi Narayan

Previous work has demonstrated the effectiveness of planning for story generation exclusively in a monolingual setting, focusing primarily on English.

Story Generation

mFACE: Multilingual Summarization with Factual Consistency Evaluation

no code implementations • 20 Dec 2022 • Roee Aharoni, Shashi Narayan, Joshua Maynez, Jonathan Herzig, Elizabeth Clark, Mirella Lapata

Abstractive summarization has enjoyed renewed interest in recent years, thanks to pre-trained language models and the availability of large-scale datasets.

Abstractive Text Summarization

Query Refinement Prompts for Closed-Book Long-Form Question Answering

no code implementations • 31 Oct 2022 • Reinald Kim Amplayo, Kellie Webster, Michael Collins, Dipanjan Das, Shashi Narayan

Large language models (LLMs) have been shown to perform well in answering questions and in producing long-form texts, both in few-shot closed-book settings.

Long Form Question Answering

SMART: Sentences as Basic Units for Text Evaluation

no code implementations • 1 Aug 2022 • Reinald Kim Amplayo, Peter J. Liu, Yao Zhao, Shashi Narayan

Specifically, we treat sentences as the basic units of matching instead of tokens, and use a sentence matching function to soft-match candidate and reference sentences.

Text Generation
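
The snippet above describes the core idea of sentence-level soft matching. A rough sketch follows, using token-level F1 as a stand-in similarity; treat the details as assumptions, since the paper studies several matching functions.

```python
# Rough sketch of sentence-level soft matching in the spirit of SMART.
# Token F1 is a stand-in; the paper explores other matching functions.

def token_f1(a, b):
    """Simple token-overlap F1 between two sentences."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    if not sa or not sb:
        return 0.0
    overlap = len(sa & sb)
    p, r = overlap / len(sa), overlap / len(sb)
    return 2 * p * r / (p + r) if p + r else 0.0

def smart_like_score(candidate_sents, reference_sents):
    """Soft precision/recall over sentence pairs, combined as F1."""
    precision = sum(max(token_f1(c, r) for r in reference_sents)
                    for c in candidate_sents) / len(candidate_sents)
    recall = sum(max(token_f1(r, c) for c in candidate_sents)
                 for r in reference_sents) / len(reference_sents)
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

print(smart_like_score(["A quake hit the city."],
                       ["An earthquake struck the city.", "Dozens were hurt."]))
```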

Conditional Generation with a Question-Answering Blueprint

1 code implementation • 1 Jul 2022 • Shashi Narayan, Joshua Maynez, Reinald Kim Amplayo, Kuzman Ganchev, Annie Louis, Fantine Huot, Anders Sandholm, Dipanjan Das, Mirella Lapata

The ability to convey relevant and faithful information is critical for many tasks in conditional generation and yet remains elusive for neural seq-to-seq models whose outputs often reveal hallucinations and fail to correctly cover important details.

Question Answering • Question Generation +1

A Well-Composed Text is Half Done! Composition Sampling for Diverse Conditional Generation

1 code implementation • ACL 2022 • Shashi Narayan, Gonçalo Simões, Yao Zhao, Joshua Maynez, Dipanjan Das, Michael Collins, Mirella Lapata

We propose Composition Sampling, a simple but effective method for generating diverse, higher-quality outputs in conditional generation compared to previous stochastic decoding strategies.

Question Generation

MiRANews: Dataset and Benchmarks for Multi-Resource-Assisted News Summarization

1 code implementation • Findings (EMNLP) 2021 • Xinnuo Xu, Ondřej Dušek, Shashi Narayan, Verena Rieser, Ioannis Konstas

We show via data analysis that it is not only the models that are to blame: more than 27% of facts mentioned in the gold summaries of MiRANews are better grounded in the assisting documents than in the main source articles.

Document Summarization • Multi-Document Summarization +2

Planning with Learned Entity Prompts for Abstractive Summarization

no code implementations • 15 Apr 2021 • Shashi Narayan, Yao Zhao, Joshua Maynez, Gonçalo Simões, Vitaly Nikolaev, Ryan McDonald

Moreover, we demonstrate empirically that planning with entity chains provides a mechanism to control hallucinations in abstractive summaries.

Abstractive Text Summarization • Specificity +1
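
As a concrete illustration of planning with entity chains, the sketch below builds training targets in which the model must first emit the chain of entities and only then the summary. The [ENTITYCHAIN]/[SUMMARY] markers and the capitalization-based entity extractor are illustrative assumptions, not the paper's exact format; a real setup would use an NER system or gold entity annotations.

```python
# Minimal sketch of "plan, then generate" targets with entity chains.
# Markers and the toy extractor are assumptions for illustration only.

def naive_entities(text):
    """Toy extractor: unique capitalized tokens, in order of appearance."""
    seen, chain = set(), []
    for tok in text.split():
        w = tok.strip(".,;:!?")
        if w[:1].isupper() and w.lower() not in seen:
            seen.add(w.lower())
            chain.append(w)
    return chain

def entity_chain_target(summary):
    """Prefix the summary with its entity chain so the decoder plans first."""
    chain = " | ".join(naive_entities(summary))
    return f"[ENTITYCHAIN] {chain} [SUMMARY] {summary}"

print(entity_chain_target("Angela Merkel met Emmanuel Macron in Berlin."))
```

Conditioning generation on the emitted chain is what gives a handle on hallucination: entities absent from the planned chain should not surface in the summary.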

On Faithfulness and Factuality in Abstractive Summarization

2 code implementations • ACL 2020 • Joshua Maynez, Shashi Narayan, Bernd Bohnet, Ryan McDonald

It is well known that the standard likelihood training and approximate decoding objectives in neural text generation models lead to less human-like responses for open-ended tasks such as language modeling and story generation.

Abstractive Text Summarization • Document Summarization +3

QURIOUS: Question Generation Pretraining for Text Generation

no code implementations • 23 Apr 2020 • Shashi Narayan, Gonçalo Simões, Ji Ma, Hannah Craighead, Ryan McDonald

Recent trends in natural language processing using pretraining have shifted focus towards pretraining and fine-tuning approaches for text generation.

Abstractive Text Summarization • Language Modelling +3

Sticking to the Facts: Confident Decoding for Faithful Data-to-Text Generation

no code implementations • 19 Oct 2019 • Ran Tian, Shashi Narayan, Thibault Sellam, Ankur P. Parikh

We address the issue of hallucination in data-to-text generation, i.e., reducing the generation of text that is unsupported by the source.

Data-to-Text Generation

What is this Article about? Extreme Summarization with Topic-aware Convolutional Neural Networks

1 code implementation • 19 Jul 2019 • Shashi Narayan, Shay B. Cohen, Mirella Lapata

We introduce 'extreme summarization', a new single-document summarization task which aims at creating a short, one-sentence news summary answering the question "What is the article about?".

Document Summarization • Extreme Summarization

HighRES: Highlight-based Reference-less Evaluation of Summarization

1 code implementation • ACL 2019 • Hardy, Shashi Narayan, Andreas Vlachos

There has been substantial progress in summarization research enabled by the availability of novel, often large-scale, datasets and recent advances on neural network-based approaches.

Privacy-preserving Neural Representations of Text

1 code implementation • EMNLP 2018 • Maximin Coavoux, Shashi Narayan, Shay B. Cohen

This article deals with adversarial attacks towards deep learning systems for Natural Language Processing (NLP), in the context of privacy protection.

Privacy Preserving

Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization

3 code implementations • EMNLP 2018 • Shashi Narayan, Shay B. Cohen, Mirella Lapata

We introduce extreme summarization, a new single-document summarization task which does not favor extractive strategies and calls for an abstractive modeling approach.

Document Summarization • Extreme Summarization

Deep Learning Approaches to Text Production

no code implementations • NAACL 2018 • Claire Gardent, Shashi Narayan

Each text production task raises a slightly different communication goal (e.g., how to take the dialogue context into account when producing a dialogue turn; how to detect and merge relevant information when summarising a text; or how to produce a well-formed text that correctly captures the information contained in some input data in the case of data-to-text generation).

Data-to-Text Generation • Machine Translation +3

Ranking Sentences for Extractive Summarization with Reinforcement Learning

1 code implementation • NAACL 2018 • Shashi Narayan, Shay B. Cohen, Mirella Lapata

In this paper we conceptualize extractive summarization as a sentence ranking task and propose a novel training algorithm which globally optimizes the ROUGE evaluation metric through a reinforcement learning objective.

Document Summarization • Extractive Summarization +3
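
To illustrate the reinforcement learning objective in miniature, the sketch below trains a toy sentence scorer with REINFORCE against a summary-level reward. The random features, linear scorer, and unigram-recall reward are placeholders; the paper uses a learned document encoder and optimizes ROUGE against gold summaries.

```python
# Miniature REINFORCE sketch for extractive summarization as sentence
# ranking. Random features and unigram recall stand in for the paper's
# document encoder and ROUGE reward.
import torch
from torch.distributions import Bernoulli

def reward(selected, reference):
    """Stand-in for ROUGE: unigram recall of the reference summary."""
    ref = set(reference.lower().split())
    sel = set(w for s in selected for w in s.lower().split())
    return len(ref & sel) / max(len(ref), 1)

sentences = ["The quake struck at dawn.", "Markets stayed calm.", "Dozens were hurt."]
reference = "Dawn quake hurts dozens."
features = torch.randn(len(sentences), 8)   # placeholder sentence encodings
scorer = torch.nn.Linear(8, 1)              # toy ranking model
optim = torch.optim.Adam(scorer.parameters(), lr=1e-2)

for _ in range(200):
    probs = torch.sigmoid(scorer(features)).squeeze(-1)
    dist = Bernoulli(probs=probs)
    picks = dist.sample()                   # which sentences to extract
    chosen = [s for s, b in zip(sentences, picks) if b > 0]
    r = reward(chosen, reference)
    loss = -dist.log_prob(picks).sum() * r  # REINFORCE: reward-weighted log-prob
    optim.zero_grad()
    loss.backward()
    optim.step()
```

Because the reward is computed on the whole extracted summary rather than per sentence, the objective is optimized globally, which is the point the snippet above makes about ROUGE.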

Split and Rephrase

2 code implementations • EMNLP 2017 • Shashi Narayan, Claire Gardent, Shay B. Cohen, Anastasia Shimorina

We propose a new sentence simplification task (Split-and-Rephrase) where the aim is to split a complex sentence into a meaning-preserving sequence of shorter sentences.

Machine Translation • Split and Rephrase +1

Creating Training Corpora for NLG Micro-Planners

no code implementations • ACL 2017 • Claire Gardent, Anastasia Shimorina, Shashi Narayan, Laura Perez-Beltrachini

In this paper, we present a novel framework for semi-automatically creating linguistically challenging micro-planning data-to-text corpora from existing Knowledge Bases.

Data-to-Text Generation • Referring Expression +2

Neural Extractive Summarization with Side Information

1 code implementation • 14 Apr 2017 • Shashi Narayan, Nikos Papasarantopoulos, Shay B. Cohen, Mirella Lapata

Most extractive summarization methods focus on the main body of the document from which sentences need to be extracted.

Document Summarization • Extractive Summarization +2

Optimizing Spectral Learning for Parsing

no code implementations • ACL 2016 • Shashi Narayan, Shay B. Cohen

We describe a search algorithm for optimizing the number of latent states when estimating latent-variable PCFGs with spectral methods.

Paraphrase Generation from Latent-Variable PCFGs for Semantic Parsing

no code implementations • WS 2016 • Shashi Narayan, Siva Reddy, Shay B. Cohen

One of the limitations of semantic parsing approaches to open-domain question answering is the lexicosyntactic gap between natural language questions and knowledge base entries: there are many ways to ask a question, all with the same answer.

Open-Domain Question Answering • Paraphrase Generation +1

Encoding Prior Knowledge with Eigenword Embeddings

no code implementations • TACL 2016 • Dominique Osborne, Shashi Narayan, Shay B. Cohen

Canonical correlation analysis (CCA) is a method for reducing the dimension of data represented using two views.

Word Embeddings
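
Since the snippet defines CCA only in passing, here is a small sketch of the two-view idea behind eigenword-style embeddings: correlate a word view with a context view and keep the projected word view as embeddings. The random count matrices are placeholders for real word/context co-occurrence statistics.

```python
# Two-view CCA sketch in the spirit of eigenword embeddings. Random
# Poisson counts stand in for real co-occurrence matrices.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_words = 200
word_view = rng.poisson(1.0, (n_words, 50)).astype(float)     # view 1: word features
context_view = rng.poisson(1.0, (n_words, 50)).astype(float)  # view 2: context features

cca = CCA(n_components=16)
word_proj, _ = cca.fit_transform(word_view, context_view)
print(word_proj.shape)  # (200, 16): one 16-dim vector per word
```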

Unsupervised Sentence Simplification Using Deep Semantics

1 code implementation • WS 2016 • Shashi Narayan, Claire Gardent

We present a novel approach to sentence simplification which departs from previous work in two main ways.

Text Simplification

Diversity in Spectral Learning for Natural Language Parsing

no code implementations • EMNLP 2015 • Shashi Narayan, Shay B. Cohen

We describe an approach to create a diverse set of predictions with spectral learning of latent-variable PCFGs (L-PCFGs).
