Search Results for author: Peter J. Liu

Found 22 papers, 11 papers with code

Improving the Robustness of Summarization Models by Detecting and Removing Input Noise

no code implementations20 Dec 2022 Kundan Krishna, Yao Zhao, Jie Ren, Balaji Lakshminarayanan, Jiaming Luo, Mohammad Saleh, Peter J. Liu

We present a large empirical study quantifying the sometimes severe loss in performance (up to 12 ROUGE-1 points) from different types of input noise for a range of datasets and model sizes.

Abstractive Text Summarization
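
The fix the title describes, detect noise and then remove it, can be pictured with a small filtering sketch. This is an illustrative guess at the pipeline shape, not the paper's detector: `ood_score` is a toy heuristic standing in for the model-based scorers the authors study.

```python
# Hedged sketch: drop input sentences whose noise score exceeds a threshold
# before passing the document to the summarizer. The scorer is a toy
# heuristic (fraction of non-alphabetic characters), not the paper's method.

def ood_score(sentence: str) -> float:
    """Higher score = more likely to be noise (hypothetical stand-in)."""
    if not sentence:
        return 1.0
    clean = sum(c.isalpha() or c.isspace() for c in sentence)
    return 1.0 - clean / len(sentence)

def remove_noisy_sentences(document: list[str], threshold: float = 0.3) -> list[str]:
    """Keep only sentences that look in-distribution."""
    return [s for s in document if ood_score(s) < threshold]

doc = ["The council approved the budget.", ">>> ERROR 404 %%% <<<", "Voting ended at noon."]
print(remove_noisy_sentences(doc))  # the OCR-like middle line is removed
```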

Calibrating Sequence Likelihood Improves Conditional Language Generation

no code implementations30 Sep 2022 Yao Zhao, Misha Khalman, Rishabh Joshi, Shashi Narayan, Mohammad Saleh, Peter J. Liu

Conditional language models are predominantly trained with maximum likelihood estimation (MLE), giving probability mass to sparsely observed target sequences.

Abstractive Question Answering · Abstractive Text Summarization +4
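
The calibration idea lends itself to a compact sketch: after MLE training, nudge the model so that sequence likelihood tracks candidate quality. The pairwise margin loss below is a simplified stand-in for the paper's objectives; `rank_calibration_loss` and its inputs are illustrative names.

```python
# Hedged sketch of sequence-likelihood calibration: push the model to assign
# higher sequence log-likelihood to candidates that better match the
# reference. Simplified pairwise margin loss, not the paper's exact objective.
import itertools

def rank_calibration_loss(log_liks: list[float], qualities: list[float],
                          margin: float = 1.0) -> float:
    """log_liks[i] = model log p(candidate_i | source);
    qualities[i] = similarity of candidate_i to the reference (e.g. ROUGE)."""
    loss = 0.0
    for i, j in itertools.permutations(range(len(qualities)), 2):
        if qualities[i] > qualities[j]:  # candidate i should be more likely
            loss += max(0.0, margin - (log_liks[i] - log_liks[j]))
    return loss

# the better candidate (quality 0.9) is currently less likely -> positive loss
print(rank_calibration_loss([-4.0, -2.0], [0.9, 0.2]))  # 3.0
```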

Out-of-Distribution Detection and Selective Generation for Conditional Language Models

no code implementations30 Sep 2022 Jie Ren, Jiaming Luo, Yao Zhao, Kundan Krishna, Mohammad Saleh, Balaji Lakshminarayanan, Peter J. Liu

Furthermore, the space of potential low-quality outputs is larger, since arbitrary text can be generated, and it is important to know when to trust the generated output.

Abstractive Text Summarization · Out-of-Distribution Detection +1
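
Knowing when to trust the output suggests a simple abstention wrapper. This sketch assumes placeholder `summarize` and `ood_score` callables and a hand-picked threshold; it shows the control flow of selective generation, not the paper's scoring method.

```python
# Minimal sketch of selective generation: emit a summary only when the
# input's OOD score is below a threshold, otherwise abstain. Both callables
# are stand-ins, not the paper's models.

def selective_generate(source: str, summarize, ood_score, threshold: float):
    score = ood_score(source)
    if score >= threshold:
        return None, score           # abstain: output may not be trustworthy
    return summarize(source), score  # in-distribution: emit the summary

out, score = selective_generate(
    "A long news article ...",
    summarize=lambda s: s[:20] + "...",  # stand-in model
    ood_score=lambda s: 0.1,             # stand-in scorer
    threshold=0.5,
)
print(out, score)
```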

Investigating Efficiently Extending Transformers for Long Input Summarization

1 code implementation8 Aug 2022 Jason Phang, Yao Zhao, Peter J. Liu

While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs continues to be a significant challenge.

 Ranked #1 on Long-range modeling on SCROLLS (GovRep metric)

Long-range modeling · Text Summarization
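
One way such models keep long-input cost manageable is block-local attention, where tokens attend only within fixed-size blocks. The mask construction below is a generic sketch of that idea; production variants add global tokens and staggered block boundaries.

```python
# Hedged sketch of block-local attention: each token attends only within
# its own block, so cost grows linearly with input length rather than
# quadratically.
import numpy as np

def block_local_mask(seq_len: int, block_size: int) -> np.ndarray:
    """mask[i, j] is True iff positions i and j share a block."""
    blocks = np.arange(seq_len) // block_size
    return blocks[:, None] == blocks[None, :]

print(block_local_mask(seq_len=8, block_size=4).astype(int))
# attention is confined to two 4x4 diagonal blocks instead of a full 8x8
```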

SMART: Sentences as Basic Units for Text Evaluation

no code implementations1 Aug 2022 Reinald Kim Amplayo, Peter J. Liu, Yao Zhao, Shashi Narayan

Specifically, we treat sentences as the basic units of matching instead of tokens, and use a sentence matching function to soft-match candidate and reference sentences.

Text Generation
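
A rough sketch of that sentence-level soft matching, with a toy token-overlap F1 standing in as the sentence matching function; the paper's actual matching functions and aggregation differ.

```python
# Simplified sketch in the spirit of SMART: each candidate sentence is
# soft-matched to its best reference sentence, and precision/recall are
# averaged over sentences rather than tokens.

def token_f1(a: str, b: str) -> float:
    """Toy sentence matching function: token-overlap F1."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    overlap = len(ta & tb)
    if overlap == 0:
        return 0.0
    p, r = overlap / len(ta), overlap / len(tb)
    return 2 * p * r / (p + r)

def soft_sentence_f1(candidate: list[str], reference: list[str], sim=token_f1) -> float:
    p = sum(max(sim(c, r) for r in reference) for c in candidate) / len(candidate)
    r = sum(max(sim(r, c) for c in candidate) for r in reference) / len(reference)
    return 2 * p * r / (p + r) if p + r else 0.0

cand = ["The storm flooded the town.", "Residents were evacuated."]
ref = ["A storm flooded the coastal town.", "Officials evacuated residents quickly."]
print(round(soft_sentence_f1(cand, ref), 3))
```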

SEAL: Segment-wise Extractive-Abstractive Long-form Text Summarization

no code implementations18 Jun 2020 Yao Zhao, Mohammad Saleh, Peter J. Liu

Most prior work in the sequence-to-sequence paradigm focused on datasets with input sequence lengths in the hundreds of tokens due to the computational constraints of common RNN and Transformer architectures.

Abstractive Text Summarization
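
The segment-wise extractive-abstractive shape can be sketched as extract-then-abstract. `relevance` and `abstract` below are placeholder callables; the real model learns extraction jointly with generation.

```python
# Minimal extract-then-abstract sketch in the spirit of SEAL: score input
# segments, keep the top few in document order, and hand them to an
# abstractive model (both scoring and abstraction are stand-ins here).

def seal_style_summarize(segments: list[str], relevance, abstract, budget: int = 2) -> str:
    ranked = sorted(range(len(segments)), key=lambda i: relevance(segments[i]), reverse=True)
    keep = sorted(ranked[:budget])  # top segments, restored to input order
    return abstract(" ".join(segments[i] for i in keep))

print(seal_style_summarize(
    ["Intro boilerplate.", "Key finding: X improves Y.", "Methods detail.", "Conclusion: X works."],
    relevance=lambda s: len(set(s.lower().split()) & {"finding:", "conclusion:", "x"}),
    abstract=lambda text: "Summary: " + text,  # stand-in abstractive model
))
```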

PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization

17 code implementations ICML 2020 Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter J. Liu

Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks including text summarization.

Abstractive Text Summarization
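
The gap-sentence objective is easy to sketch: mask the most "important" sentences and pretrain the model to generate them from the rest. The importance score below is a toy overlap measure standing in for the ROUGE-based selection in the paper.

```python
# Sketch of PEGASUS-style gap-sentence generation: remove important
# sentences from the document (source) and use them as the target.

def gap_sentence_example(sentences: list[str], n_gaps: int = 1):
    def importance(i: int) -> float:
        # toy stand-in for ROUGE against the remainder of the document
        rest = set(" ".join(s for j, s in enumerate(sentences) if j != i).lower().split())
        toks = set(sentences[i].lower().split())
        return len(toks & rest) / max(len(toks), 1)

    gaps = set(sorted(range(len(sentences)), key=importance, reverse=True)[:n_gaps])
    source = " ".join("[MASK1]" if i in gaps else s for i, s in enumerate(sentences))
    target = " ".join(s for i, s in enumerate(sentences) if i in gaps)
    return source, target  # (pretraining input, pretraining target)

src, tgt = gap_sentence_example([
    "Storms hit the coast on Monday.",
    "The storms caused flooding along the coast.",
    "Officials urged residents to stay home.",
])
print(src)
print(tgt)
```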

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

39 code implementations arXiv 2019 Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).

Common Sense Reasoning · Linguistic Acceptability +7
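
The unified framing is simple to show in code: each task is rewritten as an input string with a task prefix and a target string. The prefixes follow the convention described in the paper; the helper itself is illustrative.

```python
# Sketch of the text-to-text framing: every task becomes "input text ->
# target text", so a single model, loss, and decoding procedure cover all
# of them.

def to_text_to_text(task: str, ex: dict) -> tuple[str, str]:
    if task == "translate_en_de":
        return "translate English to German: " + ex["en"], ex["de"]
    if task == "summarize":
        return "summarize: " + ex["document"], ex["summary"]
    if task == "cola":  # linguistic acceptability, with a textual label
        return "cola sentence: " + ex["sentence"], ex["label"]
    raise ValueError(f"unknown task: {task}")

print(to_text_to_text("summarize",
                      {"document": "A long article ...", "summary": "Short gist."}))
```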

SummAE: Zero-Shot Abstractive Text Summarization using Length-Agnostic Auto-Encoders

2 code implementations2 Oct 2019 Peter J. Liu, Yu-An Chung, Jie Ren

We show results for extractive and human baselines to demonstrate a large abstractive gap in performance.

Abstractive Text Summarization · Denoising
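
The zero-shot mechanism can be sketched as: auto-encode sentences and paragraphs into a shared latent space, then decode a paragraph's latent as if it were a single sentence. The bag-of-letters encoder and retrieval "decoder" below are toy stand-ins for the paper's sequence models.

```python
# Hedged sketch of the SummAE idea: a paragraph's latent, decoded at
# sentence length, acts as a zero-shot summary. Encoder/decoder are toys.
import numpy as np

def encode(text: str) -> np.ndarray:
    """Stand-in encoder: normalized bag-of-letters vector."""
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    return vec / max(np.linalg.norm(vec), 1e-9)

def decode_as_sentence(latent: np.ndarray, candidates: list[str]) -> str:
    """Stand-in decoder: nearest candidate sentence to the latent."""
    return max(candidates, key=lambda s: float(encode(s) @ latent))

paragraph = "The team trained a model. The model summarizes reviews. Results improved."
print(decode_as_sentence(encode(paragraph), paragraph.split(". ")))
```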

Assessing The Factual Accuracy of Generated Text

2 code implementations30 May 2019 Ben Goodrich, Vinay Rao, Mohammad Saleh, Peter J. Liu

We propose a model-based metric to estimate the factual accuracy of generated text that is complementary to typical scoring schemes like ROUGE (Recall-Oriented Understudy for Gisting Evaluation) and BLEU (Bilingual Evaluation Understudy).

Text Summarization
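
The metric's shape is straightforward to sketch: extract relational fact tuples from the reference and from the generated text, then score their agreement. Tuple extraction is assumed given below; the paper learns it with a model.

```python
# Sketch of a fact-tuple accuracy score in the spirit of the paper:
# (subject, relation, object) tuples from the generated text are checked
# against tuples from the reference.

def fact_accuracy(generated_facts: set, reference_facts: set) -> float:
    """Fraction of generated facts supported by the reference."""
    if not generated_facts:
        return 0.0
    return len(generated_facts & reference_facts) / len(generated_facts)

ref = {("marie curie", "born_in", "warsaw"), ("marie curie", "field", "physics")}
gen = {("marie curie", "born_in", "paris"), ("marie curie", "field", "physics")}
print(fact_accuracy(gen, ref))  # 0.5: the hallucinated birthplace is penalized
```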

Using Ontologies To Improve Performance In Massively Multi-label Prediction Models

no code implementations28 May 2019 Ethan Steinberg, Peter J. Liu

Massively multi-label prediction/classification problems arise in environments like health-care or biology where very precise predictions are useful.

Disease Prediction · General Classification +1
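
One way an ontology helps here can be sketched as ancestor propagation: each positive leaf label also activates its parents, so rare labels share statistical strength. The parent map below is a toy, not a real clinical ontology, and the paper's use of ontological structure is more involved.

```python
# Hedged sketch: propagate positive labels upward through a (toy) ontology
# so that rare leaf labels share training signal with their ancestors.

PARENT = {"type_2_diabetes": "diabetes",
          "type_1_diabetes": "diabetes",
          "diabetes": "endocrine_disorder"}

def expand_with_ancestors(labels: set[str]) -> set[str]:
    expanded, frontier = set(labels), list(labels)
    while frontier:
        parent = PARENT.get(frontier.pop())
        if parent and parent not in expanded:
            expanded.add(parent)
            frontier.append(parent)
    return expanded

print(expand_with_ancestors({"type_2_diabetes"}))
# {'type_2_diabetes', 'diabetes', 'endocrine_disorder'}
```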

Using Ontologies To Improve Performance In Massively Multi-label Prediction

no code implementations ICLR 2019 Ethan Steinberg, Peter J. Liu

Massively multi-label prediction/classification problems arise in environments like health-care or biology where it is useful to make very precise predictions.

Disease Prediction · General Classification +1

MeanSum: A Neural Model for Unsupervised Multi-document Abstractive Summarization

3 code implementations12 Oct 2018 Eric Chu, Peter J. Liu

Our proposed model consists of an auto-encoder where the mean of the representations of the input reviews decodes to a reasonable summary-review while not relying on any review-specific features.

Abstractive Text Summarization
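
The combination step named in the abstract is small enough to sketch directly: encode every review, average the latents, decode the mean. The hash-based encoder and nearest-review "decoder" are toy stand-ins for the learned auto-encoder.

```python
# Sketch of MeanSum's combination step: the mean of the review latents is
# decoded as the summary-review (here the "decoder" just retrieves the
# review nearest to the mean latent).
import numpy as np

def toy_encode(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(16)

def summarize_reviews(reviews: list[str]) -> str:
    mean_latent = np.stack([toy_encode(r) for r in reviews]).mean(axis=0)
    return min(reviews, key=lambda r: np.linalg.norm(toy_encode(r) - mean_latent))

print(summarize_reviews(["Great food, slow service.",
                         "Tasty dishes but long waits.",
                         "Loved the pasta."]))
```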

Unsupervised Neural Multi-Document Abstractive Summarization of Reviews

no code implementations27 Sep 2018 Eric Chu, Peter J. Liu

Our proposed model consists of an auto-encoder trained so that the mean of the representations of the input reviews decodes to a reasonable summary-review.

Abstractive Text Summarization

Learning to Write Notes in Electronic Health Records

no code implementations8 Aug 2018 Peter J. Liu

Clinicians spend a significant amount of time inputting free-form textual notes into Electronic Health Records (EHR) systems.

Language Modelling

Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs

3 code implementations ICLR 2018 W. James Murdoch, Peter J. Liu, Bin Yu

On the task of sentiment analysis with the Yelp and SST data sets, we show that CD is able to reliably identify words and phrases of contrasting sentiment, and how they are combined to yield the LSTM's final prediction.

Sentiment Analysis

Get To The Point: Summarization with Pointer-Generator Networks

40 code implementations ACL 2017 Abigail See, Peter J. Liu, Christopher D. Manning

Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text).

Abstractive Text Summarization · Document Summarization +1
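
The pointer-generator mixture itself fits in a few lines: blend the decoder's vocabulary distribution with an attention-derived copy distribution via a generation probability p_gen. Values below are hand-picked for illustration; in the model, p_gen is computed at each decoding step.

```python
# Minimal sketch of the pointer-generator output distribution:
# p_gen * P_vocab + (1 - p_gen) * copy distribution from source attention.
import numpy as np

def final_distribution(p_vocab: dict, attention: np.ndarray,
                       source_tokens: list[str], p_gen: float) -> dict:
    out = {w: p_gen * p for w, p in p_vocab.items()}
    for tok, a in zip(source_tokens, attention):
        out[tok] = out.get(tok, 0.0) + (1.0 - p_gen) * float(a)  # copy mass
    return out

dist = final_distribution(
    p_vocab={"the": 0.6, "victory": 0.4},
    attention=np.array([0.1, 0.9]),   # attends mostly to the rare name
    source_tokens=["the", "Wimbledon"],
    p_gen=0.7,
)
print(dist)  # out-of-vocabulary "Wimbledon" receives probability via copying
```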

Online and Linear-Time Attention by Enforcing Monotonic Alignments

2 code implementations ICML 2017 Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, Douglas Eck

Recurrent neural network models with an attention mechanism have proven to be extremely effective on a wide variety of sequence-to-sequence problems.

Machine Translation · Sentence Summarization +3
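
At inference time, hard monotonic attention scans forward from the previously attended position and stops at the first entry whose selection probability passes 0.5, which is what makes decoding online and linear-time; training uses the expected, differentiable alignment. The sketch below shows only that inference rule.

```python
# Hedged sketch of hard monotonic attention at test time: attention never
# moves backward, so total cost over the output is linear in input length.
import numpy as np

def monotonic_attend(energies: np.ndarray, prev_pos: int) -> int:
    """energies[j] scores attending to memory position j at this step."""
    for j in range(prev_pos, len(energies)):
        if 1.0 / (1.0 + np.exp(-energies[j])) > 0.5:  # sigmoid(e_j)
            return j
    return len(energies) - 1  # fallback: end of the memory

pos = 0
for step_energies in [np.array([-2.0, 1.5, 0.3]), np.array([-3.0, -1.0, 2.0])]:
    pos = monotonic_attend(step_energies, pos)
    print("attended position:", pos)  # 1, then 2: strictly monotonic
```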

Unsupervised Pretraining for Sequence to Sequence Learning

no code implementations EMNLP 2017 Prajit Ramachandran, Peter J. Liu, Quoc V. Le

We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models.

Abstractive Text Summarization · Machine Translation +1
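
The pretraining recipe reduces to an initialization rule: warm-start the encoder with a source-side language model and the decoder with a target-side one, then fine-tune on the supervised task (the paper also keeps the LM objective as a regularizer). The dict-based sketch below only illustrates the wiring.

```python
# Sketch of the initialization scheme: seq2seq encoder/decoder start from
# separately pretrained language models; plain dicts stand in for modules.

def init_seq2seq_from_lms(source_lm: dict, target_lm: dict) -> dict:
    return {
        "encoder": dict(source_lm),  # encoder <- source-side LM weights
        "decoder": dict(target_lm),  # decoder (incl. softmax) <- target LM
        "attention": "random_init",  # attention is new, trained from scratch
    }

model = init_seq2seq_from_lms({"embed": "W_src", "lstm": "U_src"},
                              {"embed": "W_tgt", "lstm": "U_tgt", "softmax": "V_tgt"})
print(sorted(model))
```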
