Search Results for author: Peter J. Liu

Found 16 papers, 10 papers with code

SEAL: Segment-wise Extractive-Abstractive Long-form Text Summarization

no code implementations • 18 Jun 2020 • Yao Zhao, Mohammad Saleh, Peter J. Liu

Most prior work in the sequence-to-sequence paradigm focused on datasets with input sequence lengths in the hundreds of tokens due to the computational constraints of common RNN and Transformer architectures.

Abstractive Text Summarization

PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization

12 code implementations • ICML 2020 • Jingqing Zhang, Yao Zhao, Mohammad Saleh, Peter J. Liu

Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when fine-tuned on downstream NLP tasks including text summarization.

Abstractive Text Summarization
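
The released PEGASUS checkpoints are easiest to try through the Hugging Face transformers library. A minimal summarization sketch, assuming the "google/pegasus-xsum" checkpoint and an illustrative input:

```python
# Minimal sketch: abstractive summarization with a fine-tuned PEGASUS
# checkpoint via Hugging Face transformers ("google/pegasus-xsum" assumed).
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

article = ("PEGASUS pre-trains a Transformer by removing whole sentences "
           "from a document and learning to regenerate them.")  # illustrative input
batch = tokenizer(article, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True)[0])
```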

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

20 code implementations • arXiv 2019 • Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu

Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).

Common Sense Reasoning • Question Answering • +3
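
T5's text-to-text framing expresses every task as a text prefix plus input string. A minimal sketch, assuming the public "t5-small" checkpoint via Hugging Face transformers and an illustrative input:

```python
# Minimal sketch of T5's text-to-text interface: the task is selected by a
# text prefix ("summarize:", "translate English to German:", etc.).
# Checkpoint "t5-small" and the input text are illustrative assumptions.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

text = ("summarize: Transfer learning pre-trains a model on a data-rich "
        "task before fine-tuning it on a downstream task.")
inputs = tokenizer(text, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```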

SummAE: Zero-Shot Abstractive Text Summarization using Length-Agnostic Auto-Encoders

2 code implementations • 2 Oct 2019 • Peter J. Liu, Yu-An Chung, Jie Ren

We show results for extractive and human baselines to demonstrate a large abstractive gap in performance.

Abstractive Text Summarization • Denoising

Likelihood Ratios for Out-of-Distribution Detection

3 code implementations • NeurIPS 2019 • Jie Ren, Peter J. Liu, Emily Fertig, Jasper Snoek, Ryan Poplin, Mark A. DePristo, Joshua V. Dillon, Balaji Lakshminarayanan

We propose a likelihood ratio method for deep generative models which effectively corrects for these confounding background statistics.

Out-of-Distribution Detection
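
The score itself is a subtraction: the log-likelihood under a "background" model (trained on perturbed inputs to capture population-level statistics) is removed from the full model's log-likelihood. A minimal sketch, where both models are hypothetical stand-ins exposing a log_prob method:

```python
# Sketch of the likelihood-ratio OOD score. `model` and `background_model`
# are hypothetical trained generative models exposing log_prob(x); the
# background model is trained on perturbed data, so the subtraction cancels
# confounding background statistics shared by all inputs.
def likelihood_ratio_score(x, model, background_model):
    return model.log_prob(x) - background_model.log_prob(x)  # higher => more in-distribution
```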

Assessing The Factual Accuracy of Generated Text

1 code implementation • 30 May 2019 • Ben Goodrich, Vinay Rao, Mohammad Saleh, Peter J. Liu

We propose a model-based metric to estimate the factual accuracy of generated text that is complementary to typical scoring schemes like ROUGE (Recall-Oriented Understudy for Gisting Evaluation) and BLEU (Bilingual Evaluation Understudy).

Text Summarization
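
For reference, the ROUGE scores that the proposed metric complements can be computed with Google's rouge-score package; a minimal sketch with illustrative strings:

```python
# Sketch: ROUGE, one of the scoring schemes the paper's factual-accuracy
# metric is meant to complement (pip install rouge-score).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(
    "the senator voted against the bill on tuesday",  # reference text
    "on tuesday the senator voted against the bill",  # generated text
)
print(scores["rouge1"].fmeasure, scores["rougeL"].fmeasure)
```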

Using Ontologies To Improve Performance In Massively Multi-label Prediction Models

no code implementations • 28 May 2019 • Ethan Steinberg, Peter J. Liu

Massively multi-label prediction/classification problems arise in environments like health-care or biology where very precise predictions are useful.

Disease Prediction • General Classification • +1

Using Ontologies To Improve Performance In Massively Multi-label Prediction

no code implementations • ICLR 2019 • Ethan Steinberg, Peter J. Liu

Massively multi-label prediction/classification problems arise in environments like health-care or biology where it is useful to make very precise predictions.

Disease Prediction • General Classification • +1

MeanSum: A Neural Model for Unsupervised Multi-document Abstractive Summarization

3 code implementations • 12 Oct 2018 • Eric Chu, Peter J. Liu

Our proposed model consists of an auto-encoder where the mean of the representations of the input reviews decodes to a reasonable summary-review while not relying on any review-specific features.

Abstractive Text Summarization
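
The combination step is compact enough to sketch: encode each review to a latent code, average the codes, and decode the mean back into text. `encoder` and `decoder` below are hypothetical stand-ins for the paper's trained auto-encoder components:

```python
# Sketch of the MeanSum combination step: the decoded "mean review" serves
# as the summary. `encoder` and `decoder` are hypothetical placeholders.
import torch

def mean_summary(reviews, encoder, decoder):
    codes = torch.stack([encoder(r) for r in reviews])  # one latent code per review
    mean_code = codes.mean(dim=0)                       # average representation
    return decoder(mean_code)                           # decode the mean to summary text
```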

Learning to Write Notes in Electronic Health Records

no code implementations • 8 Aug 2018 • Peter J. Liu

Clinicians spend a significant amount of time inputting free-form textual notes into Electronic Health Records (EHR) systems.

Language Modelling

Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs

3 code implementations • ICLR 2018 • W. James Murdoch, Peter J. Liu, Bin Yu

On the task of sentiment analysis with the Yelp and SST data sets, we show that CD is able to reliably identify words and phrases of contrasting sentiment, and how they are combined to yield the LSTM's final prediction.

Sentiment Analysis

Get To The Point: Summarization with Pointer-Generator Networks

39 code implementations • ACL 2017 • Abigail See, Peter J. Liu, Christopher D. Manning

Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text).

Abstractive Text Summarization • Extractive Text Summarization
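
The pointer-generator's defining step is a learned mixture between generating from the vocabulary and copying source tokens weighted by attention. A minimal sketch of that final distribution for a single decoding step (shapes are simplified and the OOV-extended vocabulary handling is omitted):

```python
# Sketch of the pointer-generator output mixture:
#   P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention on source copies of w
# Single decoding step; tensors are illustrative and OOV extension is omitted.
import torch

def final_distribution(p_gen, vocab_dist, attention, src_token_ids):
    copy_dist = torch.zeros_like(vocab_dist)
    copy_dist.scatter_add_(0, src_token_ids, attention)  # accumulate attention per source word
    return p_gen * vocab_dist + (1.0 - p_gen) * copy_dist
```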

Online and Linear-Time Attention by Enforcing Monotonic Alignments

2 code implementations • ICML 2017 • Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, Douglas Eck

Recurrent neural network models with an attention mechanism have proven to be extremely effective on a wide variety of sequence-to-sequence problems.

Machine Translation • Sentence Summarization • +1

Unsupervised Pretraining for Sequence to Sequence Learning

no code implementations • EMNLP 2017 • Prajit Ramachandran, Peter J. Liu, Quoc V. Le

We apply this method to challenging benchmarks in machine translation and abstractive summarization and find that it significantly improves the subsequent supervised models.

Abstractive Text Summarization • Machine Translation
