Uncertainty-Aware Abstractive Summarization

21 May 2021  ·  Alexios Gidiotis, Grigorios Tsoumakas

We propose a novel approach to summarization based on Bayesian deep learning. We approximate Bayesian summary generation by first extending state-of-the-art summarization models with Monte Carlo dropout and then using them to perform multiple stochastic forward passes. This method allows us to improve summarization performance simply by taking the median of multiple stochastic summaries. We show that our variational equivalents of BART and PEGASUS can outperform their deterministic counterparts on multiple benchmark datasets. In addition, we rely on Bayesian inference to measure the model's uncertainty when generating summaries. With a reliable uncertainty measure, we can improve the end-user experience by filtering out generated summaries of high uncertainty. Furthermore, our proposed metric could serve as a criterion for selecting samples for annotation, and pairs nicely with active learning and human-in-the-loop approaches.
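The core Monte Carlo dropout idea behind the paper can be sketched in a few lines of NumPy: keep dropout active at inference time, run several stochastic forward passes, aggregate the outputs with the median, and treat the spread of the samples as an uncertainty score. This is a toy scalar-regression illustration, not the authors' summarization pipeline; the linear model, the dropout rate, and all variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a single linear layer y = w . x with dropout on the inputs.
# (Illustrative values; any model with dropout layers works the same way.)
w = np.array([0.5, -1.2, 2.0, 0.7])
x = np.array([1.0, 2.0, -0.5, 3.0])
p_drop = 0.1          # dropout probability, kept active at inference
n_passes = 50         # number of stochastic forward passes

def stochastic_forward(x, w, p_drop, rng):
    """One MC-dropout pass: sample a Bernoulli mask and rescale (inverted dropout)."""
    mask = rng.random(x.shape) >= p_drop
    return float(w @ (x * mask) / (1.0 - p_drop))

# Multiple stochastic passes yield a distribution over outputs.
samples = [stochastic_forward(x, w, p_drop, rng) for _ in range(n_passes)]

prediction = np.median(samples)   # aggregate prediction (the paper takes the median summary)
uncertainty = np.std(samples)     # sample spread serves as an uncertainty score
print(f"prediction={prediction:.3f}  uncertainty={uncertainty:.3f}")
```

In the summarization setting, each stochastic pass produces a full summary rather than a scalar, and high-uncertainty outputs can then be filtered out or routed to a human annotator, as described above.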
