Evidence-Aware Inferential Text Generation with Vector Quantised Variational AutoEncoder

Generating inferential texts about an event from different perspectives requires reasoning over the different contexts in which the event occurs. Existing works usually ignore context that is not explicitly provided, resulting in a context-independent semantic representation that struggles to support generation. To address this, we propose an approach that automatically finds evidence for an event from a large text corpus and leverages that evidence to guide the generation of inferential texts. Our approach works in an encoder-decoder manner and is equipped with a Vector Quantised-Variational Autoencoder, whose encoder outputs representations from a distribution over discrete variables. Such discrete representations enable automatic selection of relevant evidence, which not only facilitates evidence-aware generation but also provides a natural way to uncover the rationales behind the generation. Our approach achieves state-of-the-art performance on both the Event2Mind and ATOMIC datasets. More importantly, we find that with discrete representations, our model selectively uses evidence to generate different inferential texts.
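The core of the VQ-VAE component mentioned in the abstract is a quantization step: each continuous encoder output is snapped to its nearest entry in a learned codebook of discrete latent vectors, and the resulting index can then act as a discrete key for selecting evidence. The sketch below illustrates only that nearest-neighbor quantization step under assumed shapes; it is not the authors' implementation (function and variable names are illustrative).

```python
import numpy as np

def vector_quantize(z_e, codebook):
    """Map each continuous encoder output to its nearest codebook entry.

    z_e:      (batch, dim) continuous encoder outputs (illustrative)
    codebook: (K, dim) learned discrete latent embeddings (illustrative)
    Returns (indices, z_q): discrete code indices and quantized vectors.
    """
    # Squared Euclidean distance from every z_e row to every codebook row.
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)   # one discrete latent code per input
    z_q = codebook[indices]          # quantized (discrete) representation
    return indices, z_q

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # K=8 codes of dimension 4
# Encoder outputs lying close to codes 2 and 5, plus small noise.
z_e = codebook[[2, 5]] + 0.01 * rng.normal(size=(2, 4))
indices, z_q = vector_quantize(z_e, codebook)
```

In a full VQ-VAE, gradients are passed through the non-differentiable `argmin` with a straight-through estimator, and the codebook is trained with a commitment loss; here only the forward quantization is shown.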

PDF · Abstract (ACL 2020)

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Common Sense Reasoning | Event2Mind test | EA-VQ-VAE | BLEU | 11.31 | #1 |
| Common Sense Reasoning | Event2Mind test | COMET* | BLEU | 10.37 | #2 |
| Common Sense Reasoning | Event2Mind test | S2S* | BLEU | 9.43 | #3 |
| Common Sense Reasoning | Event2Mind test | CWVAE | BLEU | 8.53 | #4 |
| Common Sense Reasoning | Event2Mind test | VRNMT | BLEU | 4.03 | #5 |

Methods


No methods listed for this paper.