Generating Diverse and Consistent QA pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs

One of the most crucial challenges in question answering (QA) is the scarcity of labeled data, since it is costly to obtain question-answer pairs for a target text domain with human annotation. An alternative approach to tackle the problem is to use automatically generated QA pairs from either the problem context or from large amounts of unstructured text (e.g., Wikipedia). In this work, we propose a hierarchical conditional variational autoencoder (HCVAE) that generates QA pairs given unstructured texts as contexts, while maximizing the mutual information between the generated question and answer to ensure their consistency. We validate our Information-Maximizing Hierarchical Conditional Variational AutoEncoder (Info-HCVAE) on several benchmark datasets against state-of-the-art baselines, evaluating the performance of a QA model (BERT-base) trained either on the generated QA pairs alone (QA-based evaluation) or on both the generated and human-labeled pairs (semi-supervised learning). The results show that our model obtains impressive performance gains over all baselines on both tasks, using only a fraction of the data for training.
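The training objective sketched in the abstract combines a conditional-VAE ELBO (reconstruction term minus a KL between the approximate posterior and a context-conditioned prior) with a lower bound on the mutual information between the generated question and answer. The snippet below is a minimal numerical sketch of that general shape, not the paper's actual model: it uses toy diagonal-Gaussian latent statistics, an InfoNCE-style MI lower bound over in-batch negatives, and a stand-in constant for the decoder log-likelihood. All names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between diagonal Gaussians, summed over dimensions."""
    return 0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0
    )

def infonce_mi_lower_bound(q_emb, a_emb):
    """InfoNCE-style lower bound on MI between paired question/answer
    embeddings: true pairs on the diagonal, in-batch negatives elsewhere."""
    scores = q_emb @ a_emb.T                             # (N, N) similarities
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    n = len(q_emb)
    return log_softmax.trace() / n + np.log(n)           # bounded above by log N

# Toy latent statistics for one context: posterior q(z|x,c) vs. prior p(z|c).
mu_q, logvar_q = rng.normal(size=8), 0.1 * rng.normal(size=8)
mu_p, logvar_p = np.zeros(8), np.zeros(8)
kl = gaussian_kl(mu_q, logvar_q, mu_p, logvar_p)

# Toy paired question/answer embeddings for the consistency (MI) term.
q_emb = rng.normal(size=(16, 8))
a_emb = q_emb + 0.1 * rng.normal(size=(16, 8))  # correlated, as for true pairs

mi = infonce_mi_lower_bound(q_emb, a_emb)

recon_log_lik = -42.0  # stand-in for the decoder reconstruction term
lam = 1.0              # illustrative weight on the MI regularizer
objective = recon_log_lik - kl + lam * mi
print(f"KL={kl:.3f}  MI_bound={mi:.3f}  objective={objective:.3f}")
```

Maximizing the MI term pushes each generated question to be more predictive of its own answer than of the other answers in the batch, which is one way to operationalize the "consistency" constraint described above.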

PDF | Abstract (ACL 2020)

Results from the Paper

Task                 Dataset            Model       Metric Name  Metric Value  Global Rank
Question Generation  Natural Questions  Info-HCVAE  QAE          37.18         # 1
Question Generation  Natural Questions  Info-HCVAE  R-QAE        29.39         # 2
Question Generation  Natural Questions  HCVAE       QAE          31.45         # 2
Question Generation  Natural Questions  HCVAE       R-QAE        32.78         # 1
Question Generation  SQuAD              Info-HCVAE  QAE          71.18         # 1
Question Generation  SQuAD              Info-HCVAE  R-QAE        38.8          # 1
Question Generation  SQuAD              HCVAE       QAE          69.46         # 2
Question Generation  SQuAD              HCVAE       R-QAE        37.57         # 2
Question Generation  TriviaQA           Info-HCVAE  QAE          35.45         # 1
Question Generation  TriviaQA           Info-HCVAE  R-QAE        21.65         # 2
Question Generation  TriviaQA           HCVAE       QAE          30.2          # 2
Question Generation  TriviaQA           HCVAE       R-QAE        34.41         # 1