Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation 2019

Jointly Measuring Diversity and Quality in Text Generation Models

Code: Ehsan-MAE/TextGenerationEvaluationMetrics

In this paper, we propose metrics that jointly evaluate quality and diversity by approximating the distance between the learned generative model and the real data distribution.
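The abstract frames evaluation as estimating a distance between the model's output distribution and the data distribution. The paper's actual metrics are not reproduced here; as a purely illustrative toy sketch of that idea, one could compare the unigram distributions of a real and a generated corpus via Jensen-Shannon divergence (all function names below are hypothetical, not from the paper or its repository):

```python
from collections import Counter
import math

def unigram_dist(corpus):
    """Unigram probability distribution over a list of token lists."""
    counts = Counter(tok for sent in corpus for tok in sent)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two sparse distributions.

    Returns 0 for identical distributions; grows as they diverge,
    e.g. when generated text is fluent but low-diversity."""
    vocab = set(p) | set(q)
    m = {t: 0.5 * (p.get(t, 0.0) + q.get(t, 0.0)) for t in vocab}

    def kl(a, b):
        # KL(a || b); m's support covers a's, so no division by zero
        return sum(a[t] * math.log(a[t] / b[t]) for t in a if a[t] > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

real = [["the", "cat", "sat"], ["the", "dog", "ran"]]
generated = [["the", "cat", "sat"], ["the", "cat", "sat"]]
print(js_divergence(unigram_dist(real), unigram_dist(generated)))
```

A mode-collapsed generator (repeating one sentence, as above) yields a larger divergence from the real distribution than a generator that matches it, which is the intuition behind distance-based joint metrics.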
