Does Summary Evaluation Survive Translation to Other Languages?

The creation of a quality summarization dataset is an expensive, time-consuming effort, requiring the production and evaluation of summaries by both trained humans and machines. If such an effort is made in one language, it would be beneficial to be able to reuse it in other languages without repeating the human annotations. To investigate how much we can trust machine translation of such a dataset, we translate the English SummEval dataset into seven languages and compare the performance of automatic evaluation measures across languages. We explore equivalence testing as the appropriate statistical paradigm for evaluating correlations between human and automated scoring of summaries. While we find some potential for dataset reuse in languages similar to the source, most summary evaluation methods are not found to be statistically equivalent across translations.

NAACL 2022
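As a rough illustration of the equivalence-testing idea mentioned in the abstract, the sketch below applies a two one-sided tests (TOST) procedure to a pair of Fisher-z-transformed correlations (e.g., a metric's correlation with human scores on the original English data versus on a translated copy). The equivalence margin `delta`, the sample sizes, and the assumption of independent samples are placeholder assumptions for illustration, not the paper's actual procedure or numbers.

```python
# Hedged sketch: TOST equivalence test for two correlation coefficients.
# Assumptions (not from the paper): independent samples, an equivalence
# margin of +/- delta on the Fisher-z scale, and made-up example values.

import numpy as np
from scipy import stats


def fisher_z(r: float) -> float:
    """Fisher z-transform of a correlation coefficient."""
    return np.arctanh(r)


def tost_correlations(r1: float, n1: int, r2: float, n2: int,
                      delta: float = 0.1, alpha: float = 0.05) -> bool:
    """Return True if the two correlations are statistically equivalent
    within +/- delta (on the Fisher-z scale) at significance level alpha."""
    diff = fisher_z(r1) - fisher_z(r2)
    # Standard error of the difference of two independent Fisher-z values.
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))

    # Two one-sided tests: H0a: diff <= -delta, H0b: diff >= +delta.
    p_lower = 1.0 - stats.norm.cdf((diff + delta) / se)  # test against lower bound
    p_upper = stats.norm.cdf((diff - delta) / se)        # test against upper bound

    # Equivalence is concluded only if both one-sided nulls are rejected.
    return max(p_lower, p_upper) < alpha


if __name__ == "__main__":
    # Illustrative values only: metric-human correlation on English vs. on a
    # machine-translated copy, each computed over 1600 hypothetical summaries.
    print(tost_correlations(r1=0.42, n1=1600, r2=0.38, n2=1600, delta=0.1))
```

Note that correlations computed on the same summaries in two languages are dependent; a faithful analysis would account for that, which this independent-samples sketch does not.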
