Style Transfer for Texts: Retrain, Report Errors, Compare with Rewrites

This paper shows that the standard assessment methodology for style transfer has several significant problems. First, the standard metrics for style accuracy and semantics preservation vary significantly across re-runs; one therefore has to report error margins for the obtained results. Second, beyond certain values of bilingual evaluation understudy (BLEU) between input and output and of sentiment-transfer accuracy, optimizing these two standard metrics diverges from the intuitive goal of the style transfer task. Finally, due to the nature of the task itself, there is a specific dependence between these two metrics that can be easily manipulated. Under these circumstances, we suggest that benchmarks also take into consideration BLEU between input and human-written reformulations. We also propose three new architectures that outperform the state of the art in terms of this metric.
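As a minimal sketch of the evaluation protocol argued for above, the snippet below reports a metric with error margins across re-runs and computes corpus BLEU against human-written reformulations. `run_style_transfer` is a hypothetical wrapper around a single training run of a style-transfer model (not from the paper); the BLEU computation uses the `sacrebleu` library.

```python
# Sketch of the reporting protocol described in the abstract, assuming:
#  - run_style_transfer(inputs, seed) is a hypothetical one-run wrapper,
#  - human_rewrites are reference reformulations written by annotators.
import statistics
import sacrebleu

def bleu_vs_human(candidates, human_rewrites):
    """Corpus BLEU between candidate sentences and human reformulations."""
    return sacrebleu.corpus_bleu(candidates, [human_rewrites]).score

def report_with_error_margins(inputs, human_rewrites, n_runs=5):
    """Re-run training n_runs times; report mean +/- std of the metric,
    as the paper argues one should for style-transfer benchmarks."""
    scores = []
    for seed in range(n_runs):
        outputs = run_style_transfer(inputs, seed=seed)  # hypothetical
        scores.append(bleu_vs_human(outputs, human_rewrites))
    return statistics.mean(scores), statistics.stdev(scores)
```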

PDF · Abstract (IJCNLP 2019)


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Text Style Transfer | Yelp Review Dataset (Small) | SAE+Discriminator | G-Score (BLEU, Accuracy) | 74.56 | #1 |
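The G-Score reported above combines BLEU and style-transfer accuracy into a single number; on this leaderboard it is presumably their geometric mean, though that combination rule is an assumption here, not taken from the paper. An illustrative sketch:

```python
import math

def g_score(bleu: float, accuracy: float) -> float:
    """Assumed definition: geometric mean of BLEU and style-transfer
    accuracy, both on a 0-100 scale. Illustrative only."""
    return math.sqrt(bleu * accuracy)

print(g_score(60.0, 92.0))  # ~74.30, with made-up component scores
```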

