Automatic Reference-Based Evaluation of Pronoun Translation Misses the Point
We compare the performance of the APT and AutoPRF metrics for pronoun translation against a manually annotated dataset comprising human judgements of the correctness of translations from the PROTEST test suite. Although the metrics correlate with the human judgements to some extent, a range of issues limits their performance. We therefore recommend semi-automatic metrics and test suites in place of fully automatic metrics.

EMNLP 2018