Assessing the Quality of MT Systems for Hindi to English Translation

15 Apr 2014 · Aditi Kalyani, Hemant Kumud, Shashi Pal Singh, Ajai Kumar

Evaluation plays a vital role in checking the quality of MT output. It can be done either manually or automatically. Manual evaluation is time consuming and subjective, so automatic metrics are used most of the time. This paper evaluates the translation quality of different MT engines for Hindi-to-English translation (Hindi data is provided as input and English is obtained as output) using various automatic metrics such as BLEU and METEOR. Further, a comparison of the automatic evaluation results with human rankings is also given.
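To illustrate the kind of automatic metric the paper relies on, here is a minimal sentence-level BLEU sketch in pure Python. It assumes a single reference, uses add-one smoothing on the n-gram precisions, and expects non-empty pre-tokenized input; it is not the paper's exact implementation, only an approximation of the standard BLEU formulation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(reference, hypothesis, max_n=4):
    """Sentence-level BLEU with one reference (illustrative sketch).

    Assumes both inputs are non-empty lists of tokens.
    """
    precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(ngrams(hypothesis, n))
        ref_counts = Counter(ngrams(reference, n))
        # clipped n-gram matches: a hypothesis n-gram counts only as
        # often as it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = max(sum(hyp_counts.values()), 1)
        # add-one smoothing so a single zero precision does not
        # zero out the whole geometric mean
        precisions.append((overlap + 1) / (total + 1))
    # brevity penalty: punish hypotheses shorter than the reference
    bp = min(1.0, math.exp(1 - len(reference) / len(hypothesis)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat sat on the mat".split()
print(bleu(ref, "the cat sat on the mat".split()))  # identical -> 1.0
print(bleu(ref, "a cat sat on a mat".split()))      # partial match -> < 1.0
```

A perfect match scores 1.0, while substituting words lowers the clipped n-gram precisions and hence the score, which is the behavior the paper's metric comparison against human rankings probes.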
