Search Results for author: Viresh Ratnakar

Found 4 papers, 1 paper with code

Finding Replicable Human Evaluations via Stable Ranking Probability

no code implementations • 1 Apr 2024 • Parker Riley, Daniel Deutsch, George Foster, Viresh Ratnakar, Ali Dabirmoghaddam, Markus Freitag

Reliable human evaluation is critical to the development of successful natural language generation models, but achieving it is notoriously difficult.

Machine Translation · Text Generation

Prompting PaLM for Translation: Assessing Strategies and Performance

no code implementations • 16 Nov 2022 • David Vilar, Markus Freitag, Colin Cherry, Jiaming Luo, Viresh Ratnakar, George Foster

Large language models (LLMs) that have been trained on multilingual but not parallel text exhibit a remarkable ability to translate between languages.

Language Modelling · Machine Translation · +1

Experts, Errors, and Context: A Large-Scale Study of Human Evaluation for Machine Translation

3 code implementations • 29 Apr 2021 • Markus Freitag, George Foster, David Grangier, Viresh Ratnakar, Qijun Tan, Wolfgang Macherey

Human evaluation of modern high-quality machine translation systems is a difficult problem, and there is increasing evidence that inadequate evaluation procedures can lead to erroneous conclusions.

Machine Translation · Translation
