Search Results for author: Amr Hendy

Found 7 papers, 2 papers with code

Language Tokens: Simply Improving Zero-Shot Multi-Aligned Translation in Encoder-Decoder Models

no code implementations • AMTA 2022 • Muhammad N. ElNokrashy, Amr Hendy, Mohamed Maher, Mohamed Afify, Hany Hassan

In a WMT-based setting, we see improvements of 1.3 and 0.4 BLEU points in the zero-shot setting and when using direct data for training, respectively, while from-English performance improves by 4.17 and 0.85 BLEU points.

Translation
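A minimal sketch of the language-token idea described above, assuming the target-language tag is prepended to the source text during preprocessing (the token format and placement here are illustrative assumptions, not the paper's verbatim scheme):

```python
# Sketch: prepend a target-language token so an encoder-decoder model
# can condition on the desired output language. Token format is assumed.

def add_language_token(source: str, tgt_lang: str) -> str:
    """Prefix the source sentence with a target-language tag, e.g. <2de>."""
    return f"<2{tgt_lang}> {source}"

pairs = [
    ("Hello world", "de"),
    ("Good morning", "fr"),
]
for src, tgt in pairs:
    print(add_language_token(src, tgt))
# <2de> Hello world
# <2fr> Good morning
```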

How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation

1 code implementation • 18 Feb 2023 • Amr Hendy, Mohamed Abdelrehim, Amr Sharaf, Vikas Raunak, Mohamed Gabr, Hitokazu Matsushita, Young Jin Kim, Mohamed Afify, Hany Hassan Awadalla

In this paper, we present a comprehensive evaluation of GPT models for machine translation, covering various aspects such as the quality of different GPT models in comparison with state-of-the-art research and commercial systems, the effect of prompting strategies, robustness to domain shifts, and document-level translation.

Machine Translation • Text Generation +1
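As a hedged illustration of the prompting strategies such an evaluation compares, here is one way to build zero-shot and few-shot translation prompts; the prompt wording is an assumption for illustration, not the paper's exact templates:

```python
# Sketch: zero-shot vs. few-shot translation prompts for a GPT-style model.
# Prompt templates are illustrative assumptions, not the paper's templates.

def zero_shot_prompt(src: str, src_lang: str, tgt_lang: str) -> str:
    return f"Translate this sentence from {src_lang} to {tgt_lang}:\n{src}\n"

def few_shot_prompt(src, src_lang, tgt_lang, examples):
    """examples: list of (source, reference) demonstration pairs."""
    shots = "".join(
        f"{src_lang}: {s}\n{tgt_lang}: {t}\n\n" for s, t in examples
    )
    return shots + f"{src_lang}: {src}\n{tgt_lang}:"

print(few_shot_prompt(
    "Das Wetter ist schön.", "German", "English",
    [("Guten Morgen.", "Good morning.")],
))
```

The resulting string would then be sent to the model under evaluation; varying the number of demonstration pairs is one axis such a study sweeps.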

Language Tokens: A Frustratingly Simple Approach Improves Zero-Shot Performance of Multilingual Translation

no code implementations • 11 Aug 2022 • Muhammad ElNokrashy, Amr Hendy, Mohamed Maher, Mohamed Afify, Hany Hassan Awadalla

In a WMT evaluation campaign, from-English performance improves by 4.17 and 2.87 BLEU points in the zero-shot setting and when direct data is available for training, respectively.

Translation
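Complementing the preprocessing sketch further up, the new language tokens also have to exist in the model's vocabulary so the embedding table can learn a vector per target language; a minimal sketch of extending a token-to-id map (the vocabulary layout is an illustrative assumption):

```python
# Sketch: register per-language tokens in a toy token-to-id vocabulary.
# The vocabulary layout is an illustrative assumption.

vocab = {"<pad>": 0, "<s>": 1, "</s>": 2, "hello": 3}

def add_language_tokens(vocab: dict, langs: list) -> dict:
    for lang in langs:
        token = f"<2{lang}>"
        if token not in vocab:
            vocab[token] = len(vocab)  # append with a fresh id
    return vocab

add_language_tokens(vocab, ["de", "fr", "ar"])
print(vocab)  # language tokens appended after the existing entries
```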

Scalable and Efficient MoE Training for Multitask Multilingual Models

1 code implementation • 22 Sep 2021 • Young Jin Kim, Ammar Ahmad Awan, Alexandre Muzio, Andres Felipe Cruz Salinas, Liyang Lu, Amr Hendy, Samyam Rajbhandari, Yuxiong He, Hany Hassan Awadalla

By combining the efficient system and training methods, we are able to significantly scale up large multitask multilingual models for language generation, which results in a substantial improvement in model accuracy.

Machine Translation • Text Generation
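A minimal sketch of the kind of Mixture-of-Experts layer such training systems scale, assuming top-1 gating in PyTorch; this is a generic MoE forward pass for illustration, not the paper's DeepSpeed implementation:

```python
# Sketch: a tiny top-1 gated Mixture-of-Experts layer in PyTorch.
# Generic illustration; not the paper's DeepSpeed-MoE implementation.
import torch
import torch.nn as nn

class TopOneMoE(nn.Module):
    def __init__(self, d_model: int, n_experts: int):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); route each token to its best-scoring expert.
        scores = self.gate(x).softmax(dim=-1)   # (tokens, n_experts)
        weight, idx = scores.max(dim=-1)        # top-1 gate weight per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                out[mask] = weight[mask, None] * expert(x[mask])
        return out

moe = TopOneMoE(d_model=16, n_experts=4)
print(moe(torch.randn(8, 16)).shape)  # torch.Size([8, 16])
```

Because each token activates only one expert, parameter count grows with the number of experts while per-token compute stays roughly constant, which is what makes scaling such models attractive.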

Score Combination for Improved Parallel Corpus Filtering for Low Resource Conditions

no code implementations • WMT (EMNLP) 2020 • Muhammad N. ElNokrashy, Amr Hendy, Mohamed Abdelghaffar, Mohamed Afify, Ahmed Tawfik, Hany Hassan Awadalla

For the mBART finetuning setup provided by the organizers, our method shows 7% and 5% relative improvements over the baseline in sacreBLEU score on the test set for Pashto and Khmer, respectively.

Sentence
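A hedged sketch of the score-combination idea for corpus filtering: normalize several per-sentence-pair quality scores and average them before thresholding. The particular scorers and the equal-weight averaging are assumptions for illustration; the paper studies its own set of scores and combination:

```python
# Sketch: combine several sentence-pair quality scores for corpus filtering.
# The individual scorers and equal-weight averaging are illustrative assumptions.
import numpy as np

def normalize(scores: np.ndarray) -> np.ndarray:
    """Min-max normalize a score column to [0, 1]."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo + 1e-9)

# Rows = sentence pairs; columns = e.g. similarity, LM fluency, length ratio.
raw = np.array([
    [0.9, -1.2, 0.95],
    [0.2, -5.0, 0.40],
    [0.7, -2.1, 0.88],
])
combined = np.mean([normalize(raw[:, j]) for j in range(raw.shape[1])], axis=0)
keep = combined >= 0.5  # keep the highest-scoring pairs for training
print(combined.round(2), keep)
```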
