Search Results for author: Amr Sharaf

Found 18 papers, 6 papers with code

Contrastive Preference Optimization: Pushing the Boundaries of LLM Performance in Machine Translation

1 code implementation · 16 Jan 2024 · Haoran Xu, Amr Sharaf, Yunmo Chen, Weiting Tan, Lingfeng Shen, Benjamin Van Durme, Kenton Murray, Young Jin Kim

However, even top-performing 13B LLM-based translation models, like ALMA, do not match the performance of state-of-the-art conventional encoder-decoder translation models or larger-scale LLMs such as GPT-4.

Machine Translation · Translation
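The paper trains on preferred/dispreferred translation pairs. As an illustration only, here is a minimal scalar sketch of a CPO/DPO-style objective — a preference-ranking term plus a likelihood term on the preferred translation. The sequence log-probabilities are assumed precomputed, and the β value, weighting, and lack of batching are simplifications, not the paper's implementation:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def cpo_style_loss(logp_preferred: float,
                   logp_dispreferred: float,
                   beta: float = 0.1) -> float:
    """Sketch of a contrastive preference loss on one translation pair.

    logp_preferred / logp_dispreferred: model log-probabilities of the
    preferred and dispreferred translations (hypothetical precomputed values).
    """
    # Preference term: rank the preferred translation above the dispreferred one.
    preference = -math.log(sigmoid(beta * (logp_preferred - logp_dispreferred)))
    # Likelihood term: also keep the preferred translation itself probable.
    nll = -logp_preferred
    return preference + nll
```

The loss is smallest when the preferred translation is both likely in absolute terms and clearly ranked above the dispreferred one.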

A Paradigm Shift in Machine Translation: Boosting Translation Performance of Large Language Models

1 code implementation · 20 Sep 2023 · Haoran Xu, Young Jin Kim, Amr Sharaf, Hany Hassan Awadalla

In this study, we propose a novel fine-tuning approach for LLMs that is specifically designed for the translation task, eliminating the need for the abundant parallel data that traditional translation models usually depend on.

Language Modelling · Machine Translation · +1

Leveraging GPT-4 for Automatic Translation Post-Editing

no code implementations · 24 May 2023 · Vikas Raunak, Amr Sharaf, Yiren Wang, Hany Hassan Awadallah, Arul Menezes

In this work, we formalize the task of direct translation post-editing with Large Language Models (LLMs) and explore the use of GPT-4 to automatically post-edit NMT outputs across several language pairs.

Machine Translation · NMT · +1
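At its simplest, LLM-based post-editing means prompting the model with the source sentence and the MT output and asking for a corrected translation. A hedged sketch, where `llm` stands for any text-completion callable (e.g. a wrapper around a chat API) and the prompt wording is hypothetical, not the paper's:

```python
def build_post_edit_prompt(source: str, translation: str,
                           src_lang: str, tgt_lang: str) -> str:
    """Hypothetical zero-shot post-editing prompt (the paper's exact
    prompt may differ)."""
    return (
        f"Source ({src_lang}): {source}\n"
        f"MT output ({tgt_lang}): {translation}\n"
        f"Improve the {tgt_lang} translation, fixing any errors while "
        f"preserving the meaning of the source. "
        f"Output only the edited translation."
    )

def post_edit(source, translation, src_lang, tgt_lang, llm):
    """`llm` is any callable mapping a prompt string to generated text."""
    return llm(build_post_edit_prompt(source, translation,
                                      src_lang, tgt_lang)).strip()
```

Keeping the LLM behind a plain callable makes the sketch testable with a stub and independent of any particular API client.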

How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation

1 code implementation · 18 Feb 2023 · Amr Hendy, Mohamed Abdelrehim, Amr Sharaf, Vikas Raunak, Mohamed Gabr, Hitokazu Matsushita, Young Jin Kim, Mohamed Afify, Hany Hassan Awadalla

In this paper, we present a comprehensive evaluation of GPT models for machine translation, covering the quality of different GPT models relative to state-of-the-art research and commercial systems, the effect of prompting strategies, and robustness to domain shifts and document-level translation.

Machine Translation · Text Generation · +1

On Hard Episodes in Meta-Learning

no code implementations · 21 Oct 2021 · Samyadeep Basu, Amr Sharaf, Nicolo Fusi, Soheil Feizi

To address the issue of sub-par performance on hard episodes, we investigate and benchmark different meta-training strategies based on adversarial training and curriculum learning.


Data Augmentation for Meta-Learning

1 code implementation · 14 Oct 2020 · Renkun Ni, Micah Goldblum, Amr Sharaf, Kezhi Kong, Tom Goldstein

Conventional image classifiers are trained by randomly sampling mini-batches of images.

Data Augmentation · Meta-Learning
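The contrast the abstract draws — random mini-batches versus few-shot episodes — can be sketched generically. This is the standard N-way K-shot episodic sampling setup, not the paper's specific augmentation pipeline:

```python
import random
from collections import defaultdict

def sample_minibatch(dataset, batch_size, rng):
    """Conventional training: a uniform random mini-batch of (x, label) pairs."""
    return rng.sample(dataset, batch_size)

def sample_episode(dataset, n_way, k_shot, q_queries, rng):
    """Meta-learning: an N-way episode with K support and Q query
    examples per sampled class."""
    by_class = defaultdict(list)
    for x, y in dataset:
        by_class[y].append(x)
    classes = rng.sample(sorted(by_class), n_way)   # pick N classes
    support, query = [], []
    for y in classes:
        xs = rng.sample(by_class[y], k_shot + q_queries)
        support += [(x, y) for x in xs[:k_shot]]    # K shots per class
        query += [(x, y) for x in xs[k_shot:]]      # Q queries per class
    return support, query
```

The episodic sampler mimics the test-time few-shot task at training time, which is the structural difference from plain mini-batch training.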

Active Imitation Learning with Noisy Guidance

1 code implementation · ACL 2020 · Kianté Brantley, Amr Sharaf, Hal Daumé III

Imitation learning algorithms provide state-of-the-art results on many structured prediction tasks by learning near-optimal search policies.

Active Learning · Imitation Learning · +1

Meta-Learning for Few-Shot NMT Adaptation

no code implementations · WS 2020 · Amr Sharaf, Hany Hassan, Hal Daumé III

We frame the adaptation of NMT systems as a meta-learning problem, where we learn to adapt to new unseen domains based on simulated offline meta-training domain adaptation tasks.

Domain Adaptation · Machine Translation · +3

Learning Effective Exploration Strategies For Contextual Bandits

no code implementations · 25 Sep 2019 · Amr Sharaf, Hal Daumé III

We develop a meta-learning algorithm, MELEE, that learns an exploration policy based on simulated, synthetic contextual bandit tasks.

Imitation Learning · Learning-To-Rank · +2

Meta-Learning for Contextual Bandit Exploration

no code implementations · ICLR 2019 · Amr Sharaf, Hal Daumé III

We describe MELEE, a meta-learning algorithm for learning a good exploration policy in the interactive contextual bandit setting.

Imitation Learning · Meta-Learning · +1
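MELEE learns its exploration policy by imitation on simulated tasks; for context, the kind of fixed heuristic it aims to improve on — epsilon-greedy exploration — looks like this on a toy synthetic contextual bandit task (the task and parameters here are illustrative, not from the paper):

```python
import random

def run_epsilon_greedy(n_rounds=2000, n_arms=3, epsilon=0.1, seed=0):
    """Epsilon-greedy on a toy contextual bandit: the context is an int
    and the (hidden) best arm is `context % n_arms`. MELEE would replace
    this fixed explore/exploit rule with a learned exploration policy."""
    rng = random.Random(seed)
    counts, values = {}, {}   # running reward means per (context, arm)
    total = 0.0
    for _ in range(n_rounds):
        ctx = rng.randrange(5)
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                    # explore
        else:
            arm = max(range(n_arms),
                      key=lambda a: values.get((ctx, a), 0.0))  # exploit
        reward = 1.0 if arm == ctx % n_arms else 0.0
        total += reward
        c = counts.get((ctx, arm), 0) + 1
        counts[(ctx, arm)] = c
        v = values.get((ctx, arm), 0.0)
        values[(ctx, arm)] = v + (reward - v) / c          # incremental mean
    return total / n_rounds
```

With no exploration at all (`epsilon=0.0`) the agent can get stuck on its initial arm for some contexts, which is exactly the failure mode a good exploration policy avoids.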

Cross-Lingual Approaches to Reference Resolution in Dialogue Systems

no code implementations · 27 Nov 2018 · Amr Sharaf, Arpit Gupta, Hancheng Ge, Chetan Naik, Lambert Mathias

In the cross-lingual setup, we assume access to annotated resources and a well-trained model in the source language, with little to no annotated data in the target language.

Cross-Lingual Transfer · Data Augmentation · +4

Residual Loss Prediction: Reinforcement Learning With No Incremental Feedback

1 code implementation · ICLR 2018 · Hal Daumé III, John Langford, Amr Sharaf

We consider reinforcement learning and bandit structured prediction problems with very sparse loss feedback: only at the end of an episode.

Multi-Armed Bandits · reinforcement-learning · +2
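The sparse-feedback setting the abstract describes can be illustrated with a toy environment that hides all per-step costs and reveals only their sum at the end of the episode (a sketch of the problem setting only, not of the paper's algorithm):

```python
import random

def run_episode(policy, horizon, rng):
    """Roll out a policy in a toy environment with end-of-episode-only
    feedback: the per-step costs are accumulated internally and only the
    episodic sum is revealed to the learner. The states here are random
    ints and the hidden cost is 0/1 for matching/mismatching actions —
    illustrative choices, not from the paper."""
    states, actions = [], []
    episodic_loss = 0.0
    for _ in range(horizon):
        s = rng.randrange(4)
        a = policy(s)
        states.append(s)
        actions.append(a)
        episodic_loss += 0.0 if a == s else 1.0   # hidden per-step cost
    # Only the aggregate loss comes back — no per-step credit assignment.
    return states, actions, episodic_loss
```

The learner's difficulty is attributing that single scalar back to individual decisions, which is the credit-assignment problem this setting poses.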
