Search Results for author: Sweta Agrawal

Found 22 papers, 10 papers with code

Controlling Text Complexity in Neural Machine Translation

1 code implementation IJCNLP 2019 Sweta Agrawal, Marine Carpuat

This work introduces a machine translation task where the output is aimed at audiences of different levels of target language proficiency.

Machine Translation · Translation
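For readers unfamiliar with complexity-controlled translation, here is a minimal sketch of one common control mechanism: prepending a target reading-level token to the source, in the spirit of side constraints. The token format, grade values, and helper name are illustrative assumptions, not necessarily the exact setup used in this paper.

```python
# Minimal sketch (assumption): tag the source with a target-complexity token
# before feeding it to a translation model trained with such control tokens.
# The token naming scheme and example sentence are illustrative only.

def add_complexity_token(source: str, grade_level: int) -> str:
    """Prepend a target-complexity control token to the source sentence."""
    return f"<grade_{grade_level}> {source}"

# The same English source tagged for two different reading levels.
src = "The legislature ratified the amendment."
for grade in (4, 9):
    print(add_complexity_token(src, grade))
```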

Generating Diverse Translations via Weighted Fine-tuning and Hypotheses Filtering for the Duolingo STAPLE Task

no code implementations WS 2020 Sweta Agrawal, Marine Carpuat

This paper describes the University of Maryland's submission to the Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE).

Machine Translation · Translation

Multitask Models for Controlling the Complexity of Neural Machine Translation

no code implementations WS 2020 Sweta Agrawal, Marine Carpuat

We introduce a machine translation task where the output is aimed at audiences of different levels of target language proficiency.

Machine Translation · Translation

Assessing Reference-Free Peer Evaluation for Machine Translation

no code implementations NAACL 2021 Sweta Agrawal, George Foster, Markus Freitag, Colin Cherry

Reference-free evaluation has the potential to make machine translation evaluation substantially more scalable, allowing us to pivot easily to new languages or domains.

Machine Translation · Translation

A Review of Human Evaluation for Style Transfer

1 code implementation ACL (GEM) 2021 Eleftheria Briakou, Sweta Agrawal, Ke Zhang, Joel Tetreault, Marine Carpuat

However, in style transfer papers, we find that protocols for human evaluations are often underspecified and not standardized, which hampers the reproducibility of research in this field and progress toward better human and automatic evaluation methods.

Style Transfer

Can Multilinguality benefit Non-autoregressive Machine Translation?

no code implementations 16 Dec 2021 Sweta Agrawal, Julia Kreutzer, Colin Cherry

Non-autoregressive (NAR) machine translation has recently achieved significant improvements, and now outperforms autoregressive (AR) models on some benchmarks, providing an efficient alternative to AR inference.

Machine Translation · Translation

The two body problem: proprioception and motor control across the metamorphic divide

no code implementations 10 Jan 2022 Sweta Agrawal, John C Tuthill

Like a rocket being propelled into space, evolution has engineered flies to launch into adulthood via multiple stages.

An Imitation Learning Curriculum for Text Editing with Non-Autoregressive Models

no code implementations ACL 2022 Sweta Agrawal, Marine Carpuat

We propose a framework for training non-autoregressive sequence-to-sequence models for editing tasks, where the original input sequence is iteratively edited to produce the output.

Abstractive Text Summarization · Imitation Learning +3
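To make the iterative-editing idea from this paper concrete, the toy sketch below repeatedly applies token-level edit operations until the sequence stops changing. The operation set, the `predict_edits` callable standing in for a trained non-autoregressive editor, and the loop itself are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumption): iterative refinement of a token sequence with
# keep/delete/insert operations produced by a hypothetical editor model.

from typing import Callable, List, Tuple

Edit = Tuple[str, str]  # (operation, token), operation in {"keep", "delete", "insert"}

def apply_edits(tokens: List[str], edits: List[Edit]) -> List[str]:
    """Apply a flat sequence of keep/delete/insert operations."""
    out: List[str] = []
    for op, tok in edits:
        if op in ("keep", "insert"):
            out.append(tok)
        # "delete" drops the token by emitting nothing
    return out

def iterative_edit(tokens: List[str],
                   predict_edits: Callable[[List[str]], List[Edit]],
                   max_iters: int = 5) -> List[str]:
    """Refine the sequence until it stops changing or max_iters is reached."""
    for _ in range(max_iters):
        new_tokens = apply_edits(tokens, predict_edits(tokens))
        if new_tokens == tokens:
            break
        tokens = new_tokens
    return tokens
```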

Controlling Translation Formality Using Pre-trained Multilingual Language Models

no code implementations IWSLT (ACL) 2022 Elijah Rippeth, Sweta Agrawal, Marine Carpuat

This paper describes the University of Maryland's submission to the Special Task on Formality Control for Spoken Language Translation at IWSLT 2022, which evaluates translation from English into 6 languages with diverse grammatical formality markers.

Language Modelling · Translation

In-context Examples Selection for Machine Translation

no code implementations 5 Dec 2022 Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, Marjan Ghazvininejad

Large-scale generative models show an impressive ability to perform a wide range of Natural Language Processing (NLP) tasks using in-context learning, where a few examples are used to describe a task to the model.

In-Context Learning · Language Modelling +2
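As a rough illustration of in-context example selection for translation, the sketch below retrieves demonstrations by simple lexical overlap with the test source and formats them into a few-shot prompt. The overlap heuristic, language pair, and prompt template are placeholder assumptions; the paper studies richer selection criteria.

```python
# Minimal sketch (assumption): pick the k most lexically similar source-target
# pairs from a pool and assemble a few-shot translation prompt.

from typing import List, Tuple

def overlap(a: str, b: str) -> float:
    """Crude similarity: Jaccard overlap of lowercased source-side words."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def build_prompt(test_src: str,
                 pool: List[Tuple[str, str]],
                 k: int = 3) -> str:
    """Pick the k pool pairs most similar to test_src and build a prompt."""
    demos = sorted(pool, key=lambda ex: overlap(ex[0], test_src), reverse=True)[:k]
    lines = [f"English: {s}\nFrench: {t}" for s, t in demos]
    lines.append(f"English: {test_src}\nFrench:")
    return "\n\n".join(lines)

# Usage with a tiny illustrative pool of parallel sentences.
pool = [("Good morning.", "Bonjour."),
        ("Thank you very much.", "Merci beaucoup."),
        ("Good evening, everyone.", "Bonsoir à tous.")]
print(build_prompt("Good morning, everyone.", pool, k=2))
```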

Controlling Pre-trained Language Models for Grade-Specific Text Simplification

no code implementations 24 May 2023 Sweta Agrawal, Marine Carpuat

Based on these insights, we introduce a simple method that predicts the edit operations required for simplifying a text for a specific grade level on an instance-per-instance basis.

Text Simplification

Tower: An Open Multilingual Large Language Model for Translation-Related Tasks

1 code implementation 27 Feb 2024 Duarte M. Alves, José Pombal, Nuno M. Guerreiro, Pedro H. Martins, João Alves, Amin Farajian, Ben Peters, Ricardo Rei, Patrick Fernandes, Sweta Agrawal, Pierre Colombo, José G. C. de Souza, André F. T. Martins

While general-purpose large language models (LLMs) demonstrate proficiency on multiple tasks within the domain of translation, approaches based on open LLMs are competitive only when specializing on a single task.

Language Modelling · Large Language Model +1

Is Context Helpful for Chat Translation Evaluation?

no code implementations 13 Mar 2024 Sweta Agrawal, Amin Farajian, Patrick Fernandes, Ricardo Rei, André F. T. Martins

Our findings show that augmenting neural learned metrics with contextual information helps improve correlation with human judgments in the reference-free scenario and when evaluating translations in out-of-English settings.

Language Modelling · Large Language Model +2
