Search Results for author: Marina Fomicheva

Found 29 papers, 11 papers with code

Findings of the WMT 2020 Shared Task on Quality Estimation

no code implementations WMT (EMNLP) 2020 Lucia Specia, Frédéric Blain, Marina Fomicheva, Erick Fonseca, Vishrav Chaudhary, Francisco Guzmán, André F. T. Martins

We report the results of the WMT20 shared task on Quality Estimation, where the challenge is to predict the quality of the output of neural machine translation systems at the word, sentence and document levels.

Machine Translation · Sentence +1

BERGAMOT-LATTE Submissions for the WMT20 Quality Estimation Shared Task

no code implementations WMT (EMNLP) 2020 Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Vishrav Chaudhary, Mark Fishel, Francisco Guzmán, Lucia Specia

We explore (a) a black-box approach to QE based on pre-trained representations; and (b) glass-box approaches that leverage various indicators that can be extracted from the neural MT systems.

Sentence · Task 2
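The glass-box indicators referred to above are, in essence, confidence signals read directly off the MT system's decoder. As a rough illustration (the function names and toy numbers below are ours, not from the submission's code), two common indicators of this kind are the average log-probability of the generated tokens and the average entropy of the softmax distribution at each decoding step:

```python
import math

def sequence_logprob(token_probs):
    """Average log-probability of the chosen tokens (closer to 0 = more confident)."""
    return sum(math.log(p) for p in token_probs) / len(token_probs)

def mean_softmax_entropy(distributions):
    """Average entropy of the decoder's softmax at each step (lower = more peaked)."""
    entropy = lambda dist: -sum(p * math.log(p) for p in dist if p > 0)
    return sum(entropy(d) for d in distributions) / len(distributions)

# Toy example: probabilities of the chosen tokens and the full softmax rows
# they were drawn from (a real system would expose these at decoding time).
chosen = [0.9, 0.8, 0.7]
rows = [[0.9, 0.05, 0.05], [0.8, 0.1, 0.1], [0.7, 0.2, 0.1]]
print(sequence_logprob(chosen))
print(mean_softmax_entropy(rows))
```

Both quantities come for free with the translation, which is what makes glass-box QE attractive: no extra model has to be trained or queried.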

Findings of the WMT 2021 Shared Task on Quality Estimation

no code implementations WMT (EMNLP) 2021 Lucia Specia, Frédéric Blain, Marina Fomicheva, Chrysoula Zerva, Zhenhao Li, Vishrav Chaudhary, André F. T. Martins

We report the results of the WMT 2021 shared task on Quality Estimation, where the challenge is to predict the quality of the output of neural machine translation systems at the word and sentence levels.

Machine Translation · Sentence +1

Bayesian Model-Agnostic Meta-Learning with Matrix-Valued Kernels for Quality Estimation

no code implementations ACL (RepL4NLP) 2021 Abiola Obamuyide, Marina Fomicheva, Lucia Specia

To address these challenges, we propose a Bayesian meta-learning approach for adapting QE models to the needs and preferences of each user with limited supervision.

Machine Translation · Meta-Learning +1

Bias Mitigation in Machine Translation Quality Estimation

1 code implementation ACL 2022 Hanna Behnke, Marina Fomicheva, Lucia Specia

Machine Translation Quality Estimation (QE) aims to build predictive models to assess the quality of machine-generated translations in the absence of reference translations.

Binary Classification · Machine Translation +1

Towards Explainable Evaluation Metrics for Machine Translation

no code implementations 22 Jun 2023 Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, Steffen Eger

In this context, we also discuss the latest state-of-the-art approaches to explainable metrics based on generative models such as ChatGPT and GPT-4.

Machine Translation · Translation

Reducing Hallucinations in Neural Machine Translation with Feature Attribution

no code implementations 17 Nov 2022 Joël Tang, Marina Fomicheva, Lucia Specia

We present a case study focusing on model understanding and regularisation to reduce hallucinations in NMT.

Machine Translation · NMT +2

Towards Explainable Evaluation Metrics for Natural Language Generation

1 code implementation 21 Mar 2022 Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, Steffen Eger

We also provide a synthesizing overview over recent approaches for explainable machine translation metrics and discuss how they relate to those goals and properties.

Machine Translation · Text Generation +2

Pushing the Right Buttons: Adversarial Evaluation of Quality Estimation

1 code implementation WMT (EMNLP) 2021 Diptesh Kanojia, Marina Fomicheva, Tharindu Ranasinghe, Frédéric Blain, Constantin Orăsan, Lucia Specia

However, this ability is yet to be tested in the current evaluation practices, where QE systems are assessed only in terms of their correlation with human judgements.

Machine Translation · Translation

Translation Error Detection as Rationale Extraction

no code implementations Findings (ACL) 2022 Marina Fomicheva, Lucia Specia, Nikolaos Aletras

Recent Quality Estimation (QE) models based on multilingual pre-trained representations have achieved very competitive results when predicting the overall quality of translated sentences.

Sentence · Translation

Continual Quality Estimation with Online Bayesian Meta-Learning

no code implementations ACL 2021 Abiola Obamuyide, Marina Fomicheva, Lucia Specia

Most current quality estimation (QE) models for machine translation are trained and evaluated in a static setting where training and test data are assumed to be from a fixed distribution.

Machine Translation · Meta-Learning +1

Knowledge Distillation for Quality Estimation

1 code implementation Findings (ACL) 2021 Amit Gajbhiye, Marina Fomicheva, Fernando Alva-Manchego, Frédéric Blain, Abiola Obamuyide, Nikolaos Aletras, Lucia Specia

Quality Estimation (QE) is the task of automatically predicting Machine Translation quality in the absence of reference translations, making it applicable in real-time settings, such as translating online social media conversations.

Data Augmentation · Knowledge Distillation +2

Backtranslation Feedback Improves User Confidence in MT, Not Quality

1 code implementation NAACL 2021 Vilém Zouhar, Michal Novák, Matúš Žilinec, Ondřej Bojar, Mateo Obregón, Robin L. Hill, Frédéric Blain, Marina Fomicheva, Lucia Specia, Lisa Yankovskaya

Translating text into a language unknown to the text's author, dubbed outbound translation, is a modern need for which the user experience has significant room for improvement, beyond the basic machine translation facility.

Machine Translation · Translation

Exploring Supervised and Unsupervised Rewards in Machine Translation

1 code implementation EACL 2021 Julia Ive, Zixu Wang, Marina Fomicheva, Lucia Specia

Reinforcement Learning (RL) is a powerful framework to address the discrepancy between loss functions used during training and the final evaluation metrics to be used at test time.

Machine Translation · Reinforcement Learning (RL) +2

An Exploratory Study on Multilingual Quality Estimation

no code implementations AACL 2020 Shuo Sun, Marina Fomicheva, Frédéric Blain, Vishrav Chaudhary, Ahmed El-Kishky, Adithya Renduchintala, Francisco Guzmán, Lucia Specia

Predicting the quality of machine translation has traditionally been addressed with language-specific models, under the assumption that the quality label distribution or linguistic features exhibit traits that are not shared across languages.

Machine Translation · Translation

Exploring Model Consensus to Generate Translation Paraphrases

1 code implementation WS 2020 Zhenhao Li, Marina Fomicheva, Lucia Specia

This paper describes our submission to the 2020 Duolingo Shared Task on Simultaneous Translation And Paraphrase for Language Education (STAPLE).

Machine Translation · Translation

Unsupervised Quality Estimation for Neural Machine Translation

3 code implementations 21 May 2020 Marina Fomicheva, Shuo Sun, Lisa Yankovskaya, Frédéric Blain, Francisco Guzmán, Mark Fishel, Nikolaos Aletras, Vishrav Chaudhary, Lucia Specia

Quality Estimation (QE) is an important component in making Machine Translation (MT) useful in real-world applications, as it aims to inform the user of the quality of the MT output at test time.

Machine Translation · Translation +1
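One well-known family of unsupervised QE signals scores the same translation under several stochastic forward passes (e.g. with dropout left on at inference time) and treats the dispersion of the resulting sentence scores as a quality proxy: when the model is uncertain, the passes disagree. A minimal sketch, assuming the per-pass sentence scores have already been collected (the function name and toy numbers are illustrative, not from the paper's released code):

```python
import statistics

def dropout_dispersion(pass_scores):
    """Summarise sentence-level scores from N stochastic (dropout-on) passes.

    Returns (mean score, variance): high variance means the passes disagree,
    which tends to indicate lower translation quality.
    """
    return statistics.mean(pass_scores), statistics.pvariance(pass_scores)

# Toy example: log-probability scores of one translation under 5 dropout passes.
stable = [-0.30, -0.31, -0.29, -0.30, -0.30]    # passes agree -> likely good
unstable = [-0.30, -1.20, -0.55, -2.00, -0.90]  # passes disagree -> suspect
print(dropout_dispersion(stable))
print(dropout_dispersion(unstable))
```

The appeal, as with other unsupervised indicators, is that no human quality labels are needed: the signal is derived entirely from the MT system's own behaviour.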

Taking MT Evaluation Metrics to Extremes: Beyond Correlation with Human Judgments

no code implementations CL 2019 Marina Fomicheva, Lucia Specia

Much work has been dedicated to the improvement of evaluation metrics to achieve a higher correlation with human judgments.

Machine Translation · Translation

MAJE Submission to the WMT2018 Shared Task on Parallel Corpus Filtering

no code implementations WS 2018 Marina Fomicheva, Jesús González-Rubio

This paper describes the participation of Webinterpret in the shared task on parallel corpus filtering at the Third Conference on Machine Translation (WMT 2018).

Machine Translation · Translation

Using Contextual Information for Machine Translation Evaluation

no code implementations LREC 2016 Marina Fomicheva, Núria Bel

Automatic evaluation of Machine Translation (MT) is typically approached by measuring similarity between the candidate MT and a human reference translation.

Machine Translation · Sentence +1

Boosting the creation of a treebank

no code implementations LREC 2014 Blanca Arias, Núria Bel, Mercè Lorente, Montserrat Marimón, Alba Milà, Jorge Vivaldi, Muntsa Padró, Marina Fomicheva, Imanol Larrea

In this paper we present the results of an ongoing experiment in bootstrapping a treebank for Catalan using a dependency parser trained on Spanish sentences.

Dependency Parsing · Machine Translation +1
