Search Results for author: Sheila Castilho

Found 25 papers, 3 papers with code

On the Same Page? Comparing Inter-Annotator Agreement in Sentence and Document Level Human Machine Translation Evaluation

no code implementations · WMT (EMNLP) 2020 · Sheila Castilho

Document-level evaluation of machine translation has raised interest in the community, especially since responses to the claims of “human parity” (Toral et al., 2018; Läubli et al., 2018) based on document-level human evaluations have been published.

Tasks: Machine Translation · Sentence · +1

MT-Pese: Machine Translation and Post-Editese

no code implementations · EAMT 2022 · Sheila Castilho, Natália Resende

This paper introduces the MT-Pese project, which aims at researching the post-editese phenomenon in machine-translated texts.

Tasks: Machine Translation · Translation

TransCasm: A Bilingual Corpus of Sarcastic Tweets

no code implementations · PoliticalNLP (LREC) 2022 · Desline Simon, Sheila Castilho, Pintu Lohar, Haithem Afli

Sarcasm is extensively used in User Generated Content (UGC) in order to express one’s discontent, especially through blogs, forums, or social media such as Twitter.

How Much Context Span is Enough? Examining Context-Related Issues for Document-level MT

no code implementations · LREC 2022 · Sheila Castilho

This paper analyses how much context span is necessary to solve different context-related issues, namely, reference, ellipsis, gender, number, lexical ambiguity, and terminology when translating from English into Portuguese.

Tasks: World Knowledge

DELA Corpus - A Document-Level Corpus Annotated with Context-Related Issues

1 code implementation · WMT (EMNLP) 2021 · Sheila Castilho, João Lucas Cavalheiro Camargo, Miguel Menezes, Andy Way

Recently, the Machine Translation (MT) community has become more interested in document-level evaluation, especially in light of reactions to claims of “human parity”: examining quality at the level of the document rather than the sentence allows suprasentential context to be assessed, providing a more reliable evaluation.

Tasks: Machine Translation · Sentence · +1

A human evaluation of English-Irish statistical and neural machine translation

no code implementations · EAMT 2020 · Meghan Dowling, Sheila Castilho, Joss Moorkens, Teresa Lynn, Andy Way

With official status in both Ireland and the EU, there is a need for high-quality English-Irish (EN-GA) machine translation (MT) systems which are suitable for use in a professional translation environment.

Tasks: Machine Translation · Translation

Document-Level Machine Translation Evaluation Project: Methodology, Effort and Inter-Annotator Agreement

no code implementations · EAMT 2020 · Sheila Castilho

Document-level (doc-level) human evaluation of machine translation (MT) has raised interest in the community after a few attempts disproved claims of “human parity” (Toral et al., 2018; Läubli et al., 2018).

Tasks: Document Level Machine Translation · Machine Translation · +2
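Agreement studies such as this one commonly report chance-corrected coefficients like Cohen's kappa alongside raw agreement. A minimal, self-contained sketch of the computation; the label sequences below are illustrative, not data from the paper:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa between two annotators' label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items both annotators labelled identically
    po = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from each annotator's marginal label distribution
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

# Two hypothetical annotators rating 8 translated sentences
ann1 = ["good", "good", "bad", "good", "bad", "bad", "good", "good"]
ann2 = ["good", "bad", "bad", "good", "bad", "good", "good", "good"]
print(round(cohens_kappa(ann1, ann2), 3))  # → 0.467
```

Raw agreement here is 6/8 = 0.75, but kappa corrects for the agreement expected by chance given each annotator's label distribution, which is why it is the measure typically reported in IAA studies.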

On Context Span Needed for Machine Translation Evaluation

no code implementations · LREC 2020 · Sheila Castilho, Maja Popović, Andy Way

Despite increasing efforts to improve evaluation of machine translation (MT) by going beyond the sentence level to the document level, the definition of what exactly constitutes a “document level” is still not clear.

Tasks: Machine Translation · Sentence · +1

A Set of Recommendations for Assessing Human-Machine Parity in Language Translation

1 code implementation · 3 Apr 2020 · Samuel Läubli, Sheila Castilho, Graham Neubig, Rico Sennrich, Qinlan Shen, Antonio Toral

The quality of machine translation has increased remarkably in recent years, to the degree that it was found to be indistinguishable from professional human translation in a number of empirical investigations.

Tasks: Machine Translation · Translation

What Influences the Features of Post-editese? A Preliminary Study

no code implementations · RANLP 2019 · Sheila Castilho, Natália Resende, Ruslan Mitkov

While a number of studies have shown evidence of translationese phenomena, i.e. statistical differences between original texts and translated texts (Gellerstam, 1986), studies searching for translationese features in post-edited texts (so-called “post-editese” (Daems et al., 2017)) have presented mixed results.

Tasks: Translation

Are ambiguous conjunctions problematic for machine translation?

no code implementations · RANLP 2019 · Maja Popović, Sheila Castilho

In total, we evaluate the conjunction “but” on 20 translation outputs, and the conjunction “and” on 10.

Tasks: Machine Translation · Translation

Attaining the Unattainable? Reassessing Claims of Human Parity in Neural Machine Translation

1 code implementation · WS 2018 · Antonio Toral, Sheila Castilho, Ke Hu, Andy Way

We reassess a recent study (Hassan et al., 2018) that claimed that machine translation (MT) has reached human parity for the translation of news from Chinese into English. Using pairwise ranking, we consider three variables that were not taken into account in that previous study: the language in which the source side of the test set was originally written, the translation proficiency of the evaluators, and the provision of inter-sentential context.

Tasks: Machine Translation · Translation

Using Images to Improve Machine-Translating E-Commerce Product Listings

no code implementations · EACL 2017 · Iacer Calixto, Daniel Stein, Evgeny Matusov, Pintu Lohar, Sheila Castilho, Andy Way

We evaluate our models quantitatively using BLEU and TER and find that (i) additional synthetic data has a general positive impact on text-only and multi-modal NMT models, and that (ii) using a multi-modal NMT model for re-ranking n-best lists improves TER significantly across different n-best list sizes.

Tasks: Machine Translation · NMT · +2
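The TER metric used in this evaluation is built on word-level edit distance between hypothesis and reference. A minimal sketch of that edit-distance core, written for illustration; real TER additionally allows block shifts of word sequences, which is omitted here:

```python
def simple_ter(hyp: str, ref: str) -> float:
    """Simplified TER: word-level Levenshtein distance divided by
    reference length (full TER also permits block-shift edits)."""
    h, r = hyp.split(), ref.split()
    # One-row Levenshtein DP over words
    d = list(range(len(r) + 1))
    for i, hw in enumerate(h, 1):
        prev, d[0] = d[0], i
        for j, rw in enumerate(r, 1):
            cur = min(d[j] + 1,            # deletion
                      d[j - 1] + 1,        # insertion
                      prev + (hw != rw))   # substitution (0 if words match)
            prev, d[j] = d[j], cur
    return d[len(r)] / max(len(r), 1)

# One missing word against a 6-word reference: 1 edit / 6 words
print(round(simple_ter("the cat sat on mat",
                       "the cat sat on the mat"), 3))  # → 0.167
```

Lower is better for TER, which is why the abstract reports that multi-modal re-ranking "improves TER significantly" — the re-ranked hypotheses need fewer edits to match the reference.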

Human Evaluation of Multi-modal Neural Machine Translation: A Case-Study on E-Commerce Listing Titles

no code implementations · WS 2017 · Iacer Calixto, Daniel Stein, Evgeny Matusov, Sheila Castilho, Andy Way

Nonetheless, human evaluators ranked translations from a multi-modal NMT model as better than those of a text-only NMT over 88% of the time, which suggests that images do help NMT in this use-case.

Tasks: Machine Translation · NMT · +1

Evaluating the Impact of Light Post-Editing on Usability

no code implementations · LREC 2016 · Sheila Castilho, Sharon O'Brien

This paper discusses a methodology to measure the usability of machine translated content by end users, comparing lightly post-edited content with raw output and with the usability of source language content.

PET: a Tool for Post-editing and Assessing Machine Translation

no code implementations · LREC 2012 · Wilker Aziz, Sheila Castilho, Lucia Specia

Given the significant improvements in Machine Translation (MT) quality and the increasing demand for translations, post-editing of automatic translations is becoming a popular practice in the translation industry.

Tasks: Machine Translation · Sentence · +1
