no code implementations • LChange (ACL) 2022 • Marije Timmermans, Eva Vanmassenhove, Dimitar Shterionov
From the analysis, it appears that “natie”, “volk” and “vaderland” became more nationalistically loaded over time.
no code implementations • MTSummit 2021 • Arbnor Rama, Eva Vanmassenhove
Even when the available parallel data (NL↔SQ) was added, i.e., in a few-shot setting (FST), it remained the worst-performing system according to both automatic (BLEU and TER) and human evaluation.
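As a rough illustration of what the automatic BLEU evaluation mentioned above measures, here is a simplified sentence-level variant: modified n-gram precision combined with a brevity penalty. This is a sketch, not the toolkit used in the paper; production implementations such as sacreBLEU add smoothing, standardized tokenization, and corpus-level aggregation, and the function name here is ours.

```python
import math
from collections import Counter

def simple_bleu(hypothesis, reference, max_n=4):
    """Simplified sentence-level BLEU: geometric mean of modified
    n-gram precisions (n = 1..max_n), scaled by a brevity penalty."""
    hyp, ref = hypothesis.split(), reference.split()
    if not hyp:
        return 0.0
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clip each n-gram count by its count in the reference ("modified" precision).
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        precisions.append(overlap / max(sum(hyp_ngrams.values()), 1))
    if min(precisions) == 0:
        return 0.0  # no smoothing in this sketch: any empty overlap zeroes the score
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))  # penalize short hypotheses
    return bp * geo_mean

print(round(simple_bleu("the cat is on the mat", "the cat is on the mat"), 2))  # 1.0
```

TER, the other metric named in the entry, instead counts the minimum number of edits (insertions, deletions, substitutions, shifts) needed to turn the hypothesis into the reference; it is not sketched here.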
no code implementations • EMNLP 2021 • Eva Vanmassenhove, Chris Emmery, Dimitar Shterionov
Recent years have seen an increasing need for gender-neutral and inclusive language.
no code implementations • 18 Jan 2024 • Eva Vanmassenhove
This chapter examines the role of Machine Translation in perpetuating gender bias, highlighting the challenges posed by cross-linguistic settings and statistical dependencies.
1 code implementation • 18 Apr 2023 • Javad PourMostafa Roshan Sharami, Dimitar Shterionov, Frédéric Blain, Eva Vanmassenhove, Mirella De Sisto, Chris Emmery, Pieter Spronck
While quality estimation (QE) can play an important role in the translation process, its effectiveness relies on the availability and quality of training data.
1 code implementation • 4 Feb 2022 • Dimitar Shterionov, Eva Vanmassenhove
This chapter focuses on the ecological footprint of neural MT systems.
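Footprint estimates of the kind this chapter discusses typically reduce to simple arithmetic: energy is average power draw times training time, and emissions are energy times the grid's carbon intensity. A minimal sketch follows; the numbers are illustrative assumptions, not figures from the chapter.

```python
def training_footprint(power_kw, hours, intensity_kg_per_kwh):
    """Back-of-the-envelope training footprint.
    energy (kWh) = average power draw (kW) * training time (h)
    CO2 (kg)     = energy (kWh) * grid carbon intensity (kg CO2 / kWh)
    """
    energy_kwh = power_kw * hours
    co2_kg = energy_kwh * intensity_kg_per_kwh
    return energy_kwh, co2_kg

# Illustrative: one 0.3 kW GPU running for 120 h on a 0.4 kg CO2/kWh grid.
energy, co2 = training_footprint(0.3, 120, 0.4)
print(round(energy, 1), round(co2, 1))  # 36.0 kWh, 14.4 kg CO2
```

Real estimates must also account for datacenter overhead (PUE) and for the many training runs behind hyperparameter search, which can dominate the total.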
no code implementations • 13 Sep 2021 • Eva Vanmassenhove, Chris Emmery, Dimitar Shterionov
Recent years have seen an increasing need for gender-neutral and inclusive language.
no code implementations • ACL (GeBNLP) 2021 • Eva Vanmassenhove, Johanna Monti
Languages differ in terms of the absence or presence of gender features, the number of gender classes and whether and where gender features are explicitly marked.
1 code implementation • ACL (GeBNLP) 2021 • Nishtha Jain, Maja Popovic, Declan Groves, Eva Vanmassenhove
The method can be applied both for creating gender-balanced outputs and for creating gender-balanced training data.
no code implementations • EACL 2021 • Eva Vanmassenhove, Dimitar Shterionov, Matthew Gwilliam
Recent studies in the field of Machine Translation (MT) and Natural Language Processing (NLP) have shown that existing models amplify biases observed in the training data.
no code implementations • 31 Mar 2020 • Eva Vanmassenhove
Establishing the discrepancies between the strengths of statistical approaches to MT and the way humans translate has been the starting point of our research.
no code implementations • EMNLP 2018 • Eva Vanmassenhove, Christian Hardmeier, Andy Way
Our contribution is two-fold: (1) the compilation of large datasets with speaker information for 20 language pairs, and (2) a simple set of experiments that incorporate gender information into NMT for multiple language pairs.
no code implementations • WS 2019 • Eva Vanmassenhove, Dimitar Shterionov, Andy Way
This work presents an empirical approach to quantifying the loss of lexical richness in Machine Translation (MT) systems compared to Human Translation (HT).
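One common way to quantify lexical richness in comparisons like the one above is the type-token ratio (TTR): the number of distinct word types divided by the total number of tokens. The sketch below is ours, not the paper's exact metric suite, and the example sentences are invented for illustration.

```python
def type_token_ratio(tokens):
    """Type-token ratio: distinct word types / total tokens.
    Higher values indicate a more varied vocabulary."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Illustrative: a repetitive (MT-like) output vs. a more varied (HT-like) one.
mt_tokens = "the cat sat on the mat and the dog sat on the mat".split()
ht_tokens = "the cat perched on a mat while the dog rested nearby".split()

print(round(type_token_ratio(mt_tokens), 2))  # 0.5  (7 types / 14 tokens)
print(round(type_token_ratio(ht_tokens), 2))  # 0.91 (10 types / 11 tokens)
```

Note that raw TTR falls as text length grows, so length-corrected variants (e.g. MTLD or moving-average TTR) are preferred when the texts being compared differ in size.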
no code implementations • 23 Feb 2019 • Eva Vanmassenhove, Amit Moryossef, Alberto Poncelas, Andy Way, Dimitar Shterionov
Contrary to the results reported in previous comparable shared tasks, our neural models performed better than our best traditional approaches with our best feature set-up.
no code implementations • ACL 2018 • Eva Vanmassenhove, Andy Way
In this paper we incorporate semantic supersense tags and syntactic supertag features into EN–FR and EN–DE factored NMT systems.