Search Results for author: Irina Sorokina

Found 6 papers, 2 papers with code

Neural Machine Translation of Clinical Text: An Empirical Investigation into Multilingual Pre-Trained Language Models and Transfer-Learning

1 code implementation • 12 Dec 2023 • Lifeng Han, Serge Gladkoff, Gleb Erofeev, Irina Sorokina, Betty Galiano, Goran Nenadic

Furthermore, to address the language-resource imbalance issue, we carry out experiments using a transfer-learning methodology based on massive multilingual pre-trained language models (MMPLMs); a minimal fine-tuning sketch follows this entry.

Clinical Knowledge, Language Modelling, +3
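
The transfer-learning methodology mentioned in this entry amounts to continuing the training of a massive multilingual pre-trained model on in-domain clinical parallel data. The following is a minimal, hypothetical sketch of that idea using Hugging Face Transformers; the model name, language codes, and toy sentence pair are placeholders, not the authors' exact experimental setup.

```python
# Minimal sketch of clinical-domain transfer learning on a multilingual
# pre-trained MT model (MMPLM). Model name, language codes, and the toy
# parallel data are placeholders, not the authors' exact configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/m2m100_418M"  # stand-in multilingual pre-trained model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

tokenizer.src_lang = "en"  # clinical source language (placeholder)
tokenizer.tgt_lang = "es"  # clinical target language (placeholder)

# Tiny in-domain parallel corpus standing in for a clinical dataset.
pairs = [
    ("The patient was prescribed 5 mg of amlodipine daily.",
     "Al paciente se le recetaron 5 mg de amlodipino al día."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for src, tgt in pairs:
    batch = tokenizer(src, text_target=tgt, return_tensors="pt")
    loss = model(**batch).loss  # standard seq2seq cross-entropy loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

With real data, the toy list would be replaced by a clinical parallel corpus and a full training loop with batching, validation, and early stopping.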

Investigating Massive Multilingual Pre-Trained Machine Translation Models for Clinical Domain via Transfer Learning

no code implementations • 12 Oct 2022 • Lifeng Han, Gleb Erofeev, Irina Sorokina, Serge Gladkoff, Goran Nenadic

To the best of our knowledge, this is the first work to successfully apply MMPLMs to clinical-domain transfer-learning NMT for languages that were entirely unseen during pre-training.

Machine Translation, NMT, +3

Examining Large Pre-Trained Language Models for Machine Translation: What You Don't Know About It

no code implementations • 15 Sep 2022 • Lifeng Han, Gleb Erofeev, Irina Sorokina, Serge Gladkoff, Goran Nenadic

Pre-trained language models (PLMs) often take advantage of monolingual and multilingual datasets that are freely available online to acquire general or mixed-domain knowledge before deployment to specific tasks.

Machine Translation

Measuring Uncertainty in Translation Quality Evaluation (TQE)

no code implementations • LREC 2022 • Serge Gladkoff, Irina Sorokina, Lifeng Han, Alexandra Alekseeva

From the point of view of both human translators (HT) and machine translation (MT) researchers, translation quality evaluation (TQE) is an essential task.

Machine Translation, Translation

cushLEPOR: customising hLEPOR metric using Optuna for higher agreement with human judgments or pre-trained language model LaBSE

1 code implementation • WMT (EMNLP) 2021 • Lifeng Han, Irina Sorokina, Gleb Erofeev, Serge Gladkoff

Then we present the customised hLEPOR (cushLEPOR), which uses the Optuna hyper-parameter optimisation framework to fine-tune hLEPOR's weighting parameters towards better agreement with pre-trained language models (using LaBSE) on the exact MT language pairs to which cushLEPOR is deployed; a minimal sketch of this tuning loop follows this entry.

Language Modelling
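
The cushLEPOR recipe described in this entry is, in essence, a black-box hyper-parameter search: Optuna proposes hLEPOR weighting parameters and keeps the set that best agrees with reference scores (LaBSE similarities or human judgments). The sketch below illustrates that loop with a toy stand-in for hLEPOR and made-up scores; the parameter names, ranges, and data are assumptions for illustration, not the authors' released code.

```python
# Hypothetical sketch of the cushLEPOR tuning loop: search hLEPOR-style
# weighting parameters with Optuna to maximise Pearson correlation with
# reference scores (e.g. LaBSE similarities). The metric below is a toy
# stand-in, and all data and parameter ranges are made up for illustration.
import numpy as np
import optuna

hyps = ["the cat sat on the mat", "he eats fish", "rain falls on the hill"]
refs = ["a cat is sitting on the mat", "he is eating fish", "rain is falling on the hill"]
labse_scores = np.array([0.81, 0.74, 0.88])  # placeholder reference scores

def toy_hlepor(hyp, ref, alpha, beta, w_lp, w_hpr):
    """Toy stand-in exposing hLEPOR-like weights (not the real metric)."""
    h, r = hyp.split(), ref.split()
    overlap = len(set(h) & set(r))
    p, rec = overlap / len(h), overlap / len(r)
    lp = np.exp(1 - max(len(h), len(r)) / min(len(h), len(r)))  # length penalty
    hpr = (alpha + beta) * p * rec / (alpha * p + beta * rec + 1e-9)
    return (w_lp + w_hpr) / (w_lp / (lp + 1e-9) + w_hpr / (hpr + 1e-9))

def objective(trial):
    params = {name: trial.suggest_float(name, 1.0, 5.0)
              for name in ("alpha", "beta", "w_lp", "w_hpr")}
    scores = np.array([toy_hlepor(h, r, **params) for h, r in zip(hyps, refs)])
    return float(np.corrcoef(scores, labse_scores)[0, 1])  # Pearson correlation

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```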
