no code implementations • LREC 2022 • James Barry, Joachim Wagner, Lauren Cassidy, Alan Cowap, Teresa Lynn, Abigail Walsh, Mícheál J. Ó Meachair, Jennifer Foster
We compare our gaBERT model to multilingual BERT and the monolingual Irish WikiBERT, and we show that gaBERT provides better representations for a downstream parsing task.
no code implementations • NAACL (TeachingNLP) 2021 • Jennifer Foster, Joachim Wagner
We describe two Jupyter notebooks that form the basis of two assignments in an introductory Natural Language Processing (NLP) module taught to final year undergraduate students at Dublin City University.
2 code implementations • EMNLP 2021 • Joachim Wagner, Jennifer Foster
We compare two orthogonal semi-supervised learning techniques, namely tri-training and pretrained word embeddings, in the task of dependency parsing.
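The tri-training idea can be sketched in a few lines. This is a toy illustration of the agreement rule only (not the paper's parser setup): three learners each have their own training set, and when two of them agree on an unlabelled example, that example is added, with the agreed label, to the third learner's training data.

```python
# Toy tri-training round: for each unlabelled example, if two learners agree
# on its label, the example is added to the third learner's training set.

def tri_train_round(learners, predict, unlabelled):
    """learners   -- list of three training sets (lists of (x, y) pairs)
    predict    -- predict(i, x) returns learner i's label for x
    unlabelled -- iterable of unlabelled examples
    Returns how many newly labelled examples were added in total."""
    added = 0
    for x in unlabelled:
        labels = [predict(i, x) for i in range(3)]
        for target in range(3):
            a, b = [i for i in range(3) if i != target]
            if labels[a] == labels[b]:  # the other two learners agree
                learners[target].append((x, labels[a]))
                added += 1
    return added
```

In the paper's setting the learners are dependency parsers and the "labels" are parse trees; here they are stand-in classifiers, which is enough to show how agreement drives the exchange of training data.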
1 code implementation • ACL (IWPT) 2021 • James Barry, Alireza Mohammadshahi, Joachim Wagner, Jennifer Foster, James Henderson
The task involves parsing Enhanced UD graphs, which extend basic dependency trees to better represent semantic structure.
2 code implementations • WS 2020 • James Barry, Joachim Wagner, Jennifer Foster
Unfortunately, our pipeline approach did not guarantee a connected graph, and our competition submission relied on a last-minute fix to pass the validation script, which significantly harmed our official evaluation scores.
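The connectivity requirement mentioned above can be checked with a simple graph traversal. This is our reconstruction of the kind of check a validation script performs, not the official shared-task implementation: every token in an Enhanced UD graph must be reachable from the root (node 0), even though tokens may have multiple incoming edges.

```python
from collections import deque

def is_connected(edges, n_tokens):
    """Check that all tokens 1..n_tokens are reachable from the root (node 0).

    edges -- iterable of (head, dependent) pairs; re-entrancies (a token
             with several heads) are allowed, as in Enhanced UD."""
    children = {}
    for head, dep in edges:
        children.setdefault(head, []).append(dep)
    seen, queue = {0}, deque([0])
    while queue:
        node = queue.popleft()
        for child in children.get(node, []):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return len(seen) == n_tokens + 1  # root plus every token

# Token 3 has two heads (an enhanced re-entrancy), yet the graph is connected:
ok = is_connected([(0, 2), (2, 1), (2, 3), (1, 3)], 3)
```

A post-processing step that attaches any unreachable token to the root before output would be one way to guarantee this property in a pipeline.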
1 code implementation • ACL 2020 • Joachim Wagner, James Barry, Jennifer Foster
A recent advance in monolingual dependency parsing is the treebank embedding vector, which allows all treebanks for a particular language to be used as training data while letting the model weight training data from one treebank over others and select the preferred treebank at test time.
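A minimal sketch of the treebank embedding idea, as our own illustration rather than the paper's parser: each token's input vector is concatenated with a learned vector identifying its treebank, so one model trains on several treebanks and can be steered towards a chosen treebank at test time. The treebank IDs and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical treebank IDs; in practice these embeddings are learned
# jointly with the parser rather than fixed random vectors.
TREEBANKS = ["ga_idt", "en_ewt"]
treebank_emb = {tb: rng.normal(size=8) for tb in TREEBANKS}

def encode(word_vec, treebank):
    """Concatenate a word vector with the embedding of the chosen treebank."""
    return np.concatenate([word_vec, treebank_emb[treebank]])

word = rng.normal(size=100)     # stand-in for a pretrained word embedding
x = encode(word, "ga_idt")      # parser input steered towards one treebank
```

At test time, swapping the treebank ID changes the model's behaviour without retraining, which is what makes treebank selection possible.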
1 code implementation • WS 2019 • James Barry, Joachim Wagner, Jennifer Foster
Finally, we apply multi-treebank modelling to the projected treebanks, in addition to, or as an alternative to, polyglot modelling on the source side.
no code implementations • WS 2019 • Dimitar Shterionov, Joachim Wagner, Félix do Carmo
Automatic post-editing (APE) can be reduced to a machine translation (MT) task, where the source is the output of a specific MT system and the target is its post-edited variant.
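The reduction described above amounts to pairing each MT hypothesis with its post-edited version as a parallel corpus. A minimal sketch (our illustration, with invented example sentences): once the pairs are built, any sequence-to-sequence MT toolkit can be trained on them unchanged.

```python
def make_ape_pairs(mt_outputs, post_edits):
    """Zip raw MT hypotheses with their human post-edits into (src, tgt) pairs."""
    if len(mt_outputs) != len(post_edits):
        raise ValueError("each MT output needs exactly one post-edit")
    return list(zip(mt_outputs, post_edits))

pairs = make_ape_pairs(
    ["he go to school", "the weather are nice"],   # MT output = APE source
    ["he goes to school", "the weather is nice"],  # post-edit = APE target
)
```

The "source language" of the APE system is thus the error distribution of one specific MT system, which is why APE models are usually tied to the system whose output they correct.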