1 code implementation • 24 Oct 2022 • Arne Binder, Bhuvanesh Verma, Leonhard Hennig
In this work, we introduce a sequential pipeline model combining argumentative discourse unit recognition (ADUR) and argumentative relation extraction (ARE) for full-text scientific argumentation mining (SAM), and provide a first analysis of the performance of pretrained language models (PLMs) on both subtasks.
1 code implementation • RepL4NLP (ACL) 2022 • Yuxuan Chen, Jonas Mikkelsen, Arne Binder, Christoph Alt, Leonhard Hennig
Pre-trained language models (PLMs) are effective components of few-shot named entity recognition (NER) approaches when augmented with continued pre-training on task-specific out-of-domain data or fine-tuning on in-domain data.
Tasks: Contrastive Learning, Low Resource Named Entity Recognition, +4
no code implementations • LREC 2020 • Eduardo Cortes, Vinicius Woloszyn, Arne Binder, Tilo Himmelsbach, Dante Barone, Sebastian Möller
This work presents an extensive review of the most recent methods for Question Classification, taking into consideration their applicability to low-resource languages.