Improving multilingual language models' capabilities in low-resource languages is generally difficult due to the scarcity of large-scale data in those languages.
However, the available NLP literature disagrees on the efficacy of this technique: it remains unclear for which tasks and scenarios it can help, and the role of the individual factors in sociodemographic prompting is still unexplored.
The open-access dissemination of pretrained language models through online repositories has led to a democratization of state-of-the-art natural language processing (NLP) research.
This work investigates the use of interactively updated label suggestions to improve the efficiency of gathering annotations on the task of opinion mining in German COVID-19 social media data.
Massively pre-trained transformer models are computationally expensive to fine-tune, slow for inference, and have large storage requirements.
We experiment with two recent contextualized word embedding methods (ELMo and BERT) in the context of open-domain argument search.