Search Results for author: Tilman Beck

Found 9 papers, 8 papers with code

Zero-shot Sentiment Analysis in Low-Resource Languages Using a Multilingual Sentiment Lexicon

no code implementations • 3 Feb 2024 • Fajri Koto, Tilman Beck, Zeerak Talat, Iryna Gurevych, Timothy Baldwin

Improving multilingual language models' capabilities in low-resource languages is generally difficult due to the scarcity of large-scale data in those languages.
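The core idea in the title — zero-shot sentiment analysis with a multilingual sentiment lexicon — can be illustrated with a toy sketch. The lexicon entries below are made up for illustration; the paper's actual lexicon and method are more involved:

```python
# Toy sketch of lexicon-based sentiment: score a sentence by summing
# word polarities from a tiny, made-up multilingual lexicon.
LEXICON = {  # word -> polarity; entries are illustrative only
    "good": 1, "bagus": 1, "bueno": 1,
    "bad": -1, "buruk": -1, "malo": -1,
}

def lexicon_sentiment(sentence):
    """Classify a sentence by the summed polarity of its known words."""
    score = sum(LEXICON.get(w, 0) for w in sentence.lower().split())
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(lexicon_sentiment("filmnya bagus"))  # Indonesian: "the movie is good"
```

Because the lexicon itself carries the supervision, no labeled training data is needed in the target language — which is the appeal in low-resource settings.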

Sentence • Sentiment Analysis

Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting

1 code implementation • 13 Sep 2023 • Tilman Beck, Hendrik Schuff, Anne Lauscher, Iryna Gurevych

However, the available NLP literature disagrees on the efficacy of this technique: it remains unclear for which tasks and scenarios it can help, and the role of the individual factors in sociodemographic prompting is still unexplored.
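Sociodemographic prompting means prepending persona attributes to a task instruction. A minimal sketch of the idea follows; the template and attribute names are illustrative, not the paper's exact prompt format:

```python
# Minimal sketch of sociodemographic prompting: prepend persona
# attributes to a task instruction before querying a language model.
# Template and attribute names are hypothetical, not the paper's own.

def sociodemographic_prompt(text, profile):
    """Build a prompt that asks the model to answer as a given persona."""
    persona = ", ".join(f"{k}: {v}" for k, v in profile.items())
    return (
        f"Imagine you are a person with the following profile ({persona}). "
        f"Is the following text hate speech? Answer yes or no.\n\nText: {text}"
    )

prompt = sociodemographic_prompt(
    "Example post to classify.",
    {"age": "35", "gender": "female", "education": "college degree"},
)
print(prompt)
```

Varying the profile while holding the text fixed is what lets one measure the sensitivity of model predictions to individual sociodemographic factors.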

Hate Speech Detection • Zero-Shot Learning

AdapterHub Playground: Simple and Flexible Few-Shot Learning with Adapters

1 code implementation • ACL 2022 • Tilman Beck, Bela Bohlender, Christina Viehmann, Vincent Hane, Yanik Adamson, Jaber Khuri, Jonas Brossmann, Jonas Pfeiffer, Iryna Gurevych

The open-access dissemination of pretrained language models through online repositories has led to a democratization of state-of-the-art natural language processing (NLP) research.

Few-Shot Learning • Transfer Learning

Investigating label suggestions for opinion mining in German Covid-19 social media

1 code implementation • ACL 2021 • Tilman Beck, Ji-Ung Lee, Christina Viehmann, Marcus Maurer, Oliver Quiring, Iryna Gurevych

This work investigates the use of interactively updated label suggestions to improve upon the efficiency of gathering annotations on the task of opinion mining in German Covid-19 social media data.
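Interactively updated label suggestions can be sketched as an annotation loop in which a simple model is refit after each annotated batch and its predictions pre-fill the next batch. The keyword scorer below is a toy stand-in for the paper's actual classifier:

```python
# Sketch of interactively updated label suggestions: after each batch
# of human annotations, refit a (toy) model and use it to pre-fill
# labels for the next batch. Real systems use a trained classifier.
from collections import Counter

def fit(annotated):
    """'Train' by counting which words co-occur with each label."""
    word_labels = Counter()
    for text, label in annotated:
        for word in text.lower().split():
            word_labels[(word, label)] += 1
    return word_labels

def suggest(model, text, labels=("pos", "neg")):
    """Suggest the label whose associated words best match the text."""
    scores = {l: sum(model[(w, l)] for w in text.lower().split()) for l in labels}
    return max(scores, key=scores.get)

annotated = []  # grows as the annotator confirms or corrects labels
batches = [["great vaccine news", "terrible lockdown rules"],
           ["great news again", "terrible rules everywhere"]]

for batch in batches:
    model = fit(annotated)           # model improves as annotations accrue
    for text in batch:
        suggestion = suggest(model, text)           # shown to the annotator
        gold = "pos" if "great" in text else "neg"  # simulated human decision
        annotated.append((text, gold))
```

The efficiency gain comes from suggestions becoming more accurate over time, so the annotator increasingly confirms rather than relabels.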

Opinion Mining • Transfer Learning

AdapterDrop: On the Efficiency of Adapters in Transformers

1 code implementation • EMNLP 2021 • Andreas Rücklé, Gregor Geigle, Max Glockner, Tilman Beck, Jonas Pfeiffer, Nils Reimers, Iryna Gurevych

Massively pre-trained transformer models are computationally expensive to fine-tune, slow for inference, and have large storage requirements.
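AdapterDrop addresses these costs by removing adapters from the lowest transformer layers at inference time. A dependency-free sketch of the core control flow, where the layer and adapter functions are numeric stand-ins for real neural modules:

```python
# Toy sketch of AdapterDrop: skip adapter modules in the lowest n
# layers at inference time. Real layers and adapters are neural
# modules; simple numeric functions keep the example self-contained.

NUM_LAYERS = 12

def layer(x):
    return x + 1.0     # stand-in for a transformer layer

def adapter(x):
    return x * 1.01    # stand-in for an adapter module

def forward(x, drop_first_n=0):
    """Run all layers, applying adapters only above the dropped range."""
    for i in range(NUM_LAYERS):
        x = layer(x)
        if i >= drop_first_n:   # AdapterDrop: no adapter in layers < n
            x = adapter(x)
    return x

full = forward(0.0)                      # adapters in every layer
dropped = forward(0.0, drop_first_n=5)   # first 5 adapters skipped
```

Skipping the lowest adapters saves their compute in every forward pass while, per the paper, largely preserving task performance.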
