Search Results for author: Thomas Kleinbauer

Found 11 papers, 7 papers with code

TOKEN is a MASK: Few-shot Named Entity Recognition with Pre-trained Language Models

1 code implementation • 15 Jun 2022 • Ali Davody, David Ifeoluwa Adelani, Thomas Kleinbauer, Dietrich Klakow

Transferring knowledge from one domain to another is of practical importance for many tasks in natural language processing, especially when the amount of available data in the target domain is limited.

Descriptive • Domain Adaptation • +3

Exploiting Social Media Content for Self-Supervised Style Transfer

1 code implementation • NAACL (SocialNLP) 2022 • Dana Ruiter, Thomas Kleinbauer, Cristina España-Bonet, Josef van Genabith, Dietrich Klakow

Recent research on style transfer takes inspiration from unsupervised neural machine translation (UNMT), learning from large amounts of non-parallel data by exploiting cycle consistency loss, back-translation, and denoising autoencoders.
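The denoising-autoencoder ingredient mentioned above trains a model to reconstruct a sentence from a corrupted copy of itself. As a minimal illustration of the generic UNMT-style noising step (not this paper's implementation; the function name, parameters, and defaults are assumptions), a corruption function might drop tokens and locally shuffle the rest:

```python
import random

def noisy(tokens, drop_prob=0.1, shuffle_k=3, rng=None):
    """Corrupt a token sequence for denoising-autoencoder training.

    Illustrative sketch of a generic UNMT-style noise model:
    each token is dropped with probability `drop_prob`, and the
    survivors are shuffled so that no token moves more than
    roughly `shuffle_k` positions from its original slot.
    """
    rng = rng or random.Random(0)
    # word dropout: keep each token with probability 1 - drop_prob
    kept = [t for t in tokens if rng.random() > drop_prob]
    # local shuffle: jitter each index by up to shuffle_k, then re-sort
    keys = [i + rng.uniform(0, shuffle_k) for i in range(len(kept))]
    return [t for _, t in sorted(zip(keys, kept), key=lambda p: p[0])]
```

The autoencoder is then trained to map `noisy(x)` back to `x`, which forces it to learn a robust sentence representation without any parallel data.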

Attribute • Denoising • +4

Placing M-Phasis on the Plurality of Hate: A Feature-Based Corpus of Hate Online

1 code implementation • LREC 2022 • Dana Ruiter, Liane Reiners, Ashwin Geet D'Sa, Thomas Kleinbauer, Dominique Fohr, Irina Illina, Dietrich Klakow, Christian Schemer, Angeliki Monnier

Even though hate speech (HS) online has been an important object of research in the last decade, most HS-related corpora over-simplify the phenomenon of hate by attempting to label user comments as "hate" or "neutral".

Hate Speech Detection

Preventing Author Profiling through Zero-Shot Multilingual Back-Translation

1 code implementation • EMNLP 2021 • David Ifeoluwa Adelani, Miaoran Zhang, Xiaoyu Shen, Ali Davody, Thomas Kleinbauer, Dietrich Klakow

Documents as short as a single sentence may inadvertently reveal sensitive information about their authors, including, e.g., their gender or ethnicity.

Sentence • Style Transfer • +2

Modeling Profanity and Hate Speech in Social Media with Semantic Subspaces

1 code implementation • ACL (WOAH) 2021 • Vanessa Hahn, Dana Ruiter, Thomas Kleinbauer, Dietrich Klakow

We observe that, on both similar and distant target tasks and across all languages, the subspace-based representations transfer more effectively than standard BERT representations in the zero-shot setting, with improvements between F1 +10.9 and F1 +42.9 over the baselines across all tested monolingual and cross-lingual scenarios.

Sentence

Privacy Guarantees for De-identifying Text Transformations

1 code implementation • 7 Aug 2020 • David Ifeoluwa Adelani, Ali Davody, Thomas Kleinbauer, Dietrich Klakow

Machine Learning approaches to Natural Language Processing tasks benefit from a comprehensive collection of real-life user data.

BIG-bench Machine Learning • De-identification • +6

On the Effect of Normalization Layers on Differentially Private Training of Deep Neural Networks

1 code implementation • 19 Jun 2020 • Ali Davody, David Ifeoluwa Adelani, Thomas Kleinbauer, Dietrich Klakow

Differentially private stochastic gradient descent (DPSGD) is a variation of stochastic gradient descent based on the Differential Privacy (DP) paradigm, which can mitigate privacy threats that arise from the presence of sensitive information in training data.
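The DPSGD recipe this abstract refers to modifies vanilla SGD in two places: each example's gradient is clipped to a fixed norm, and calibrated Gaussian noise is added to the averaged clipped gradient before the parameter update. A minimal sketch of that generic update (illustrative only, not the paper's implementation; function name, parameters, and defaults are assumptions):

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
               noise_multiplier=1.1, rng=None):
    """One DPSGD update on a batch.

    1. Clip each per-example gradient to L2 norm <= clip_norm.
    2. Average the clipped gradients.
    3. Add Gaussian noise scaled by noise_multiplier * clip_norm / batch_size.
    4. Take a plain gradient step.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # scale down (never up) so the gradient's norm is at most clip_norm
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    noise = rng.normal(0.0, sigma, size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

The per-example clipping bounds each individual's influence on the update, which is what makes the added noise sufficient for a formal differential-privacy guarantee; the interaction of this noisy update with normalization layers is what the paper studies.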
