Search Results for author: Matthias Aßenmacher

Found 10 papers, 7 papers with code

Pre-trained language models evaluating themselves - A comparative study

1 code implementation • insights (ACL) 2022 • Philipp Koch, Matthias Aßenmacher, Christian Heumann

The evaluation of generated text has received renewed attention in recent years with the introduction of model-based metrics.

Negation
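
As a minimal illustration of what a model-based metric looks like in practice (this is not the paper's code; the bert-score package and the example sentences are stand-ins), one can score candidates against references with BERTScore:

    # Score candidate text against references with BERTScore, a common
    # model-based metric. Requires the third-party package: pip install bert-score
    from bert_score import score

    candidates = ["The cat sat on the mat."]
    references = ["A cat was sitting on the mat."]

    # P, R, F1 are torch tensors with one entry per candidate/reference pair.
    P, R, F1 = score(candidates, references, lang="en", verbose=False)
    print(f"BERTScore F1: {F1.mean().item():.4f}")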

How Prevalent is Gender Bias in ChatGPT? -- Exploring German and English ChatGPT Responses

1 code implementation • 21 Sep 2023 • Stefanie Urchs, Veronika Thurner, Matthias Aßenmacher, Christian Heumann, Stephanie Thiemichen

With the introduction of ChatGPT, OpenAI made large language models (LLMs) accessible to users with limited IT expertise.

Classifying multilingual party manifestos: Domain transfer across country, time, and genre

1 code implementation • 31 Jul 2023 • Matthias Aßenmacher, Nadja Sauter, Christian Heumann

We explore the potential of domain transfer across geographical locations, languages, time, and genre in a large-scale database of political manifestos.

Domain Classification

How Different Is Stereotypical Bias Across Languages?

1 code implementation • 14 Jul 2023 • Ibrahim Tolga Öztürk, Rostislav Nedelchev, Christian Heumann, Esteban Garces Arias, Marius Roger, Bernd Bischl, Matthias Aßenmacher

Recent studies have demonstrated how to assess the stereotypical bias in pre-trained English language models.
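
One standard recipe for such an assessment, sketched below for illustration only (the paper's actual probes, languages, and models are not reproduced here), compares a masked language model's pseudo-log-likelihood for a stereotypical sentence with that of a minimally edited counterpart, in the spirit of CrowS-Pairs:

    # Illustrative bias probe with Hugging Face transformers (assumed setup,
    # not the paper's code): a higher pseudo-log-likelihood for the
    # stereotypical sentence suggests the model prefers the stereotype.
    import torch
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
    model.eval()

    def pseudo_log_likelihood(sentence: str) -> float:
        """Sum the log-probability of each token when it is masked in turn."""
        ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
        total = 0.0
        for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
            masked = ids.clone()
            masked[i] = tokenizer.mask_token_id
            with torch.no_grad():
                logits = model(masked.unsqueeze(0)).logits[0, i]
            total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
        return total

    stereo = "Women are bad at math."
    anti = "Men are bad at math."
    # Positive difference: the model assigns the stereotype higher likelihood.
    print(pseudo_log_likelihood(stereo) - pseudo_log_likelihood(anti))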

ActiveGLAE: A Benchmark for Deep Active Learning with Transformers

1 code implementation • 16 Jun 2023 • Lukas Rauch, Matthias Aßenmacher, Denis Huseljic, Moritz Wirth, Bernd Bischl, Bernhard Sick

Deep active learning (DAL) seeks to reduce annotation costs by enabling the model to actively query instance annotations from which it expects to learn the most.

Active Learning
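
A toy version of the query loop described above, using uncertainty sampling with scikit-learn stand-ins rather than the transformer pipelines ActiveGLAE actually benchmarks (all names and data here are illustrative):

    # Deep active learning in miniature: train, query the instances the
    # model is least certain about, add their labels, repeat.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, random_state=0)
    # Seed set with both classes present, so the first fit is well-defined.
    labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
    pool = [i for i in range(len(X)) if i not in set(labeled)]

    for _ in range(5):
        clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
        probs = clf.predict_proba(X[pool])
        # Uncertainty sampling: query the instances with the lowest maximum
        # class probability, i.e. where the model expects to learn the most.
        uncertainty = 1.0 - probs.max(axis=1)
        queried = [pool[i] for i in np.argsort(uncertainty)[-10:]]
        labeled += queried
        pool = [i for i in pool if i not in set(queried)]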

Multimodal Deep Learning

1 code implementation • 12 Jan 2023 • Cem Akkus, Luyang Chu, Vladana Djakovic, Steffen Jauch-Walser, Philipp Koch, Giacomo Loss, Christopher Marquardt, Marco Moldovan, Nadja Sauter, Maximilian Schneider, Rickmer Schulte, Karol Urbanczyk, Jann Goschenhofer, Christian Heumann, Rasmus Hvingelby, Daniel Schalk, Matthias Aßenmacher

This book is the result of a seminar in which we reviewed multimodal approaches and attempted to create a solid overview of the field, starting with the current state-of-the-art approaches in the two relevant subfields of Deep Learning, natural language processing and computer vision, individually.

Multimodal Deep Learning • Representation Learning

On the comparability of Pre-trained Language Models

no code implementations • 3 Jan 2020 • Matthias Aßenmacher, Christian Heumann

It is not always obvious where these improvements originate from, as it is not possible to completely disentangle the contributions of the three driving forces.

Cloud Computing • Language Modelling +2
