Search Results for author: Valentin Hofmann

Found 17 papers, 10 papers with code

Dialect prejudice predicts AI decisions about people's character, employability, and criminality

1 code implementation • 1 Mar 2024 • Valentin Hofmann, Pratyusha Ria Kalluri, Dan Jurafsky, Sharese King

Here, we demonstrate that language models embody covert racism in the form of dialect prejudice: we extend research showing that Americans hold raciolinguistic stereotypes about speakers of African American English and find that language models have the same prejudice, exhibiting covert stereotypes that are more negative than any human stereotypes about African Americans ever experimentally recorded, although closest to the ones from before the civil rights movement.

Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models

1 code implementation • 26 Feb 2024 • Paul Röttger, Valentin Hofmann, Valentina Pyatkin, Musashi Hinck, Hannah Rose Kirk, Hinrich Schütze, Dirk Hovy

Motivated by this discrepancy, we challenge the prevailing constrained evaluation paradigm for values and opinions in LLMs and explore more realistic unconstrained evaluations.

Tasks: Multiple-choice

Graph-enhanced Large Language Models in Asynchronous Plan Reasoning

no code implementations • 5 Feb 2024 • Fangru Lin, Emanuele La Malfa, Valentin Hofmann, Elle Michelle Yang, Anthony Cohn, Janet B. Pierrehumbert

Reasoning about asynchronous plans is challenging since it requires sequential and parallel planning to optimize time costs.

Paloma: A Benchmark for Evaluating Language Model Fit

no code implementations • 16 Dec 2023 • Ian Magnusson, Akshita Bhagia, Valentin Hofmann, Luca Soldaini, Ananya Harsh Jha, Oyvind Tafjord, Dustin Schwenk, Evan Pete Walsh, Yanai Elazar, Kyle Lo, Dirk Groeneveld, Iz Beltagy, Hannaneh Hajishirzi, Noah A. Smith, Kyle Richardson, Jesse Dodge

We invite submissions to our benchmark and organize results by comparability based on compliance with guidelines such as removal of benchmark contamination from pretraining.

Tasks: Language Modelling

CaMEL: Case Marker Extraction without Labels

1 code implementation • ACL 2022 • Leonie Weissweiler, Valentin Hofmann, Masoud Jalili Sabet, Hinrich Schütze

We introduce CaMEL (Case Marker Extraction without Labels), a novel and challenging task in computational morphology that is especially relevant for low-resource languages.

Geographic Adaptation of Pretrained Language Models

no code implementations • 16 Mar 2022 • Valentin Hofmann, Goran Glavaš, Nikola Ljubešić, Janet B. Pierrehumbert, Hinrich Schütze

While pretrained language models (PLMs) have been shown to possess a plethora of linguistic knowledge, the existing body of research has largely neglected extralinguistic knowledge, which is generally difficult to obtain by pretraining on text alone.

Tasks: Language Identification, Language Modelling, +2

Modeling Ideological Salience and Framing in Polarized Online Groups with Graph Neural Networks and Structured Sparsity

1 code implementation • Findings (NAACL) 2022 • Valentin Hofmann, Xiaowen Dong, Janet B. Pierrehumbert, Hinrich Schütze

The increasing polarization of online political discourse calls for computational tools that automatically detect and monitor ideological divides in social media.

Dynamic Contextualized Word Embeddings

1 code implementation • ACL 2021 • Valentin Hofmann, Janet B. Pierrehumbert, Hinrich Schütze

Static word embeddings that represent words by a single vector cannot capture the variability of word meaning in different linguistic and extralinguistic contexts.

Tasks: Language Modelling, Word Embeddings

A Graph Auto-encoder Model of Derivational Morphology

no code implementations • ACL 2020 • Valentin Hofmann, Hinrich Schütze, Janet Pierrehumbert

The auto-encoder models MWF in English surprisingly well by combining syntactic and semantic information with associative information from the mental lexicon.

Predicting the Growth of Morphological Families from Social and Linguistic Factors

no code implementations • ACL 2020 • Valentin Hofmann, Janet Pierrehumbert, Hinrich Schütze

We present the first study that examines the evolution of morphological families, i.e., sets of morphologically related words such as "trump", "antitrumpism", and "detrumpify", in social media.
