Search Results for author: Karolina Stańczak

Found 10 papers, 5 papers with code

Benchmarking Vision Language Models for Cultural Understanding

no code implementations • 15 Jul 2024 • Shravan Nayak, Kanishk Jain, Rabiul Awal, Siva Reddy, Sjoerd van Steenkiste, Lisa Anne Hendricks, Karolina Stańczak, Aishwarya Agrawal

Benchmarking VLMs, including GPT-4V and Gemini, on CulturalVQA reveals disparities in their level of cultural understanding across regions: performance is strong for North America but significantly lower for Africa.

Benchmarking, Question Answering, +2

A Multilingual Perspective on Probing Gender Bias

no code implementations • 15 Mar 2024 • Karolina Stańczak

Gender bias represents a form of systematic negative treatment that targets individuals based on their gender.

Grammatical Gender's Influence on Distributional Semantics: A Causal Perspective

no code implementations • 30 Nov 2023 • Karolina Stańczak, Kevin Du, Adina Williams, Isabelle Augenstein, Ryan Cotterell

However, when we control for the meaning of the noun, we find that grammatical gender has a near-zero effect on adjective choice, thereby calling the neo-Whorfian hypothesis into question.

Social Bias Probing: Fairness Benchmarking for Language Models

no code implementations • 15 Nov 2023 • Marta Marchiori Manerba, Karolina Stańczak, Riccardo Guidotti, Isabelle Augenstein

While the impact of social biases in language models has been recognized, prior bias-evaluation methods have relied on binary association tests over small datasets, which restricts our understanding of the complexity of these biases.

Benchmarking, Fairness, +1

Measuring Gender Bias in West Slavic Language Models

no code implementations • 12 Apr 2023 • Sandra Martinková, Karolina Stańczak, Isabelle Augenstein

Perhaps surprisingly, Czech, Slovak, and Polish language models produce more hurtful completions with men as subjects, which, upon inspection, we find is driven by completions related to violence, death, and sickness.

Language Modelling

Same Neurons, Different Languages: Probing Morphosyntax in Multilingual Pre-trained Models

2 code implementations • NAACL 2022 • Karolina Stańczak, Edoardo Ponti, Lucas Torroba Hennigen, Ryan Cotterell, Isabelle Augenstein

The success of multilingual pre-trained models is underpinned by their ability to learn representations shared by multiple languages even in the absence of any explicit supervision.

A Latent-Variable Model for Intrinsic Probing

2 code implementations • 20 Jan 2022 • Karolina Stańczak, Lucas Torroba Hennigen, Adina Williams, Ryan Cotterell, Isabelle Augenstein

The success of pre-trained contextualized representations has prompted researchers to analyze them for the presence of linguistic information.

Attribute

Quantifying Gender Biases Towards Politicians on Reddit

1 code implementation • 22 Dec 2021 • Sara Marjanovic, Karolina Stańczak, Isabelle Augenstein

Rather than revealing overt hostile or benevolent sexism, the nominal and lexical analyses suggest that the interest expressed in female politicians is not as professional or respectful as that expressed about male politicians.

Bias Detection, Gender Bias Detection
