no code implementations • 15 Jul 2024 • Shravan Nayak, Kanishk Jain, Rabiul Awal, Siva Reddy, Sjoerd van Steenkiste, Lisa Anne Hendricks, Karolina Stańczak, Aishwarya Agrawal
Benchmarking VLMs on CulturalVQA, including GPT-4V and Gemini, reveals a disparity in their level of cultural understanding across regions: performance is strong for North America but significantly lower for Africa.
no code implementations • 15 Mar 2024 • Karolina Stańczak
Gender bias represents a form of systematic negative treatment that targets individuals based on their gender.
no code implementations • 30 Nov 2023 • Karolina Stańczak, Kevin Du, Adina Williams, Isabelle Augenstein, Ryan Cotterell
However, when we control for the meaning of the noun, we find that grammatical gender has a near-zero effect on adjective choice, thereby calling the neo-Whorfian hypothesis into question.
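To make the "controlling for the meaning of the noun" step concrete, here is a minimal sketch with toy data and hypothetical variables (not the paper's actual analysis): regress an adjective-choice outcome on grammatical gender, with and without a meaning covariate, and compare the gender coefficient.

```python
# Toy illustration (hypothetical data) of controlling for a confound:
# if gender correlates with meaning, and meaning drives adjective choice,
# the naive gender effect vanishes once meaning is included.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
meaning = rng.normal(size=n)                      # proxy for noun semantics
gender = (meaning + rng.normal(size=n) > 0) * 1   # gender correlates with meaning
adjective = 2.0 * meaning + rng.normal(size=n)    # choice driven by meaning only

naive = sm.OLS(adjective, sm.add_constant(np.column_stack([gender]))).fit()
controlled = sm.OLS(adjective,
                    sm.add_constant(np.column_stack([gender, meaning]))).fit()
print(f"gender effect, naive:      {naive.params[1]:+.3f}")
print(f"gender effect, controlled: {controlled.params[1]:+.3f}")  # near zero
```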
no code implementations • 15 Nov 2023 • Marta Marchiori Manerba, Karolina Stańczak, Riccardo Guidotti, Isabelle Augenstein
While the impact of social biases in language models has been recognized, prior methods for bias evaluation have been restricted to binary association tests on small datasets, limiting our understanding of the complexities of bias.
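For context, a minimal sketch of the kind of binary association test the snippet refers to, in the style of WEAT; the embeddings and word sets here are hypothetical stand-ins, not the paper's data.

```python
# WEAT-style binary association test: compare how strongly two target
# word sets associate with two attribute word sets in an embedding space.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical embeddings; in practice these come from a trained model.
emb = {w: rng.normal(size=50) for w in
       ["doctor", "nurse", "he", "she", "career", "family"]}

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def assoc(w, A, B):
    # Mean similarity to attribute set A minus mean similarity to set B.
    return (np.mean([cos(emb[w], emb[a]) for a in A])
            - np.mean([cos(emb[w], emb[b]) for b in B]))

A, B = ["he", "career"], ["she", "family"]
targets_x, targets_y = ["doctor"], ["nurse"]
effect = (np.mean([assoc(x, A, B) for x in targets_x])
          - np.mean([assoc(y, A, B) for y in targets_y]))
print(f"binary association effect: {effect:.3f}")
```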
1 code implementation • 21 May 2023 • Nadav Borenstein, Karolina Stańczak, Thea Rolskov, Natália da Silva Perez, Natacha Klein Käfer, Isabelle Augenstein
We find that there is a trade-off between the stability of the word embeddings and their compatibility with the historical dataset.
Optical Character Recognition (OCR) +1
no code implementations • 12 Apr 2023 • Sandra Martinková, Karolina Stańczak, Isabelle Augenstein
Perhaps surprisingly, Czech, Slovak, and Polish language models produce more hurtful completions with men as subjects; upon inspection, we find this is driven by completions related to violence, death, and sickness.
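A minimal sketch of this template-based probing pattern; multilingual BERT and the English templates below are stand-ins for the Czech, Slovak, and Polish models and prompts actually studied.

```python
# Fill a masked slot after a gendered subject and inspect the top
# completions, which can then be scored for hurtful content.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-multilingual-cased")

templates = [
    "He is known for [MASK].",   # the paper's templates are in Czech/Slovak/Polish
    "She is known for [MASK].",
]
for t in templates:
    top = fill(t, top_k=5)
    print(t, "->", [c["token_str"] for c in top])
```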
2 code implementations • NAACL 2022 • Karolina Stańczak, Edoardo Ponti, Lucas Torroba Hennigen, Ryan Cotterell, Isabelle Augenstein
The success of multilingual pre-trained models is underpinned by their ability to learn representations shared by multiple languages even in the absence of any explicit supervision.
2 code implementations • 20 Jan 2022 • Karolina Stańczak, Lucas Torroba Hennigen, Adina Williams, Ryan Cotterell, Isabelle Augenstein
The success of pre-trained contextualized representations has prompted researchers to analyze them for the presence of linguistic information.
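A minimal probing sketch, assuming the standard setup such analyses build on: freeze an encoder, extract token representations, and train a lightweight classifier to predict a linguistic property; the features and labels below are random stand-ins.

```python
# Probing classifier: high probe accuracy on frozen representations is
# taken as evidence that the property is encoded in them.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in for frozen encoder outputs: 1000 tokens x 768 dims.
X = rng.normal(size=(1000, 768))
y = rng.integers(0, 2, size=1000)  # hypothetical property labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")  # ~chance on random data
```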
1 code implementation • 22 Dec 2021 • Sara Marjanovic, Karolina Stańczak, Isabelle Augenstein
Rather than revealing overt hostile or benevolent sexism, the results of the nominal and lexical analyses suggest that this interest is not as professional or respectful as that expressed about male politicians.
1 code implementation • 15 Apr 2021 • Karolina Stańczak, Sagnik Ray Choudhury, Tiago Pimentel, Ryan Cotterell, Isabelle Augenstein
Recent research has demonstrated that large pre-trained language models reflect societal biases expressed in natural language.
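A minimal sketch of one standard way such biases are surfaced (not necessarily the paper's exact method): compare a masked language model's probabilities for gendered pronouns in occupation templates.

```python
# Score gendered pronouns at a masked position and compare probabilities
# across occupation templates.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pronoun_probs(template):
    inputs = tok(template, return_tensors="pt")
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = logits.softmax(-1)
    return {p: probs[tok.convert_tokens_to_ids(p)].item() for p in ("he", "she")}

print(pronoun_probs("[MASK] works as a doctor."))
print(pronoun_probs("[MASK] works as a nurse."))
```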