Search Results for author: Robert Wolfe

Found 9 papers, 5 papers with code

Evaluating Biased Attitude Associations of Language Models in an Intersectional Context

1 code implementation • 7 Jul 2023 • Shiva Omrani Sabbaghi, Robert Wolfe, Aylin Caliskan

Adapting the projection-based approach to embedding association tests that quantify bias, we find that language models exhibit the most biased attitudes against gender identity, social class, and sexual orientation signals in language.

Sentence · Word Embeddings
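The projection-based measure described above can be sketched in a few lines: build a valence axis from pleasant and unpleasant attribute embeddings, then project group-signal words onto it. The word lists and toy embedding table below are illustrative placeholders, not the paper's actual stimuli or models.

```python
# Hypothetical sketch of a projection-based association measure:
# build a valence axis from pleasant/unpleasant attribute embeddings
# and project group-signal embeddings onto it. The word lists and the
# random "embedding table" are placeholders.
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def valence_axis(embed, pleasant, unpleasant):
    # Axis pointing from the unpleasant pole toward the pleasant pole.
    pos = np.mean([embed[w] for w in pleasant], axis=0)
    neg = np.mean([embed[w] for w in unpleasant], axis=0)
    return unit(pos - neg)

def attitude_score(embed, word, axis):
    # Scalar projection: > 0 leans pleasant, < 0 leans unpleasant.
    return float(unit(embed[word]) @ axis)

rng = np.random.default_rng(0)
embed = {w: rng.normal(size=300) for w in
         ["joy", "love", "agony", "filth", "gay", "poor"]}
axis = valence_axis(embed, ["joy", "love"], ["agony", "filth"])
print(attitude_score(embed, "gay", axis))
```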

Contrastive Language-Vision AI Models Pretrained on Web-Scraped Multimodal Data Exhibit Sexual Objectification Bias

1 code implementation • 21 Dec 2022 • Robert Wolfe, Yiwei Yang, Bill Howe, Aylin Caliskan

A first experiment uses standardized images of women from the Sexual OBjectification and EMotion (SOBEM) Database, and finds that human characteristics are disassociated from images of objectified women: the model's recognition of emotional state is mediated by whether the subject is fully or partially clothed.

American == White in Multimodal Language-and-Image AI

no code implementations • 1 Jul 2022 • Robert Wolfe, Aylin Caliskan

In an image captioning task, BLIP remarks upon the race of Asian individuals as much as 36% of the time, but never remarks upon race for White individuals.

Image Captioning · Question Answering
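A caption audit like the one above can be approximated by tallying how often generated captions contain race terms for each group of images. The captions and the marker-term list below are illustrative placeholders, not the paper's data.

```python
# Hypothetical tally of race mentions in generated captions; the
# captions and the marker-term list are illustrative placeholders.
RACE_TERMS = {"asian", "white", "black"}

def mention_rate(captions):
    # Fraction of captions containing at least one race term.
    hits = sum(any(t in c.lower().split() for t in RACE_TERMS)
               for c in captions)
    return hits / len(captions)

captions_for_asian_faces = ["an asian woman smiling", "a woman in a park"]
captions_for_white_faces = ["a man wearing a suit", "a person outdoors"]
print(mention_rate(captions_for_asian_faces))  # 0.5
print(mention_rate(captions_for_white_faces))  # 0.0
```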

Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics

no code implementations • 7 Jun 2022 • Aylin Caliskan, Pimparkar Parth Ajay, Tessa Charlesworth, Robert Wolfe, Mahzarin R. Banaji

Using the Single-Category Word Embedding Association Test, we demonstrate the widespread prevalence of gender biases that also show differences in: (a) frequencies of words associated with men versus women; (b) part-of-speech tags in gender-associated words; (c) semantic categories in gender-associated words; and (d) valence, arousal, and dominance in gender-associated words.

Word Embeddings
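Assuming the standard WEAT formulation (Caliskan et al., 2017), the single-category effect size for one target word is the difference in its mean cosine similarity to two attribute sets, standardized by the spread of all the similarities. A minimal sketch with toy vectors:

```python
# SC-WEAT effect size for a single target word, following the standard
# WEAT formulation: standardized difference in mean cosine similarity
# to attribute sets A and B. Vectors here are toy placeholders.
import numpy as np

def cos(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def sc_weat(w, A, B):
    sims = [cos(w, a) for a in A] + [cos(w, b) for b in B]
    mean_a = np.mean([cos(w, a) for a in A])
    mean_b = np.mean([cos(w, b) for b in B])
    return (mean_a - mean_b) / np.std(sims, ddof=1)

rng = np.random.default_rng(1)
w = rng.normal(size=50)                      # target word vector
A = [rng.normal(size=50) for _ in range(8)]  # e.g., male attribute words
B = [rng.normal(size=50) for _ in range(8)]  # e.g., female attribute words
print(sc_weat(w, A, B))
```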

Markedness in Visual Semantic AI

1 code implementation • 23 May 2022 • Robert Wolfe, Aylin Caliskan

The model is more likely to rank the unmarked "person" label higher than labels denoting gender for Male individuals (26.7% of the time) vs.
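The ranking setup can be sketched with the Hugging Face CLIP interface: score an image against an unmarked "person" prompt and gendered prompts, then compare ranks. The image path and prompt wording below are assumptions for illustration, not the paper's exact protocol.

```python
# Minimal sketch of the label-ranking setup using Hugging Face CLIP;
# the image path is a placeholder and the prompts are illustrative.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a person", "a photo of a man", "a photo of a woman"]
image = Image.open("face.jpg")  # placeholder path

inputs = processor(text=labels, images=image,
                   return_tensors="pt", padding=True)
logits = model(**inputs).logits_per_image[0]  # image-text similarities
ranked = sorted(zip(labels, logits.tolist()), key=lambda x: -x[1])
print(ranked)  # does the unmarked "person" label outrank gendered ones?
```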

Evidence for Hypodescent in Visual Semantic AI

1 code implementation • 22 May 2022 • Robert Wolfe, Mahzarin R. Banaji, Aylin Caliskan

We examine the state-of-the-art multimodal "visual semantic" model CLIP ("Contrastive Language Image Pretraining") for the rule of hypodescent, or one-drop rule, whereby multiracial people are more likely to be assigned a racial or ethnic label corresponding to a minority or disadvantaged racial or ethnic group than to the equivalent majority or advantaged group.

MORPH

Contrastive Visual Semantic Pretraining Magnifies the Semantics of Natural Language Representations

no code implementations • ACL 2022 • Robert Wolfe, Aylin Caliskan

We find that contrastive visual semantic pretraining significantly mitigates the anisotropy found in contextualized word embeddings from GPT-2, such that the intra-layer self-similarity (mean pairwise cosine similarity) of CLIP word embeddings is under .25 in all layers, compared to greater than .95 in the top layer of GPT-2.

Image Captioning · Semantic Textual Similarity
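The intra-layer self-similarity statistic above is concrete enough to sketch: for each layer, average the pairwise cosine similarity between contextualized token embeddings. The single sentence below is a placeholder; a faithful replication would average over a large sample of contexts.

```python
# Sketch of the intra-layer self-similarity measure: mean pairwise
# cosine similarity of contextualized embeddings within each layer
# of GPT-2, via Hugging Face transformers. Input text is a placeholder.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)

text = "contrastive pretraining changes embedding geometry"
with torch.no_grad():
    out = model(**tok(text, return_tensors="pt"))

for layer, h in enumerate(out.hidden_states):       # (1, seq_len, dim)
    x = torch.nn.functional.normalize(h[0], dim=-1)
    sim = x @ x.T                                    # pairwise cosine sims
    n = sim.shape[0]
    mask = ~torch.eye(n, dtype=torch.bool)           # drop self-similarity
    print(layer, sim[mask].mean().item())
```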

VAST: The Valence-Assessing Semantics Test for Contextualizing Language Models

1 code implementation • 14 Mar 2022 • Robert Wolfe, Aylin Caliskan

VAST, the Valence-Assessing Semantics Test, is a novel intrinsic evaluation task for contextualized word embeddings (CWEs).

Word Embeddings · Word Similarity

Low Frequency Names Exhibit Bias and Overfitting in Contextualizing Language Models

no code implementations • EMNLP 2021 • Robert Wolfe, Aylin Caliskan

Moreover, we find a Spearman's r of .492 between racial bias and name frequency in BERT, indicating that lower-frequency minority-group names are more associated with unpleasantness.
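The reported statistic is a rank correlation between per-name corpus frequency and a per-name bias score. A minimal sketch with toy numbers (the paper's actual scores come from embedding association measures):

```python
# Sketch of the reported statistic: Spearman's rank correlation between
# per-name corpus frequency and a per-name bias score. All numbers are
# toy placeholders, not the paper's data (which reports r = .492).
from scipy.stats import spearmanr

name_freq  = [120_000, 45_000, 9_000, 1_200, 300]  # toy corpus counts
bias_score = [0.05, 0.12, 0.20, 0.33, 0.41]        # toy unpleasantness assoc.
r, p = spearmanr(name_freq, bias_score)
print(r, p)  # perfectly monotonic toy data gives r = -1.0
```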
