Search Results for author: Kimia Hamidieh

Found 7 papers, 1 paper with code

BendVLM: Test-Time Debiasing of Vision-Language Embeddings

1 code implementation • 7 Nov 2024 • Walter Gerych, Haoran Zhang, Kimia Hamidieh, Eileen Pan, Maanas Sharma, Thomas Hartvigsen, Marzyeh Ghassemi

Vision-language model (VLM) embeddings have been shown to encode biases present in their training data, such as societal biases that prescribe negative characteristics to members of various racial and gender identities.

Attribute • Image Generation • +1
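
A minimal sketch of what test-time embedding debiasing can look like, assuming a generic projection-based baseline rather than the BendVLM algorithm itself; the model checkpoint, prompts, and bias direction below are illustrative:

```python
# Generic sketch: project an estimated bias direction out of a CLIP image
# embedding at test time. Illustrative baseline, NOT the BendVLM method.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical attribute prompts used to estimate one bias direction.
prompts = ["a photo of a man", "a photo of a woman"]
inputs = processor(text=prompts, return_tensors="pt", padding=True)
with torch.no_grad():
    text_emb = model.get_text_features(**inputs)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

# Bias direction: difference between the two attribute embeddings.
bias_dir = text_emb[0] - text_emb[1]
bias_dir = bias_dir / bias_dir.norm()

def debias(image_emb: torch.Tensor) -> torch.Tensor:
    """Remove the embedding's component along the bias direction, renormalize."""
    proj = (image_emb @ bias_dir).unsqueeze(-1) * bias_dir
    out = image_emb - proj
    return out / out.norm(dim=-1, keepdim=True)
```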

Identifying Implicit Social Biases in Vision-Language Models

no code implementations • 1 Nov 2024 • Kimia Hamidieh, Haoran Zhang, Walter Gerych, Thomas Hartvigsen, Marzyeh Ghassemi

Finally, we analyze the source of these biases, showing that the same harmful stereotypes we identify are also present in a large image-text dataset used to train CLIP models.

Fairness
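
As a rough illustration of how stereotype presence in an image-text training corpus might be probed, a co-occurrence count over captions is one crude starting point; the vocabularies here are assumptions, not the paper's methodology:

```python
# Hypothetical sketch: count co-occurrences of identity terms and
# occupation/stereotype terms in a caption corpus.
from collections import Counter

identity_terms = {"woman", "man"}          # assumed identity vocabulary
stereotype_terms = {"nurse", "engineer"}   # assumed occupation vocabulary

def cooccurrence(captions):
    counts = Counter()
    for cap in captions:
        toks = set(cap.lower().split())
        for i in toks & identity_terms:
            for s in toks & stereotype_terms:
                counts[(i, s)] += 1
    return counts

print(cooccurrence(["a woman working as a nurse",
                    "a man who is an engineer"]))
```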

Views Can Be Deceiving: Improved SSL Through Feature Space Augmentation

no code implementations • 28 May 2024 • Kimia Hamidieh, Haoran Zhang, Swami Sankaranarayanan, Marzyeh Ghassemi

Despite the growing popularity of methods that learn from unlabeled data, the extent to which the resulting representations rely on spurious features for prediction remains unclear.

Representation Learning • Self-Supervised Learning
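
One standard way to test whether a frozen representation encodes a spurious feature is a linear probe: if a linear classifier can recover the spurious attribute from the features, the representation carries it. The sketch below uses synthetic features and is not the feature-space augmentation method the paper proposes:

```python
# Sketch: linear probe for a spurious attribute on frozen SSL features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
reps = rng.normal(size=(1000, 128))        # stand-in for frozen SSL features
spurious = (reps[:, 0] > 0).astype(int)    # toy spurious attribute label

X_tr, X_te, y_tr, y_te = train_test_split(reps, spurious, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"spurious-attribute probe accuracy: {probe.score(X_te, y_te):.2f}")
```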

Selective Classification Via Neural Network Training Dynamics

no code implementations • 26 May 2022 • Stephan Rabanser, Anvith Thudi, Kimia Hamidieh, Adam Dziedzic, Nicolas Papernot

Selective classification is the task of rejecting inputs on which a model would predict incorrectly, trading off input-space coverage against model accuracy.

Classification
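
For context, the standard softmax-response baseline makes this trade-off concrete: abstain whenever the top-class probability falls below a threshold, then trace coverage against accuracy. This sketch is that baseline, not the training-dynamics method the paper proposes:

```python
# Softmax-response baseline for selective classification.
import numpy as np

def selective_metrics(probs, labels, threshold):
    """Coverage and accuracy when abstaining below a confidence threshold."""
    accept = probs.max(axis=1) >= threshold
    coverage = accept.mean()
    preds = probs.argmax(axis=1)
    accuracy = (preds[accept] == labels[accept]).mean() if accept.any() else float("nan")
    return coverage, accuracy

# Toy predictions from an arbitrary classifier.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(10), size=500)
labels = rng.integers(0, 10, size=500)
for t in (0.0, 0.3, 0.6):
    cov, acc = selective_metrics(probs, labels, t)
    print(f"threshold={t:.1f}  coverage={cov:.2f}  accuracy={acc:.2f}")
```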

The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations

no code implementations • 6 May 2022 • Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi

Across two different black-box model architectures and four popular explainability methods, we find that the approximation quality of explanation models, also known as fidelity, differs significantly between subgroups.

BIG-bench Machine Learning • Fairness
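
Fidelity here is the degree to which the explanation (surrogate) model reproduces the black-box model's predictions; computing it per subgroup exposes gaps like those the paper reports. A minimal sketch, with an assumed agreement metric and toy data:

```python
# Sketch: per-subgroup fidelity as black-box/surrogate prediction agreement.
import numpy as np

def fidelity_by_group(blackbox_preds, surrogate_preds, groups):
    """Fraction of inputs where the surrogate matches the black box, per group."""
    return {g: (blackbox_preds[groups == g] == surrogate_preds[groups == g]).mean()
            for g in np.unique(groups)}

# Toy data: the surrogate agrees with the black box ~90% of the time.
rng = np.random.default_rng(0)
bb = rng.integers(0, 2, size=400)
sur = np.where(rng.random(400) < 0.9, bb, 1 - bb)
groups = rng.integers(0, 2, size=400)
print(fidelity_by_group(bb, sur, groups))
```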
