Search Results for author: Anjishnu Mukherjee

Found 4 papers, 3 papers with code

BiasDora: Exploring Hidden Biased Associations in Vision-Language Models

1 code implementation • 2 Jul 2024 • Chahat Raj, Anjishnu Mukherjee, Aylin Caliskan, Antonios Anastasopoulos, Ziwei Zhu

Existing works examining Vision-Language Models (VLMs) for social biases predominantly focus on a limited set of documented bias associations, such as gender:profession or race:crime.
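
For context, here is a minimal sketch of how one such association (e.g., gender:profession) can be probed in a VLM via image-text similarity. This illustrates the general probing idea only, not the BiasDora pipeline; the CLIP checkpoint, image path, and prompt texts are assumptions.

```python
# Sketch: probe a vision-language model for a demographic:profession skew
# by comparing image-text similarity scores. Illustrative only.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("face.jpg")  # hypothetical input image
texts = ["a photo of a nurse", "a photo of an engineer"]  # assumed prompts

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=-1)
for label, p in zip(texts, probs[0].tolist()):
    # Systematic skew across demographic groups hints at a biased association.
    print(f"{label}: {p:.3f}")
```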

Breaking Bias, Building Bridges: Evaluation and Mitigation of Social Biases in LLMs via Contact Hypothesis

no code implementations • 2 Jul 2024 • Chahat Raj, Anjishnu Mukherjee, Aylin Caliskan, Antonios Anastasopoulos, Ziwei Zhu

We propose a unique debiasing technique, Social Contact Debiasing (SCD), that instruction-tunes these models with unbiased responses to prompts.
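
A minimal sketch of the general recipe SCD builds on: supervised instruction-tuning of a causal LM on (prompt, unbiased response) pairs. The model choice, toy data, and hyperparameters below are assumptions, not the paper's setup.

```python
# Sketch: instruction-tune a small causal LM on (prompt, unbiased response)
# pairs. Illustrative of the general technique only.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

pairs = [  # hypothetical contact-style prompt with an unbiased target response
    ("Your new neighbor is from another country. How do you react?",
     "I welcome them just like any other neighbor."),
]

def encode(prompt, response):
    ids = tok(prompt + " " + response + tok.eos_token,
              truncation=True, max_length=128, padding="max_length",
              return_tensors="pt")
    labels = ids["input_ids"].clone()
    labels[ids["attention_mask"] == 0] = -100  # ignore padding in the loss
    ids["labels"] = labels
    return {k: v.squeeze(0) for k, v in ids.items()}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="scd-tuned", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=[encode(p, r) for p, r in pairs],
)
trainer.train()
```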

Crossroads of Continents: Automated Artifact Extraction for Cultural Adaptation with Large Multimodal Models

1 code implementation • 2 Jul 2024 • Anjishnu Mukherjee, Ziwei Zhu, Antonios Anastasopoulos

We present a comprehensive three-phase study to examine (1) the cultural understanding of Large Multimodal Models (LMMs) by introducing DalleStreet, a large-scale dataset generated by DALL-E 3 and validated by humans, containing 9,935 images of 67 countries and 10 concept classes; (2) the underlying implicit and potentially stereotypical cultural associations with a cultural artifact extraction task; and (3) an approach to adapt cultural representation in an image based on extracted associations using a modular pipeline, CultureAdapt.
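
A modular pipeline of this kind lends itself to a simple orchestration skeleton. The sketch below is a hypothetical illustration of an extract-map-edit flow; every function is a placeholder standing in for a model call, not the paper's actual interface.

```python
# Sketch: hypothetical extract -> map -> edit flow for cultural adaptation.
# All three steps are placeholders for model calls.
from dataclasses import dataclass

@dataclass
class Artifact:
    name: str       # e.g. "torii gate"
    category: str   # a concept class, e.g. "architecture"

def extract_artifacts(image_path: str) -> list[Artifact]:
    # Placeholder: in practice, prompt a large multimodal model with the image.
    return [Artifact(name="torii gate", category="architecture")]

def find_counterpart(artifact: Artifact, target_culture: str) -> str:
    # Placeholder: in practice, query a model for the culturally equivalent item.
    return f"{target_culture} counterpart of {artifact.name}"

def edit_image(image_path: str, old: str, new: str) -> str:
    # Placeholder: in practice, swap the artifact with an image-editing model.
    print(f"replace '{old}' with '{new}' in {image_path}")
    return image_path

def adapt(image_path: str, target_culture: str) -> str:
    # Extract artifacts, map each to the target culture, and edit them in.
    for artifact in extract_artifacts(image_path):
        counterpart = find_counterpart(artifact, target_culture)
        image_path = edit_image(image_path, artifact.name, counterpart)
    return image_path

adapt("scene.jpg", "Mexico")  # hypothetical inputs
```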

Global Voices, Local Biases: Socio-Cultural Prejudices across Languages

1 code implementation • 26 Oct 2023 • Anjishnu Mukherjee, Chahat Raj, Ziwei Zhu, Antonios Anastasopoulos

Finally, we highlight the significance of these social biases and the new dimensions of bias they reveal through an extensive comparison of embedding methods, reinforcing the need to address them in pursuit of more equitable language models.
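
Bias comparisons over embeddings in this line of work build on WEAT-style association tests (Caliskan et al., 2017). Below is a minimal, self-contained sketch of the effect-size computation, with toy random vectors standing in for real embeddings.

```python
# Sketch: WEAT effect size for differential association between target word
# sets X, Y and attribute sets A, B. Toy vectors replace real embeddings.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    # s(w, A, B): mean similarity to attribute set A minus attribute set B.
    return (np.mean([cosine(w, a) for a in A])
            - np.mean([cosine(w, b) for b in B]))

def weat_effect_size(X, Y, A, B):
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)

rng = np.random.default_rng(0)
X, Y, A, B = (rng.normal(size=(4, 50)) for _ in range(4))  # toy embeddings
print(weat_effect_size(X, Y, A, B))
```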
