no code implementations • COLING 2022 • Pranav Narayanan Venkit, Mukund Srinath, Shomir Wilson
Pretrained language models (PLMs) have been shown to exhibit sociodemographic biases, such as against gender and race, raising concerns of downstream biases in language technologies.
no code implementations • 11 Apr 2024 • Pranav Narayanan Venkit, Tatiana Chakravorti, Vipul Gupta, Heidi Biggs, Mukund Srinath, Koustava Goswami, Sarah Rajtmajer, Shomir Wilson
We investigate how hallucination in large language models (LLMs) is characterized in peer-reviewed literature, through a critical examination of 103 publications across NLP research.
no code implementations • 16 Mar 2024 • Sanjana Gautam, Pranav Narayanan Venkit, Sourojit Ghosh
With the widespread adoption of advanced generative models such as Gemini and GPT, these models are increasingly being incorporated into sociotechnical systems, a category of offerings known as AI-as-a-Service (AIaaS).
no code implementations • 18 Oct 2023 • Pranav Narayanan Venkit, Mukund Srinath, Sanjana Gautam, Saranya Venkatraman, Vipul Gupta, Rebecca J. Passonneau, Shomir Wilson
We conduct an inquiry into the sociotechnical aspects of sentiment analysis (SA) by critically examining 189 peer-reviewed papers on their applications, models, and datasets.
1 code implementation • 24 Aug 2023 • Vipul Gupta, Pranav Narayanan Venkit, Hugo Laurençon, Shomir Wilson, Rebecca J. Passonneau
We apply CALM to 20 large language models and find that, for two language model series, models with more parameters tend to be more biased than smaller ones.
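In the spirit of CALM's counterfactual evaluation, the sketch below fills a single task template with names associated with different demographic groups and compares per-group accuracy. The template, name lists, and `model_answer` stub are illustrative assumptions, not CALM's actual prompts or interface.

```python
# Minimal sketch of a counterfactual bias probe: fill the same task
# template with names associated with different demographic groups and
# compare per-group task accuracy. The `model_answer` stub stands in
# for any real LLM call (hypothetical).

from statistics import mean

TEMPLATE = "{name} left the keys on the table. Who left the keys? Answer: "

GROUPS = {
    "group_a": ["Emily", "Greg", "Anne"],
    "group_b": ["Lakisha", "Jamal", "Aisha"],
}

def model_answer(prompt: str) -> str:
    """Stand-in for a real LLM call; replace with an API or pipeline call."""
    # Trivial heuristic so the sketch runs end to end.
    return prompt.split()[0]

def group_accuracy(names: list[str]) -> float:
    scores = []
    for name in names:
        prompt = TEMPLATE.format(name=name)
        scores.append(float(model_answer(prompt).strip() == name))
    return mean(scores)

acc = {g: group_accuracy(names) for g, names in GROUPS.items()}
print(acc, "gap:", abs(acc["group_a"] - acc["group_b"]))
```

A persistent accuracy gap between the groups, aggregated over many templates and tasks, is the kind of signal a benchmark like CALM turns into a bias score.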
no code implementations • 24 Aug 2023 • Pranav Narayanan Venkit
The rapid growth in the use of Natural Language Processing (NLP) across sociotechnical solutions has highlighted the need for a comprehensive understanding of bias and its impact on society.
no code implementations • 8 Aug 2023 • Pranav Narayanan Venkit, Sanjana Gautam, Ruchi Panchanadikar, Ting-Hao 'Kenneth' Huang, Shomir Wilson
We investigate the potential for nationality biases in natural language processing (NLP) models using human evaluation methods.
no code implementations • 18 Jul 2023 • Pranav Narayanan Venkit, Mukund Srinath, Shomir Wilson
We analyze sentiment analysis and toxicity detection models to detect the presence of explicit bias against people with disabilities (PWD).
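A common way to surface such explicit bias is a perturbation test: score the same sentence with and without a disability mention and look for a systematic shift. Below is a minimal sketch using NLTK's VADER sentiment scorer; the template and descriptor terms are illustrative, not the paper's exact protocol.

```python
# Perturbation probe: compare sentiment scores for the same sentence
# with and without a disability mention. Templates and terms are
# illustrative only, not the paper's exact protocol.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

TEMPLATE = "I am a {descriptor}person."
DESCRIPTORS = ["", "deaf ", "blind ", "tall "]  # "" is the neutral baseline

for d in DESCRIPTORS:
    text = TEMPLATE.format(descriptor=d)
    score = sia.polarity_scores(text)["compound"]  # -1 (neg) .. +1 (pos)
    print(f"{text!r}: {score:+.3f}")

# A systematic drop in the compound score when a disability term is
# inserted, relative to the neutral baseline, signals explicit bias
# in the scoring model.
```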
no code implementations • 13 Jun 2023 • Vipul Gupta, Pranav Narayanan Venkit, Shomir Wilson, Rebecca J. Passonneau
This paper presents a comprehensive survey of work on sociodemographic bias in language models (LMs).
no code implementations • 5 Feb 2023 • Pranav Narayanan Venkit, Sanjana Gautam, Ruchi Panchanadikar, Ting-Hao 'Kenneth' Huang, Shomir Wilson
Little attention has been paid to analyzing nationality bias in language models, even though nationality is widely used as a feature to improve the performance of social NLP models.
no code implementations • 25 Nov 2021 • Pranav Narayanan Venkit, Shomir Wilson
The results show that all of the models tested exhibit strong negative biases on sentences that mention disability.