Search Results for author: Pranav Narayanan Venkit

Found 11 papers, 1 paper with code

A Study of Implicit Bias in Pretrained Language Models against People with Disabilities

no code implementations • COLING 2022 • Pranav Narayanan Venkit, Mukund Srinath, Shomir Wilson

Pretrained language models (PLMs) have been shown to exhibit sociodemographic biases, such as against gender and race, raising concerns of downstream biases in language technologies.

"Confidently Nonsensical?'': A Critical Survey on the Perspectives and Challenges of 'Hallucinations' in NLP

no code implementations • 11 Apr 2024 • Pranav Narayanan Venkit, Tatiana Chakravorti, Vipul Gupta, Heidi Biggs, Mukund Srinath, Koustava Goswami, Sarah Rajtmajer, Shomir Wilson

We investigate how hallucination in large language models (LLMs) is characterized in peer-reviewed literature, using a critical examination of 103 publications across NLP research.

Hallucination

From Melting Pots to Misrepresentations: Exploring Harms in Generative AI

no code implementations • 16 Mar 2024 • Sanjana Gautam, Pranav Narayanan Venkit, Sourojit Ghosh

With the widespread adoption of advanced generative models such as Gemini and GPT, there has been a notable increase in the incorporation of such models into sociotechnical systems, categorized under AI-as-a-Service (AIaaS).

The Sentiment Problem: A Critical Survey towards Deconstructing Sentiment Analysis

no code implementations • 18 Oct 2023 • Pranav Narayanan Venkit, Mukund Srinath, Sanjana Gautam, Saranya Venkatraman, Vipul Gupta, Rebecca J. Passonneau, Shomir Wilson

We conduct an inquiry into the sociotechnical aspects of sentiment analysis (SA) by critically examining 189 peer-reviewed papers on their applications, models, and datasets.

Ethics • Sentiment Analysis

CALM: A Multi-task Benchmark for Comprehensive Assessment of Language Model Bias

1 code implementation • 24 Aug 2023 • Vipul Gupta, Pranav Narayanan Venkit, Hugo Laurençon, Shomir Wilson, Rebecca J. Passonneau

We apply CALM to 20 large language models and find that, for 2 language model series, larger-parameter models tend to be more biased than smaller ones.

Language Modelling • Natural Language Inference • +4

Towards a Holistic Approach: Understanding Sociodemographic Biases in NLP Models using an Interdisciplinary Lens

no code implementations • 24 Aug 2023 • Pranav Narayanan Venkit

The rapid growth in the usage and applications of Natural Language Processing (NLP) in various sociotechnical solutions has highlighted the need for a comprehensive understanding of bias and its impact on society.

Automated Ableism: An Exploration of Explicit Disability Biases in Sentiment and Toxicity Analysis Models

no code implementations • 18 Jul 2023 • Pranav Narayanan Venkit, Mukund Srinath, Shomir Wilson

We analyze sentiment analysis and toxicity detection models to detect the presence of explicit bias against people with disabilities (PWD).

Sentiment Analysis

Sociodemographic Bias in Language Models: A Survey and Forward Path

no code implementations • 13 Jun 2023 • Vipul Gupta, Pranav Narayanan Venkit, Shomir Wilson, Rebecca J. Passonneau

This paper presents a comprehensive survey of work on sociodemographic bias in language models (LMs).

Nationality Bias in Text Generation

no code implementations • 5 Feb 2023 • Pranav Narayanan Venkit, Sanjana Gautam, Ruchi Panchanadikar, Ting-Hao 'Kenneth' Huang, Shomir Wilson

Little attention has been paid to analyzing nationality bias in language models, even though nationality is widely used as a factor to improve the performance of social NLP models.

Text Generation
