Search Results for author: Akshita Jha

Found 7 papers, 4 papers with code

ViSAGe: A Global-Scale Analysis of Visual Stereotypes in Text-to-Image Generation

no code implementations • 12 Jan 2024 • Akshita Jha, Vinodkumar Prabhakaran, Remi Denton, Sarah Laszlo, Shachi Dave, Rida Qadri, Chandan K. Reddy, Sunipa Dev

First, we show that stereotypical attributes in ViSAGe are three times as likely to be present in generated images of the corresponding identities as other attributes, and that the offensiveness of these depictions is especially high for identities from Africa, South America, and South East Asia.

Text-to-Image Generation

SeeGULL: A Stereotype Benchmark with Broad Geo-Cultural Coverage Leveraging Generative Models

1 code implementation • 19 May 2023 • Akshita Jha, Aida Davani, Chandan K. Reddy, Shachi Dave, Vinodkumar Prabhakaran, Sunipa Dev

Stereotype benchmark datasets are crucial to detect and mitigate social stereotypes about groups of people in NLP models.

Transformer-based Models for Long-Form Document Matching: Challenges and Empirical Analysis

no code implementations • 7 Feb 2023 • Akshita Jha, Adithya Samavedhi, Vineeth Rakesh, Jaideep Chandrashekar, Chandan K. Reddy

First, the performance gain provided by transformer-based models comes at a steep cost, both in terms of the required training time and the resource (memory and energy) consumption.

CodeAttack: Code-Based Adversarial Attacks for Pre-trained Programming Language Models

2 code implementations • 31 May 2022 • Akshita Jha, Chandan K. Reddy

Pre-trained programming language (PL) models (such as CodeT5, CodeBERT, and GraphCodeBERT) have the potential to automate software engineering tasks involving code understanding and code generation.

Code Translation • Translation

Supervised Contrastive Learning for Interpretable Long-Form Document Matching

1 code implementation • 20 Aug 2021 • Akshita Jha, Vineeth Rakesh, Jaideep Chandrashekar, Adithya Samavedhi, Chandan K. Reddy

When handling such long documents, there are three primary challenges: (i) the presence of different contexts for the same word throughout the document, (ii) small sections of contextually similar text between two documents, but dissimilar text in the remaining parts (this defies the basic understanding of "similarity"), and (iii) the coarse nature of a single global similarity measure which fails to capture the heterogeneity of the document content.

Contrastive Learning • Semantic Text Matching • +1

Fair Representation Learning using Interpolation Enabled Disentanglement

no code implementations • 31 Jul 2021 • Akshita Jha, Bhanukiran Vinzamuri, Chandan K. Reddy

In this paper, we propose a novel method to address two key questions: (a) can we simultaneously learn fair disentangled representations while ensuring the utility of the learned representation for downstream tasks, and (b) can we provide theoretical insights into when the proposed approach will be both fair and accurate?

Disentanglement • Fairness • +1
