Search Results for author: Ranjita Naik

Found 8 papers, 3 papers with code

GATE X-E: A Challenge Set for Gender-Fair Translations from Weakly-Gendered Languages

no code implementations • 22 Feb 2024 • Spencer Rarrick, Ranjita Naik, Sundar Poudel, Vishal Chowdhary

Neural Machine Translation (NMT) continues to improve in quality and adoption, yet the inadvertent perpetuation of gender bias remains a significant concern.

Machine Translation · NMT · +2

Reducing Gender Bias in Machine Translation through Counterfactual Data Generation

no code implementations • 27 Nov 2023 • Ranjita Naik, Spencer Rarrick, Vishal Chowdhary

By using this data to fine-tune an existing NMT model, they show that gender bias can be significantly mitigated, albeit at the expense of translation quality due to catastrophic forgetting.

counterfactual · Domain Adaptation · +3

Evaluating Gender Bias in the Translation of Gender-Neutral Languages into English

no code implementations • 15 Nov 2023 • Spencer Rarrick, Ranjita Naik, Sundar Poudel, Vishal Chowdhary

To address this gap, we introduce GATE X-E, an extension to the GATE corpus (Rarrick et al., 2023) that consists of human translations from Turkish, Hungarian, Finnish, and Persian into English.

Machine Translation · Sentence · +1

KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval

1 code implementation • 24 Oct 2023 • Marah I Abdin, Suriya Gunasekar, Varun Chandrasekaran, Jerry Li, Mert Yuksekgonul, Rahee Ghosh Peshawaria, Ranjita Naik, Besmira Nushi

Motivated by rising concerns around factual incorrectness and hallucinations of LLMs, we present KITAB, a new dataset for measuring constraint satisfaction abilities of language models.

Information Retrieval · Retrieval

Diversity of Thought Improves Reasoning Abilities of LLMs

no code implementations • 11 Oct 2023 • Ranjita Naik, Varun Chandrasekaran, Mert Yuksekgonul, Hamid Palangi, Besmira Nushi

Large language models (LLMs) are documented to struggle in settings that require complex reasoning.

Attention Satisfies: A Constraint-Satisfaction Lens on Factual Errors of Language Models

1 code implementation • 26 Sep 2023 • Mert Yuksekgonul, Varun Chandrasekaran, Erik Jones, Suriya Gunasekar, Ranjita Naik, Hamid Palangi, Ece Kamar, Besmira Nushi

We investigate the internal behavior of Transformer-based Large Language Models (LLMs) when they generate factually incorrect text.

Social Biases through the Text-to-Image Generation Lens

no code implementations • 30 Mar 2023 • Ranjita Naik, Besmira Nushi

In this paper, we take a multi-dimensional approach to studying and quantifying common social biases as reflected in the generated images, focusing on how occupations, personality traits, and everyday situations are depicted across representations of (perceived) gender, age, race, and geographical location.

Descriptive · Text-to-Image Generation

GATE: A Challenge Set for Gender-Ambiguous Translation Examples

1 code implementation • 7 Mar 2023 • Spencer Rarrick, Ranjita Naik, Varun Mathur, Sundar Poudel, Vishal Chowdhary

Although recent years have brought significant progress in improving translation of unambiguously gendered sentences, translation of ambiguously gendered input remains relatively unexplored.

Machine Translation · Translation
