Search Results for author: Melissa Hall

Found 12 papers, 5 papers with code

Improving Text-to-Image Consistency via Automatic Prompt Optimization

no code implementations · 26 Mar 2024 · Oscar Mañas, Pietro Astolfi, Melissa Hall, Candace Ross, Jack Urbanek, Adina Williams, Aishwarya Agrawal, Adriana Romero-Soriano, Michal Drozdzal

In this paper, we address these challenges and introduce a T2I optimization-by-prompting framework, OPT2I, which leverages a large language model (LLM) to improve prompt-image consistency in T2I models.

Tasks: Language Modelling, Large Language Model

Quantifying and mitigating the impact of label errors on model disparity metrics

no code implementations · 4 Oct 2023 · Julius Adebayo, Melissa Hall, Bowen Yu, Bobbie Chern

We empirically assess the proposed approach on a variety of datasets and find significant improvement, compared to alternative approaches, in identifying training inputs that improve a model's disparity metric.

VPA: Fully Test-Time Visual Prompt Adaptation

no code implementations · 26 Sep 2023 · Jiachen Sun, Mark Ibrahim, Melissa Hall, Ivan Evtimov, Z. Morley Mao, Cristian Canton Ferrer, Caner Hazirbas

Inspired by the success of textual prompting, several studies have investigated the efficacy of visual prompt tuning.

Tasks: Pseudo Label, Test-time Adaptation, +3

FACET: Fairness in Computer Vision Evaluation Benchmark

no code implementations · ICCV 2023 · Laura Gustafson, Chloe Rolland, Nikhila Ravi, Quentin Duval, Aaron Adcock, Cheng-Yang Fu, Melissa Hall, Candace Ross

We present a new benchmark named FACET (FAirness in Computer Vision EvaluaTion), a large, publicly available evaluation set of 32k images for some of the most common vision tasks: image classification, object detection, and segmentation.

Tasks: Fairness, Image Classification, +3

DIG In: Evaluating Disparities in Image Generations with Indicators for Geographic Diversity

1 code implementation · 11 Aug 2023 · Melissa Hall, Candace Ross, Adina Williams, Nicolas Carion, Michal Drozdzal, Adriana Romero Soriano

The unprecedented photorealistic results achieved by recent text-to-image generative systems and their increasing use as plug-and-play content creation solutions make it crucial to understand their potential biases.

Tasks: Benchmarking, Image Generation

Pinpointing Why Object Recognition Performance Degrades Across Income Levels and Geographies

1 code implementation · 11 Apr 2023 · Laura Gustafson, Megan Richards, Melissa Hall, Caner Hazirbas, Diane Bouchacourt, Mark Ibrahim

As an example, we show that mitigating a model's vulnerability to texture can improve performance on the lower income level.

Tasks: Object Recognition

Towards Reliable Assessments of Demographic Disparities in Multi-Label Image Classifiers

no code implementations · 16 Feb 2023 · Melissa Hall, Bobbie Chern, Laura Gustafson, Denisse Ventura, Harshad Kulkarni, Candace Ross, Nicolas Usunier

These metrics successfully incentivized performance improvements on person-centric tasks such as face analysis and are used to understand risks of modern models.

Tasks: Fairness, Multi-Label Image Classification, +1

Vision-Language Models Performing Zero-Shot Tasks Exhibit Gender-based Disparities

no code implementations · 26 Jan 2023 · Melissa Hall, Laura Gustafson, Aaron Adcock, Ishan Misra, Candace Ross

With these capabilities in mind, we ask: Do vision-language models exhibit gender bias when performing zero-shot image classification, object detection and semantic segmentation?

Tasks: Image Classification, Object Detection, +4

"I'm sorry to hear that": Finding New Biases in Language Models with a Holistic Descriptor Dataset

2 code implementations · 18 May 2022 · Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams

As language models grow in popularity, it becomes increasingly important to clearly measure all possible markers of demographic identity in order to avoid perpetuating existing societal harms.

Tasks: Sentence

Understanding out-of-distribution accuracies through quantifying difficulty of test samples

no code implementations · 28 Mar 2022 · Berfin Simsek, Melissa Hall, Levent Sagun

Existing works show that although modern neural networks achieve remarkable generalization performance on the in-distribution (ID) dataset, the accuracy drops significantly on out-of-distribution (OOD) datasets (Recht et al., 2018; Recht et al., 2019).

A Systematic Study of Bias Amplification

1 code implementation · 27 Jan 2022 · Melissa Hall, Laurens van der Maaten, Laura Gustafson, Maxwell Jones, Aaron Adcock

To enable this study, we design a simple image-classification problem in which we can tightly control (synthetic) biases.

Tasks: BIG-bench Machine Learning, Image Classification
