no code implementations • 26 Mar 2024 • Oscar Mañas, Pietro Astolfi, Melissa Hall, Candace Ross, Jack Urbanek, Adina Williams, Aishwarya Agrawal, Adriana Romero-Soriano, Michal Drozdzal
In this paper, we address these challenges and introduce a T2I optimization-by-prompting framework, OPT2I, which leverages a large language model (LLM) to improve prompt-image consistency in T2I models.
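The core idea — an outer loop in which an LLM proposes revised prompts and the best-scoring prompt is kept — can be sketched as below. This is a minimal toy illustration, not the paper's implementation: every helper (`generate_image`, `consistency_score`, `llm_revise`) is a hypothetical stand-in for, respectively, a text-to-image model, a prompt-image consistency metric, and an LLM call.

```python
# Hedged sketch of an optimization-by-prompting loop in the spirit of OPT2I.
# All helpers below are hypothetical stand-ins, not the paper's components.

def generate_image(prompt: str) -> str:
    """Stand-in for a text-to-image model call (hypothetical)."""
    return f"<image for: {prompt}>"

def consistency_score(prompt: str, image: str) -> float:
    """Stand-in for a prompt-image consistency metric (e.g. a CLIP-style
    similarity score). Here: a toy deterministic value for illustration."""
    return (len(prompt) % 10) / 10.0

def llm_revise(prompt: str, history: list) -> str:
    """Stand-in for an LLM proposing a revised prompt given scored attempts."""
    return prompt + ", highly detailed"

def optimize_prompt(user_prompt: str, iterations: int = 3) -> tuple:
    """Iteratively revise the prompt, keeping the best-scoring version."""
    prompt = user_prompt
    best_prompt, best_score = user_prompt, float("-inf")
    history = []  # (prompt, score) pairs shown to the reviser
    for _ in range(iterations):
        image = generate_image(prompt)
        score = consistency_score(prompt, image)
        history.append((prompt, score))
        if score > best_score:
            best_prompt, best_score = prompt, score
        prompt = llm_revise(prompt, history)
    return best_prompt, best_score
```

The returned prompt is guaranteed to score at least as well as the user's original, since the loop only ever replaces the incumbent with a higher-scoring candidate.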
no code implementations • 4 Oct 2023 • Julius Adebayo, Melissa Hall, Bowen Yu, Bobbie Chern
We empirically assess the proposed approach on a variety of datasets and find significant improvement, compared to alternative approaches, in identifying training inputs that improve a model's disparity metric.
no code implementations • 26 Sep 2023 • Jiachen Sun, Mark Ibrahim, Melissa Hall, Ivan Evtimov, Z. Morley Mao, Cristian Canton Ferrer, Caner Hazirbas
Inspired by the success of textual prompting, several studies have investigated the efficacy of visual prompt tuning.
no code implementations • ICCV 2023 • Laura Gustafson, Chloe Rolland, Nikhila Ravi, Quentin Duval, Aaron Adcock, Cheng-Yang Fu, Melissa Hall, Candace Ross
We present a new benchmark named FACET (FAirness in Computer Vision EvaluaTion), a large, publicly available evaluation set of 32k images for some of the most common vision tasks: image classification, object detection, and segmentation.
1 code implementation • 11 Aug 2023 • Melissa Hall, Candace Ross, Adina Williams, Nicolas Carion, Michal Drozdzal, Adriana Romero Soriano
The unprecedented photorealistic results achieved by recent text-to-image generative systems and their increasing use as plug-and-play content creation solutions make it crucial to understand their potential biases.
1 code implementation • 11 Apr 2023 • Laura Gustafson, Megan Richards, Melissa Hall, Caner Hazirbas, Diane Bouchacourt, Mark Ibrahim
As an example, we show that mitigating a model's vulnerability to texture can improve performance on the lower income level.
no code implementations • 16 Feb 2023 • Melissa Hall, Bobbie Chern, Laura Gustafson, Denisse Ventura, Harshad Kulkarni, Candace Ross, Nicolas Usunier
These metrics successfully incentivized performance improvements on person-centric tasks such as face analysis and are used to understand risks of modern models.
no code implementations • 26 Jan 2023 • Melissa Hall, Laura Gustafson, Aaron Adcock, Ishan Misra, Candace Ross
With these capabilities in mind, we ask: Do vision-language models exhibit gender bias when performing zero-shot image classification, object detection and semantic segmentation?
2 code implementations • 18 May 2022 • Eric Michael Smith, Melissa Hall, Melanie Kambadur, Eleonora Presani, Adina Williams
As language models grow in popularity, it becomes increasingly important to clearly measure all possible markers of demographic identity in order to avoid perpetuating existing societal harms.
no code implementations • 28 Mar 2022 • Berfin Simsek, Melissa Hall, Levent Sagun
Existing works show that although modern neural networks achieve remarkable generalization performance on in-distribution (ID) data, their accuracy drops significantly on out-of-distribution (OOD) datasets (Recht et al., 2018; Recht et al., 2019).
1 code implementation • 27 Jan 2022 • Melissa Hall, Laurens van der Maaten, Laura Gustafson, Maxwell Jones, Aaron Adcock
To enable this study, we design a simple image-classification problem in which we can tightly control (synthetic) biases.
1 code implementation • 10 Mar 2021 • Chloé Bakalar, Renata Barreto, Stevie Bergman, Miranda Bogen, Bobbie Chern, Sam Corbett-Davies, Melissa Hall, Isabel Kloumann, Michelle Lam, Joaquin Quiñonero Candela, Manish Raghavan, Joshua Simons, Jonathan Tannen, Edmund Tong, Kate Vredenburgh, Jiejing Zhao
We discuss how we disentangle normative questions of product and policy design (like, "how should the system trade off between different stakeholders' interests and needs?").