Search Results for author: Sarah Laszlo

Found 5 papers, 0 papers with code

Harm Amplification in Text-to-Image Models

no code implementations · 1 Feb 2024 · Susan Hao, Renee Shelby, Yuchi Liu, Hansa Srinivasan, Mukul Bhutani, Burcu Karagol Ayan, Shivani Poddar, Sarah Laszlo

Text-to-image (T2I) models have emerged as a significant advancement in generative AI; however, there exist safety concerns regarding their potential to produce harmful image outputs even when users input seemingly safe prompts.

ViSAGe: A Global-Scale Analysis of Visual Stereotypes in Text-to-Image Generation

no code implementations · 12 Jan 2024 · Akshita Jha, Vinodkumar Prabhakaran, Remi Denton, Sarah Laszlo, Shachi Dave, Rida Qadri, Chandan K. Reddy, Sunipa Dev

First, we show that stereotypical attributes in ViSAGe are thrice as likely to be present in generated images of corresponding identities as other attributes, and that the offensiveness of these depictions is especially high for identities from Africa, South America, and South East Asia.

Text-to-Image Generation

"Is a picture of a bird a bird": Policy recommendations for dealing with ambiguity in machine vision models

no code implementations · 27 Jun 2023 · Alicia Parrish, Sarah Laszlo, Lora Aroyo

Many questions that we ask about the world do not have a single clear answer, yet typical human annotation setups in machine learning assume there must be a single ground-truth label for all examples in every task.

Safety and Fairness for Content Moderation in Generative Models

no code implementations · 9 Jun 2023 · Susan Hao, Piyush Kumar, Sarah Laszlo, Shivani Poddar, Bhaktipriya Radharapu, Renee Shelby

With significant advances in generative AI, new technologies are rapidly being deployed with generative components.

Fairness

Imagen Editor and EditBench: Advancing and Evaluating Text-Guided Image Inpainting

no code implementations · CVPR 2023 · Su Wang, Chitwan Saharia, Ceslee Montgomery, Jordi Pont-Tuset, Shai Noy, Stefano Pellegrini, Yasumasa Onoe, Sarah Laszlo, David J. Fleet, Radu Soricut, Jason Baldridge, Mohammad Norouzi, Peter Anderson, William Chan

Through extensive human evaluation on EditBench, we find that object-masking during training leads to across-the-board improvements in text-image alignment -- such that Imagen Editor is preferred over DALL-E 2 and Stable Diffusion -- and, as a cohort, these models are better at object-rendering than text-rendering, and handle material/color/size attributes better than count/shape attributes.

Image Inpainting · Object
