Training Priors Predict Text-To-Image Model Performance

23 May 2023 · Charles Lovering, Ellie Pavlick

Text-to-image models can often generate some relations, e.g., "astronaut riding horse", but fail to generate other relations composed of the same basic parts, e.g., "horse riding astronaut". These failures are often taken as evidence that models rely on training priors rather than constructing novel images compositionally. This paper tests this intuition on the Stable Diffusion 2.1 text-to-image model. By looking at the subject-verb-object (SVO) triads that underlie these prompts (e.g., "astronaut", "ride", "horse"), we find that the more often an SVO triad appears in the training data, the better the model can generate an image aligned with that triad. Here, "aligned" means that each term appears in the generated image in the proper relation to the others. Surprisingly, this increased frequency also diminishes how well the model can generate an image aligned with the flipped triad: for example, if "astronaut riding horse" appears frequently in the training data, images for "horse riding astronaut" tend to be poorly aligned. Our results thus show that current models are biased toward generating images with relations seen in training, and provide new data for the ongoing debate over whether these text-to-image models employ abstract compositional structure in the traditional sense, or rather interpolate between relations explicitly seen in the training data.
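A minimal sketch of the kind of analysis the abstract describes, assuming spaCy dependency parsing for SVO extraction, the Hugging Face diffusers pipeline for Stable Diffusion 2.1, and CLIP text-image similarity as a stand-in alignment metric; the paper's own frequency counts and alignment measure may be computed differently:

```python
# Hedged sketch: count an SVO triad's frequency in a caption corpus, then
# compare how well Stable Diffusion 2.1 renders the triad vs. its flip.
# CLIP similarity is used here only as a proxy for "alignment" (assumption).
import spacy
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

nlp = spacy.load("en_core_web_sm")

def svo_count(captions, triad):
    """Count captions whose dependency parse contains the (subject, verb, object) triad."""
    subj, verb, obj = triad
    count = 0
    for doc in nlp.pipe(captions):
        for tok in doc:
            if tok.pos_ == "VERB" and tok.lemma_ == verb:
                subjects = {c.lemma_ for c in tok.children if c.dep_ == "nsubj"}
                objects = {c.lemma_ for c in tok.children if c.dep_ == "dobj"}
                if subj in subjects and obj in objects:
                    count += 1
    return count

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1").to(device)
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def alignment_score(prompt):
    """Proxy alignment: CLIP similarity between the prompt and one generated image."""
    image = pipe(prompt).images[0]
    inputs = proc(text=[prompt], images=image, return_tensors="pt", padding=True).to(device)
    with torch.no_grad():
        out = clip(**inputs)
    return out.logits_per_image.item()

# A frequent triad vs. its flip.
print(alignment_score("an astronaut riding a horse"))
print(alignment_score("a horse riding an astronaut"))
```

Comparing the two scores for a frequent triad and its flip illustrates the asymmetry the paper reports; at scale, svo_count would be run over the model's actual caption corpus to correlate triad frequency with alignment.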
