CUPID: Contextual Understanding of Prompt-conditioned Image Distributions

Authors: Zhao, Yayan; Li, Mingwei; Berger, Matthew
Editors: Aigner, Wolfgang; Archambault, Daniel; Bujack, Roxana
Date available: 2024-05-21
Year: 2024
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.15086
Handle: https://diglib.eg.org/handle/10.1111/cgf15086
Pages: 12
License: Creative Commons Attribution 4.0 International License

Abstract: We present CUPID, a visualization method for the contextual understanding of prompt-conditioned image distributions. CUPID targets the visual analysis of distributions produced by modern text-to-image generative models, wherein a user specifies a scene via natural language and the model generates a set of images, each intended to satisfy the user's description. CUPID is designed to help understand the resulting distribution, using contextual cues to facilitate analysis: objects mentioned in the prompt, novel synthesized objects not explicitly mentioned, and their potential relationships. Central to CUPID is a novel method for visualizing high-dimensional distributions, wherein contextualized embeddings of objects found within images are mapped to a low-dimensional space via density-based embeddings. We show how such embeddings allow one to discover salient styles of objects within a distribution, as well as to identify anomalous, or rare, object styles. Moreover, we introduce conditional density embeddings, whereby conditioning on a given object allows one to compare object dependencies within the distribution. We employ CUPID to analyze image distributions produced by large-scale diffusion models; our experimental results offer insights into language misunderstanding in such models and biases in object composition, while also providing an interface for the discovery of typical, or rare, synthesized scenes.

CCS Concepts: Human-centered computing → Visualization techniques; Visual analytics; Visualization theory, concepts and paradigms
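As a rough illustration of the density-based embedding idea described in the abstract, the following Python sketch is a minimal, hypothetical example, not the authors' implementation: it assumes precomputed contextualized object embeddings, projects them with densMAP (the density-preserving option of UMAP in umap-learn), and flags low-density points in the layout as candidate rare object styles. The synthetic data, bandwidth, and all parameter choices here are illustrative stand-ins.

```python
# Minimal sketch, assuming precomputed contextualized object embeddings.
# Not the paper's method: densMAP + a kernel density estimate stand in
# for whatever density-based embedding CUPID actually uses.

import numpy as np
import umap  # umap-learn >= 0.5 provides the densmap option
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)

# Hypothetical stand-in for object embeddings extracted from generated
# images: two dense clusters (salient styles) plus a sparse scatter
# (rare styles).
common_a = rng.normal(loc=0.0, scale=0.3, size=(200, 64))
common_b = rng.normal(loc=2.0, scale=0.3, size=(200, 64))
rare = rng.normal(loc=1.0, scale=1.5, size=(20, 64))
X = np.vstack([common_a, common_b, rare])

# densMAP preserves local density in the 2D layout, one way to realize
# a "density-based embedding" of object styles.
reducer = umap.UMAP(n_components=2, densmap=True, random_state=42)
Y = reducer.fit_transform(X)

# Estimate density in the 2D layout and flag the lowest-density points
# as candidate anomalous / rare object styles.
kde = KernelDensity(bandwidth=0.5).fit(Y)
log_density = kde.score_samples(Y)
rare_idx = np.argsort(log_density)[:10]
print("candidate rare object styles (row indices):", rare_idx)
```

Under the same assumptions, the conditional density embeddings mentioned in the abstract could be sketched by running this pipeline only on embeddings drawn from images containing a chosen object, and comparing the resulting layouts against the unconditioned one.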