Authors: Liu, Jiacheng; Zhou, Hang; Wei, Shida; Ma, Rui; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Title: DiffPop: Plausibility-Guided Object Placement Diffusion for Image Composition
Date issued: 2024 (available 2024-10-13)
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.15246
URI: https://diglib.eg.org/handle/10.1111/cgf15246
Pages: 12
License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Image manipulation; Computing methodologies → Computer vision

Abstract: In this paper, we address the problem of plausible object placement for the challenging task of realistic image composition. We propose DiffPop, the first framework that utilizes a plausibility-guided denoising diffusion probabilistic model to learn the scale and spatial relations among multiple objects and the corresponding scene image. First, we train an unguided diffusion model to directly learn the object placement parameters in a self-supervised manner. Then, we develop a human-in-the-loop pipeline that exploits human labeling of the diffusion-generated composite images to provide weak supervision for training a structural plausibility classifier. The classifier is further used to guide the diffusion sampling process towards generating plausible object placements. Experimental results verify the superiority of our method in producing plausible and diverse composite images on the new Cityscapes-OP dataset and the public OPA dataset, and demonstrate its potential in applications such as data augmentation and multi-object placement tasks. Our dataset and code will be released.
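
The abstract describes guiding the diffusion sampling process with a plausibility classifier. Below is a minimal sketch (not the authors' released code) of classifier-guided DDPM sampling over a low-dimensional placement vector, following the standard classifier-guidance formulation; the placement parameterization (scale, cx, cy), the `eps_model`/`classifier` interfaces, and all hyperparameters are assumptions for illustration.

```python
# Sketch of classifier-guided DDPM sampling for object placement.
# All network interfaces and hyperparameters here are hypothetical.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)       # standard linear DDPM schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def guided_sample(eps_model, classifier, scene_feat, guidance_scale=2.0):
    """Sample a placement vector x = (scale, cx, cy) for a given scene.

    eps_model(x_t, t, scene_feat) -> predicted noise       (hypothetical API)
    classifier(x_t, t, scene_feat) -> plausibility logit   (hypothetical API)
    """
    x = torch.randn(1, 3)                    # start from Gaussian noise
    for t in reversed(range(T)):
        t_batch = torch.full((1,), t, dtype=torch.long)
        eps = eps_model(x, t_batch, scene_feat)

        # Classifier guidance: shift the predicted noise by the gradient of
        # the plausibility logit w.r.t. the noisy placement vector.
        with torch.enable_grad():
            x_in = x.detach().requires_grad_(True)
            logit = classifier(x_in, t_batch, scene_feat).sum()
            grad = torch.autograd.grad(logit, x_in)[0]
        eps = eps - guidance_scale * torch.sqrt(1.0 - alpha_bars[t]) * grad

        # Standard DDPM posterior mean; add noise except at the final step.
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) \
               / torch.sqrt(alphas[t])
        if t > 0:
            mean = mean + torch.sqrt(betas[t]) * torch.randn_like(x)
        x = mean
    return x                                 # guided (scale, cx, cy) sample
```

Raising `guidance_scale` pushes samples towards placements the classifier deems structurally plausible, at the cost of diversity; the unguided model of the abstract corresponds to `guidance_scale=0`.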