Title: Controlled Image Variability via Diffusion Processes
Authors: Zhu, Yueze; Mitra, Niloy J.; Ceylan, Duygu; Li, Tzu-Mao
Date issued: 2025 (deposited 2025-05-09)
ISBN: 978-3-03868-268-4
ISSN: 1017-4656
DOI: https://doi.org/10.2312/egs.20251044
Handle: https://diglib.eg.org/handle/10.2312/egs20251044
Pages: 4
License: Attribution 4.0 International

Abstract: Diffusion models have shown remarkable abilities in generating realistic images. Unfortunately, diffusion processes do not directly produce diverse samples. Recent work has addressed this problem by applying a joint-particle time-evolving potential force that encourages varied and distinct generations. However, such methods focus on improving diversity across a batch of generations rather than on producing variations of a specific sample. In this paper, we propose a method for creating subtle variations of a single (generated) image: Single Sample Refinement, a simple, training-free method that increases the diversity of one specific sample at different levels of variability. This mode is useful for creative content generation, allowing users to explore controlled variations without sacrificing the identity of the main objects.
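To illustrate the general idea of training-free, strength-controlled variation of a single sample, the sketch below shows one common approach (SDEdit-style re-noising to an intermediate timestep followed by re-denoising), not the paper's Single Sample Refinement algorithm, whose details are not given in this record. The noise schedule, the `vary_sample` helper, and the toy `denoiser` stand-in for a trained model are all assumptions for illustration.

```python
# Minimal sketch, assuming a DDPM-style model: perturb a finished sample by
# re-noising it to an intermediate timestep, then re-denoising. The `strength`
# parameter trades identity preservation against variability. This is NOT the
# paper's method; it only illustrates the controlled-variation concept.

import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)        # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative products (alpha-bar_t)

def denoiser(x_t, t):
    """Placeholder for a trained noise-prediction network eps_theta(x_t, t)."""
    return torch.zeros_like(x_t)             # toy stand-in: predicts zero noise

@torch.no_grad()
def vary_sample(x0, strength=0.3, generator=None):
    """Return a variation of x0; strength in (0, 1] controls variability."""
    t_start = int(strength * (T - 1))        # deeper re-noising => larger change
    noise = torch.randn(x0.shape, generator=generator)
    # Forward diffusion to t_start: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
    x = alpha_bars[t_start].sqrt() * x0 + (1 - alpha_bars[t_start]).sqrt() * noise
    # Reverse DDPM updates from t_start back to 0
    for t in range(t_start, -1, -1):
        eps = denoiser(x, t)
        mean = (x - betas[t] / (1 - alpha_bars[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = mean + betas[t].sqrt() * torch.randn(x.shape, generator=generator)
        else:
            x = mean
    return x

# Usage: small strength yields a subtle variation, large strength a bolder one.
x0 = torch.randn(1, 3, 64, 64)               # stands in for a generated image
subtle = vary_sample(x0, strength=0.2)
bold = vary_sample(x0, strength=0.8)
```

The single `strength` knob corresponds to the abstract's "different levels of variability": re-noising only a little preserves the main objects' identity, while re-noising deeply allows more substantial variation.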