Title: Bracket Diffusion: HDR Image Generation by Consistent LDR Denoising
Authors: Bemana, Mojtaba; Leimkühler, Thomas; Myszkowski, Karol; Seidel, Hans-Peter; Ritschel, Tobias
Editors: Bousseau, Adrien; Day, Angela
Date issued: 2025-05-09
Year: 2025
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.70086
Handle: https://diglib.eg.org/handle/10.1111/cgf70086
Pages: 13 pages
License: Attribution 4.0 International License

Abstract: We demonstrate generating HDR images using the concerted action of multiple black-box, pre-trained LDR image diffusion models. Common diffusion models are not HDR because, first, no sufficiently large HDR image dataset is available to re-train them, and, second, even if one were, re-training such models is infeasible for most compute budgets. Instead, we seek inspiration from the HDR image capture literature, which traditionally fuses sets of LDR images, called "exposure brackets", to produce a single HDR image. We operate multiple denoising processes to generate multiple LDR brackets that together form a valid HDR result. To this end, we introduce a bracket consistency term into the diffusion process to couple the brackets such that they agree across the exposure range they share. We demonstrate HDR versions of state-of-the-art unconditional, conditional, and restoration-type (LDR2HDR) generative modeling.
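The abstract refers to the classic HDR-capture idea of fusing an exposure bracket of LDR images into one HDR radiance map. A minimal sketch of that fusion step is below; the hat-shaped weighting and the fixed-gamma linearization are illustrative assumptions (standard choices in the HDR literature), not the paper's own method, and `merge_brackets` is a hypothetical helper name.

```python
import numpy as np

def merge_brackets(ldr_images, exposure_times, gamma=2.2):
    """Fuse LDR exposure brackets into a single HDR radiance map.

    ldr_images: list of float arrays with values in [0, 1], one per exposure.
    exposure_times: relative exposure time of each bracket.
    gamma: assumed display non-linearity used to linearize pixel values.
    """
    numerator = np.zeros_like(ldr_images[0], dtype=np.float64)
    denominator = np.zeros_like(ldr_images[0], dtype=np.float64)
    for img, t in zip(ldr_images, exposure_times):
        # Hat weight: trust mid-tones, distrust clipped shadows/highlights.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        # Undo the display non-linearity, then normalize by exposure time
        # so every bracket estimates the same scene radiance.
        radiance = (img ** gamma) / t
        numerator += w * radiance
        denominator += w
    return numerator / np.maximum(denominator, 1e-8)
```

For a scene of constant radiance, each bracket's linearized, exposure-normalized value recovers that radiance exactly, so the weighted average returns it unchanged; the weights only matter where some brackets are clipped.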