Title: Structured Pattern Expansion with Diffusion Models
Authors: Marzia Riso, Giuseppe Vecchio, Fabio Pellacini, Marc Comino Trinidad, Claudio Mancinelli, Filippo Maggioli, Chiara Romanengo, Daniela Cabiddu, Daniela Giorgi
Date: 2025-11-21 (issued 2025)
ISBN: 978-3-03868-296-7
ISSN: 2617-4855
DOI: https://doi.org/10.2312/stag.20251330
Handle: https://diglib.eg.org/handle/10.2312/stag20251330
Pages: 12

Abstract: Recent advances in diffusion models have significantly improved the synthesis of materials, textures, and 3D shapes. By conditioning these models on text or images, users can guide the generation, reducing the time required to create digital assets. In this paper, we address the synthesis of structured, stationary patterns, where diffusion models are generally less reliable and, more importantly, less controllable. Our approach leverages the generative capabilities of diffusion models specifically adapted to the pattern domain. It enables users to exercise direct control over the synthesis by expanding a partially hand-drawn pattern into a larger design while preserving the structure and details of the input. To enhance pattern quality, we fine-tune an image-pretrained diffusion model on structured patterns using Low-Rank Adaptation (LoRA), apply a noise rolling technique to ensure tileability, and utilize a patch-based approach to facilitate the generation of large-scale assets. We demonstrate the effectiveness of our method through a comprehensive set of experiments, showing that it outperforms existing models in generating diverse, consistent patterns that respond directly to user input. Code will be released at publication time at: https://github.com/marzia-riso/structured_pattern_expansion.

License: Attribution 4.0 International License
CCS Concepts: Computing methodologies → Texturing
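The abstract mentions a "noise rolling" technique for tileability. The sketch below is only an illustration of the general idea behind such techniques, not the paper's implementation: the latent is circularly shifted by a random offset before a denoising update and shifted back afterwards, so that image seams are repeatedly treated as interior content and the result wraps seamlessly. The function name `noise_rolling_step` and the placeholder denoiser are assumptions for illustration.

```python
import numpy as np

def noise_rolling_step(latent: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Illustrative 'noise rolling' step (not the paper's code).

    Circularly shift the latent by a random 2D offset, apply one
    (placeholder) denoising update, then undo the shift. Because
    np.roll wraps around, every seam is periodically seen as interior
    content by the update, which encourages a tileable result.
    """
    h, w = latent.shape[-2:]
    dy = int(rng.integers(0, h))
    dx = int(rng.integers(0, w))
    rolled = np.roll(latent, shift=(dy, dx), axis=(-2, -1))
    # Placeholder for one denoiser call; a real pipeline would run the
    # diffusion model's denoiser on `rolled` here.
    denoised = rolled * 0.99
    return np.roll(denoised, shift=(-dy, -dx), axis=(-2, -1))
```

In a real sampler this would be applied at each (or every few) denoising iterations with fresh random offsets, so no fixed seam position survives the full trajectory.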