SLGDiffuser: Stroke-level Guidance Diffusion Model for Complex Scene Text Editing

Authors: Liu, Xiao Le; Wu, Lei; Wang, Chang Shuo; Dong, Pei; Meng, Xiang Xu
Editors: Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Date issued: 2024-10-13
ISBN: 978-3-03868-250-9
DOI: https://doi.org/10.2312/pg.20241308
Handle: https://diglib.eg.org/handle/10.2312/pg20241308
Pages: 12

Abstract: Scene Text Editing (STE) focuses on replacing text in images while preserving style and background. Existing methods often grapple with simultaneously learning different transformation rules for text and background, especially in complex scenes. This leads to several notable challenges, such as low accuracy in content, ineffective extraction of text styles, and suboptimal background reconstruction. To address these challenges, we introduce SLGDiffuser, a stroke-level guidance diffusion model specifically designed for complex scene text editing. SLGDiffuser features a stroke-level guidance text conversion module that processes target text through character encoding and utilizes ContourLoss with stroke features to improve text accuracy. It also benefits from the proposed stroke-enhanced strategy, which enhances text integrity by leveraging detailed stroke information. Furthermore, we introduce a unified instruction-based background reconstruction module that fine-tunes a pre-trained diffusion model. It enables the application of a standardized instruction prompt to reconstruct a variety of complex scenes effectively. Tested extensively, our model outperforms existing methods across diverse real-world datasets. We release code and model weights at https://github.com/lxlde/SLGDiffuser.

License: Attribution 4.0 International License

CCS Concepts: Imaging → Image/Video Editing; Image Processing; Methods and Applications → Artificial Intelligence