Title: Light the Sprite: Pixel Art Dynamic Light Map Generation
Authors: Nikolov, Ivan; Ceylan, Duygu; Li, Tzu-Mao
Published: 2025-05-09
ISBN: 978-3-03868-268-4
ISSN: 1017-4656
DOI: https://doi.org/10.2312/egs.20251032
Handle: https://diglib.eg.org/handle/10.2312/egs20251032
Pages: 4
License: Attribution 4.0 International License (CC BY 4.0)

Abstract: Correct lighting and shading are vital for pixel art design. Automating texture generation, such as normal, depth, and occlusion maps, has been a long-standing focus. We extend this line of work by proposing a deep learning model that generates point- and directional-light maps from RGB pixel art sprites and specified light vectors. Our approach modifies a UNet architecture with CIN layers to incorporate positional and directional light information, using ZoeDepth to produce depth data for training. Testing on a popular pixel art dataset shows that the generated light maps closely match those derived from depth or normal maps, as well as those created manually in dedicated programs. The model effectively relights complex sprites across styles and runs in real time, enhancing artist workflows. The code and dataset are available at https://github.com/IvanNik17/light-sprite.

CCS Concepts: Computing methodologies → Non-photorealistic rendering; Image-based rendering; Theory of computation → Machine learning theory
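The abstract describes conditioning a UNet on the light vector through CIN (conditional instance normalization) layers. As a minimal sketch of how such conditioning can work, assuming PyTorch, a hypothetical `ConditionalInstanceNorm2d` module (not the authors' implementation), and a small MLP that maps the light vector to per-channel scale and shift parameters:

```python
import torch
import torch.nn as nn

class ConditionalInstanceNorm2d(nn.Module):
    """Instance norm whose affine scale/shift are predicted from a
    conditioning vector (here: a light position/direction vector).
    Illustrative sketch only, not the paper's actual layer."""
    def __init__(self, num_features: int, cond_dim: int):
        super().__init__()
        # Plain instance norm; the affine parameters come from the condition.
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # Hypothetical mapping: light vector -> per-channel gamma and beta.
        self.to_gamma_beta = nn.Linear(cond_dim, num_features * 2)

    def forward(self, x: torch.Tensor, light_vec: torch.Tensor) -> torch.Tensor:
        gamma, beta = self.to_gamma_beta(light_vec).chunk(2, dim=1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1) for broadcasting
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        return gamma * self.norm(x) + beta

# Usage: modulate a UNet feature map with a 3D light vector.
feat = torch.randn(2, 64, 32, 32)   # example feature maps
light = torch.randn(2, 3)           # e.g. normalized light direction/position
cin = ConditionalInstanceNorm2d(64, cond_dim=3)
out = cin(feat, light)              # same shape as feat
```

Inserting such a layer at each UNet resolution would let the light vector modulate features globally without concatenating spatial channels; whether the paper does exactly this is an assumption based on the abstract alone.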