Title: An Energy-Conserving Hair Shading Model Based on Neural Style Transfer
Authors: Qiao, Zhi; Kanai, Takashi
Editors: Lee, Sung-hee; Zollmann, Stefanie; Okabe, Makoto; Wuensche, Burkhard
Issued: 2020-10-29
ISBN: 978-3-03868-120-5
DOI: https://doi.org/10.2312/pg.20201222
Handle: https://diglib.eg.org:443/handle/10.2312/pg20201222
Pages: 1-6
Keywords: Computing methodologies; Image-based rendering; Neural networks

Abstract: We present a novel approach for shading photorealistic hair animation, an essential visual element for depicting realistic virtual characters. Our model shades high-quality hair quickly by extending conditional Generative Adversarial Networks. Furthermore, our method is much faster than previous, computationally expensive rendering algorithms and produces fewer artifacts than other neural image-translation methods. In this work, we provide a novel energy-conserving hair shading model, which retains the vast majority of semi-transparent appearances and accurately reproduces the interaction with the lights of the scene. Our method is easy to implement, and it is faster and computationally more efficient than previous algorithms.
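The abstract describes shading hair by extending conditional GANs, i.e. a generator that maps an input (condition) image to a shaded output image. The paper does not specify its architecture here; the following is only a minimal toy sketch of the conditioning idea, with a per-pixel linear map standing in for the real convolutional generator (all names and shapes are hypothetical).

```python
import numpy as np

def conditional_generator(cond_image, weights, bias):
    """Toy stand-in for a cGAN generator: a per-pixel linear map driven
    entirely by the condition image (the real model would be a conv net)."""
    # cond_image: (H, W, C_in); weights: (C_in, C_out); bias: (C_out,)
    out = cond_image @ weights + bias
    # Sigmoid keeps the predicted shading values in (0, 1).
    return 1.0 / (1.0 + np.exp(-out))

# Hypothetical example: a 4x4 "hair structure" condition map with 3 channels.
rng = np.random.default_rng(0)
cond = rng.random((4, 4, 3))
w = rng.standard_normal((3, 1))
b = np.zeros(1)
shaded = conditional_generator(cond, w, b)
print(shaded.shape)  # (4, 4, 1)
```

The key point illustrated is that the output is a deterministic function of the condition image, which is what lets an adversarially trained generator replace a physically based hair renderer at inference time.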