Title: Neural Screen Space Rendering of Direct Illumination
Authors: Suppan, Christian; Chalmers, Andrew; Zhao, Junhong; Doronin, Alex; Rhee, Taehyun
Editors: Lee, Sung-Hee; Zollmann, Stefanie; Okabe, Makoto; Wünsche, Burkhard
Date: 2021-10-14
Year: 2021
ISBN: 978-3-03868-162-5
DOI: https://doi.org/10.2312/pg.20211385
URI: https://diglib.eg.org:443/handle/10.2312/pg20211385
Pages: 37-42
Subjects: Computing methodologies; Rendering; Neural networks; Supervised learning by regression

Abstract: Neural rendering is a class of methods that use deep learning to produce novel images of scenes from more limited information than traditional rendering methods require. This is useful for information-scarce applications such as mixed reality or semantic photo synthesis, but it comes at the cost of control over the final appearance. We introduce the Neural Direct-illumination Renderer (NDR), a neural screen space renderer capable of rendering direct-illumination images of any geometry with opaque materials under a distant illuminant. The NDR uses screen space buffers describing material, geometry, and illumination as inputs, providing direct control over the output. We introduce the use of intrinsic image decomposition to allow a Convolutional Neural Network (CNN) to learn a mapping from a large number of pixel buffers to rendered images. The NDR predicts shading maps, which are subsequently combined with albedo maps to create a rendered image. We show that the NDR produces plausible images that can be edited by modifying the input maps, and that it marginally outperforms the state of the art while also providing more functionality.
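
The pipeline described in the abstract (screen-space buffers fed to a CNN that predicts a shading map, then multiplied by the albedo map to form the image) can be illustrated with a minimal sketch. The buffer set, network architecture, and resolution below are illustrative assumptions, not the actual NDR from the paper.

```python
# Minimal sketch of the buffers -> shading -> composite idea (assumptions, not the paper's NDR).
import torch
import torch.nn as nn

class ShadingNet(nn.Module):
    """Toy CNN mapping stacked screen-space buffers to a per-pixel RGB shading map."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, buffers: torch.Tensor) -> torch.Tensor:
        return self.net(buffers)

# Hypothetical screen-space inputs (batch of 1, 256x256): normals, depth,
# roughness, and an illumination encoding, concatenated along the channel axis.
normals   = torch.rand(1, 3, 256, 256)
depth     = torch.rand(1, 1, 256, 256)
roughness = torch.rand(1, 1, 256, 256)
illum     = torch.rand(1, 3, 256, 256)
buffers = torch.cat([normals, depth, roughness, illum], dim=1)

albedo = torch.rand(1, 3, 256, 256)            # albedo buffer, kept out of the CNN
shading = ShadingNet(buffers.shape[1])(buffers)  # predicted shading map

# Intrinsic-image-style composition: rendered image = albedo * predicted shading.
image = albedo * shading
```

Editing any input buffer (e.g. the illumination encoding or the albedo map) changes the composited image directly, which is the control the abstract attributes to this screen-space formulation.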