Title: Deep Shading: Convolutional Neural Networks for Screen Space Shading
Authors: Nalbach, Oliver; Arabadzhiyska, Elena; Mehta, Dushyant; Seidel, Hans-Peter; Ritschel, Tobias
Editors: Zwicker, Matthias; Sander, Pedro
Date issued: 2017-06-19
Year: 2017
ISSN: 1467-8659
DOI: 10.1111/cgf.13225
URL: https://doi.org/10.1111/cgf.13225
Handle: https://diglib.eg.org:443/handle/10.1111/cgf13225
Pages: 065-078
Subjects: Computing methodologies → Neural networks; Rendering; Rasterization

Abstract: In computer vision, convolutional neural networks (CNNs) achieve unprecedented performance for inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen space shading has boosted the quality of real-time rendering, converting the same kind of attributes of a virtual scene back to appearance, enabling effects like ambient occlusion, indirect light, scattering and many more. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading renders screen space effects at competitive quality and speed while not being programmed by human experts but learned from example images.
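
The mapping the abstract describes — per-pixel attribute maps in, shaded RGB appearance out — can be sketched as a toy convolutional stack. This is a minimal NumPy illustration only; the channel layout (normal + depth + albedo) and the two-layer architecture are assumptions for demonstration, not the network described in the paper:

```python
import numpy as np

def conv2d(x, w, b):
    """Valid 3x3 convolution: x is (H, W, Cin), w is (3, 3, Cin, Cout), b is (Cout,)."""
    H, W, _ = x.shape
    kh, kw, _, Cout = w.shape
    out = np.zeros((H - kh + 1, W - kw + 1, Cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kh, j:j + kw, :]  # local attribute neighborhood
            out[i, j] = np.tensordot(patch, w, axes=([0, 1, 2], [0, 1, 2])) + b
    return out

def shade(attributes, w1, b1, w2, b2):
    """Two conv layers (ReLU in between) mapping attribute maps to RGB appearance."""
    h = np.maximum(conv2d(attributes, w1, b1), 0.0)  # ReLU activation
    return np.clip(conv2d(h, w2, b2), 0.0, 1.0)      # clamp to displayable range

rng = np.random.default_rng(0)
# Hypothetical per-pixel attributes: normal (3) + depth (1) + albedo (3) = 7 channels.
attrs = rng.random((16, 16, 7))
w1, b1 = rng.standard_normal((3, 3, 7, 8)) * 0.1, np.zeros(8)
w2, b2 = rng.standard_normal((3, 3, 8, 3)) * 0.1, np.zeros(3)
rgb = shade(attrs, w1, b1, w2, b2)
print(rgb.shape)  # (12, 12, 3): two valid 3x3 convolutions shrink each side by 4
```

In the actual method the weights would be learned from rendered example images rather than drawn at random, so that the network reproduces effects such as ambient occlusion or indirect light from the attribute buffers alone.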