Show simple item record

dc.contributor.author: Nalbach, Oliver
dc.contributor.author: Arabadzhiyska, Elena
dc.contributor.author: Mehta, Dushyant
dc.contributor.author: Seidel, Hans-Peter
dc.contributor.author: Ritschel, Tobias
dc.contributor.editor: Zwicker, Matthias and Sander, Pedro
dc.date.accessioned: 2017-06-19T06:50:49Z
dc.date.available: 2017-06-19T06:50:49Z
dc.date.issued: 2017
dc.identifier.issn: 1467-8659
dc.identifier.uri: http://dx.doi.org/10.1111/cgf.13225
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf13225
dc.description.abstract: In computer vision, convolutional neural networks (CNNs) achieve unprecedented performance for inverse problems where RGB pixel appearance is mapped to attributes such as positions, normals or reflectance. In computer graphics, screen space shading has boosted the quality of real-time rendering, converting the same kind of attributes of a virtual scene back to appearance, enabling effects like ambient occlusion, indirect light, scattering and many more. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading renders screen space effects at competitive quality and speed while not being programmed by human experts but learned from example images.
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: Computing methodologies → Neural networks
dc.subject: Rendering
dc.subject: Rasterization
dc.title: Deep Shading: Convolutional Neural Networks for Screen Space Shading
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Lighting and Shading
dc.description.volume: 36
dc.description.number: 4
dc.identifier.doi: 10.1111/cgf.13225
dc.identifier.pages: 065-078
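The abstract's "diagonal problem" — mapping per-pixel scene attributes (a G-buffer of normals, depth, etc.) back to appearance with a convolutional network — can be illustrated with a minimal sketch. This is not the paper's actual network; it is a toy two-layer CNN with random weights and an assumed 4-channel G-buffer layout, standing in for the learned Deep Shading model:

```python
import numpy as np

def conv2d(x, w, b):
    """Naive 'same'-padded 2D convolution.
    x: (C_in, H, W) input, w: (C_out, C_in, k, k) filters, b: (C_out,) bias."""
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    h, wd = x.shape[1], x.shape[2]
    out = np.empty((c_out, h, wd))
    for o in range(c_out):
        for i in range(h):
            for j in range(wd):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[o]) + b[o]
    return out

rng = np.random.default_rng(0)

# Hypothetical G-buffer: per-pixel normals (3 channels) + depth (1 channel)
gbuffer = rng.standard_normal((4, 8, 8))

# Two tiny conv layers with a ReLU in between; in Deep Shading the weights
# would be learned from example images, here they are random placeholders
w1, b1 = rng.standard_normal((8, 4, 3, 3)) * 0.1, np.zeros(8)
w2, b2 = rng.standard_normal((1, 8, 3, 3)) * 0.1, np.zeros(1)

hidden = np.maximum(conv2d(gbuffer, w1, b1), 0.0)  # ReLU activation
shading = conv2d(hidden, w2, b2)                    # 1-channel appearance output

print(shading.shape)  # (1, 8, 8)
```

Each output pixel is a function of a local attribute neighborhood, which is why a CNN is a natural fit for screen space effects such as ambient occlusion.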


This item appears in the following Collection(s)

  • 36-Issue 4
    Rendering 2017 - Symposium Proceedings
