Unified Neural Encoding of BTFs

dc.contributor.author: Rainer, Gilles (en_US)
dc.contributor.author: Ghosh, Abhijeet (en_US)
dc.contributor.author: Jakob, Wenzel (en_US)
dc.contributor.author: Weyrich, Tim (en_US)
dc.contributor.editor: Panozzo, Daniele and Assarsson, Ulf (en_US)
dc.date.accessioned: 2020-05-24T12:51:24Z
dc.date.available: 2020-05-24T12:51:24Z
dc.date.issued: 2020
dc.description.abstract: Realistic rendering using discrete reflectance measurements is challenging, because arbitrary directions on the light and view hemispheres are queried at render time, incurring large memory requirements and the need for interpolation. This explains the desire for compact and continuously parametrized models akin to analytic BRDFs; however, fitting BRDF parameters to complex data such as BTF texels can prove difficult, as models tend to describe restricted function spaces that cannot encompass real-world behavior. Recent advances in this area have increasingly relied on neural representations that are trained to reproduce acquired reflectance data. The associated training process is extremely costly and must typically be repeated for each material. Inspired by autoencoders, we propose a unified network architecture that is trained on a variety of materials and projects reflectance measurements into a shared latent parameter space. As in SVBRDF fitting, real-world materials are represented by parameter maps, and the decoder network is analogous to an analytic BRDF expression (likewise parametrized on light and view directions, for practical use in rendering). With this approach, encoding and decoding materials becomes a simple matter of evaluating the network. We train and validate on the BTF datasets of the University of Bonn, but the method places no prerequisites on either the number of angular reflectance samples or their positions. Additionally, we show that the latent space is well behaved and can be sampled from, for applications such as mipmapping and texture synthesis. (en_US)
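The abstract's core idea, a single decoder shared across materials that plays the role of an analytic BRDF, can be made concrete with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' released code: all class names, layer widths, and the latent dimensionality are assumptions for exposition only.

```python
# Hypothetical sketch of the unified decoder idea described in the abstract:
# a shared MLP mapping (per-texel latent code, light direction, view
# direction) to RGB reflectance. Sizes and names are illustrative guesses.
import torch
import torch.nn as nn

LATENT_DIM = 32  # assumed size of the shared latent parameter space


class ReflectanceDecoder(nn.Module):
    """Plays the role of an analytic BRDF: f(z, omega_i, omega_o) -> RGB."""

    def __init__(self, latent_dim: int = LATENT_DIM, hidden: int = 128):
        super().__init__()
        # Input: latent code plus 3D light and 3D view direction vectors.
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB reflectance value
        )

    def forward(self, z, wi, wo):
        return self.net(torch.cat([z, wi, wo], dim=-1))


# At render time each texel stores only its latent code (a "parameter map",
# as in SVBRDF fitting); shading one sample is a single forward pass.
decoder = ReflectanceDecoder()
z = torch.randn(1, LATENT_DIM)        # per-texel code produced by the encoder
wi = torch.tensor([[0.0, 0.0, 1.0]])  # incoming light direction
wo = torch.tensor([[0.0, 0.0, 1.0]])  # outgoing view direction
rgb = decoder(z, wi, wo)              # shape (1, 3)
```

Under this reading of the abstract, interpolation and sampling happen in the latent space rather than on raw measurements, so operations such as mipmapping or texture synthesis reduce to averaging or generating latent codes and decoding them as usual.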
dc.description.number: 2
dc.description.sectionheaders: Deep Learning for Rendering
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 39
dc.identifier.doi: 10.1111/cgf.13921
dc.identifier.issn: 1467-8659
dc.identifier.pages: 167-178
dc.identifier.uri: https://doi.org/10.1111/cgf.13921
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf13921
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd. (en_US)
dc.rights: Attribution 4.0 International License
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: Computer Graphics
dc.subject: Rendering
dc.subject: Material Appearance
dc.subject: BTFs and Neural Models
dc.title: Unified Neural Encoding of BTFs (en_US)
Files
Original bundle (showing 5 of 10 files):
- v39i2pp167-178_lowres.pdf (1.97 MB, Adobe Portable Document Format; Lowres Version)
- v39i2pp167-178.pdf (27.45 MB, Adobe Portable Document Format)
- carpet12-video.mp4 (756.35 KB)
- fabric12-video.mp4 (736.77 KB)
- felt12-video.mp4 (706.15 KB)