Show simple item record

dc.contributor.author	Li, Rui	en_US
dc.contributor.author	Rückert, Darius	en_US
dc.contributor.author	Wang, Yuanhao	en_US
dc.contributor.author	Idoughi, Ramzi	en_US
dc.contributor.author	Heidrich, Wolfgang	en_US
dc.contributor.editor	Bender, Jan	en_US
dc.contributor.editor	Botsch, Mario	en_US
dc.contributor.editor	Keim, Daniel A.	en_US
dc.description.abstract	Neural rendering with implicit neural networks has recently emerged as an attractive proposition for scene reconstruction, achieving excellent quality albeit at high computational cost. While the most recent generation of such methods has made progress on rendering (inference) times, very little progress has been made on improving reconstruction (training) times. In this work we present Neural Adaptive Scene Tracing (NAScenT), which directly trains a hybrid explicit-implicit neural representation. NAScenT uses a hierarchical octree representation with one neural network per leaf node, and combines this representation with a two-stage sampling process that concentrates ray samples where they matter most: near object surfaces. As a result, NAScenT can reconstruct challenging scenes, including both large, sparsely populated volumes such as UAV-captured outdoor environments and small scenes with high geometric complexity. NAScenT outperforms existing neural rendering approaches in both quality and training time.	en_US
dc.publisher	The Eurographics Association	en_US
dc.rights	Attribution 4.0 International License
dc.subject	CCS Concepts: Computing methodologies --> Ray tracing; Image-based rendering
dc.subject	Computing methodologies
dc.subject	Ray tracing
dc.subject	Image-based rendering
dc.title	Neural Adaptive Scene Tracing (NAScenT)	en_US
dc.description.seriesinformation	Vision, Modeling, and Visualization
dc.description.sectionheaders	Joint Session
dc.identifier.pages	8 pages
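The abstract describes a two-stage sampling process that concentrates ray samples near object surfaces. A common way to realize this is hierarchical importance sampling along each ray: uniform coarse samples first, then fine samples drawn by inverting the CDF of the coarse weights. The sketch below illustrates that idea only; it is a minimal NumPy illustration under these assumptions, not the authors' implementation, and the function name and argument shapes are hypothetical.

```python
import numpy as np

def two_stage_samples(t_near, t_far, coarse_weights, n_fine, rng):
    """Stage 1: uniform coarse bins along the ray segment [t_near, t_far].
    Stage 2: draw n_fine samples by inverse-CDF sampling of the coarse
    weights, so fine samples concentrate where weights are large
    (e.g. near surfaces). Hypothetical helper, not from the paper."""
    n_coarse = len(coarse_weights)
    # Stage 1: uniform coarse bin edges along the ray segment.
    edges = np.linspace(t_near, t_far, n_coarse + 1)
    # Normalized CDF over the coarse bins (epsilon avoids division by zero).
    w = np.asarray(coarse_weights, dtype=float) + 1e-5
    cdf = np.concatenate([[0.0], np.cumsum(w / w.sum())])
    # Stage 2: uniform variates, inverted through the CDF.
    u = rng.uniform(size=n_fine)
    idx = np.searchsorted(cdf, u, side="right") - 1
    idx = np.clip(idx, 0, n_coarse - 1)
    # Linear position within the selected coarse bin.
    frac = (u - cdf[idx]) / (cdf[idx + 1] - cdf[idx])
    return edges[idx] + frac * (edges[idx + 1] - edges[idx])
```

With a weight vector peaked in one bin, nearly all fine samples fall inside that bin, which is the "concentrate samples where they matter" behavior the abstract refers to.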
