Authors: Sin, Zackary P. T.; Ng, Peter H. F.; Leong, Hong Va
Editors: Lee, Sung-Hee; Zollmann, Stefanie; Okabe, Makoto; Wünsche, Burkhard
Date: 2021-10-14
Year: 2021
ISBN: 978-3-03868-162-5
DOI: https://doi.org/10.2312/pg.20211384
URI: https://diglib.eg.org:443/handle/10.2312/pg20211384
Title: Neural Proxy: Empowering Neural Volume Rendering for Animation
Pages: 31-36
CCS Concepts: Computing methodologies → Computer graphics; Machine learning

Abstract: Achieving photo-realistic results is an enticing proposition for the computer graphics community. Great progress has been made in the past decades, but the cost of human expertise has also grown. Neural rendering is a promising candidate for reducing this cost, as it relies on data to construct the scene representation. However, one key component for adapting neural rendering to practical use is currently missing: animation. There is little discussion on how to enable neural rendering to synthesize frames for unseen animations. To fill this research gap, we propose neural proxy, a novel neural rendering model that uses animatable proxies to represent photo-realistic targets. Via a careful combination of components from neural volume rendering and neural texture, our model is able to render unseen animations without any temporal learning. Experimental results show that the proposed model significantly outperforms current neural rendering methods.