Minimal Convolutional Neural Networks for Temporal Anti-Aliasing

Authors: Killian Herveau, Max Piochowiak, Carsten Dachsbacher
Editors: Jacco Bikker, Christiaan Gribble
Date: 2023-06-25
ISBN: 978-3-03868-229-5
ISSN: 2079-8687
DOI: https://doi.org/10.2312/hpg.20231134
URI: https://diglib.eg.org/handle/10.2312/hpg20231134
Pages: 33-41 (9 pages)
License: Attribution 4.0 International (CC BY 4.0)
CCS Concepts: Computing methodologies -> Antialiasing; Neural networks; Rendering
Keywords: Antialiasing, Neural networks, Rendering

Abstract: Existing deep learning methods for performing temporal anti-aliasing (TAA) in rendering are either closed source or rely on upsampling networks with a large operation count that are expensive to evaluate. We propose a simple deep learning architecture for TAA that combines only a few common primitives and is easy to assemble and to adapt to application needs. We use a fully convolutional neural network architecture with recurrent temporal feedback, motion vectors, and depth values as input, and show that a simple network can produce satisfactory results. Our architecture template, for which we provide code, introduces a method that adapts to different temporal subpixel offsets for accumulation without increasing the operation count. To this end, convolutional layers cycle through a set of different weights per temporal subpixel offset while their operations remain fixed. We analyze the effect of this method on image quality and present different tradeoffs for adapting the architecture. We show that our simple network performs remarkably better than variance clipping TAA, eliminating both flickering and ghosting without performing upsampling.
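The abstract's core mechanism is a convolution that keeps its operation count fixed per frame while selecting a different weight set depending on the current temporal subpixel (jitter) offset. The sketch below is not the authors' released code; it is a minimal PyTorch illustration of that idea, assuming a jitter sequence of a fixed length, with illustrative names such as CycledConv2d and num_offsets.

```python
# Minimal sketch of a weight-cycling convolution for TAA (illustrative, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CycledConv2d(nn.Module):
    """Conv2d that cycles through one weight set per temporal subpixel offset."""
    def __init__(self, in_ch, out_ch, kernel_size, num_offsets, padding=1):
        super().__init__()
        # One (weight, bias) pair per subpixel offset in the jitter sequence.
        self.weights = nn.Parameter(
            torch.randn(num_offsets, out_ch, in_ch, kernel_size, kernel_size) * 0.05)
        self.biases = nn.Parameter(torch.zeros(num_offsets, out_ch))
        self.padding = padding
        self.num_offsets = num_offsets

    def forward(self, x, offset_index):
        # The convolution cost is identical every frame; only the selected weights change.
        i = offset_index % self.num_offsets
        return F.conv2d(x, self.weights[i], self.biases[i], padding=self.padding)

if __name__ == "__main__":
    # Toy usage: color (3) + motion vectors (2) + depth (1) + recurrent feedback (3) = 9 input channels.
    conv = CycledConv2d(in_ch=9, out_ch=16, kernel_size=3, num_offsets=4)
    frame = torch.randn(1, 9, 128, 128)
    for frame_index in range(8):  # cycles through the 4 jitter offsets twice
        out = conv(frame, offset_index=frame_index)
    print(out.shape)  # torch.Size([1, 16, 128, 128])
```

A design note implied by the abstract: because only the weight selection changes between frames, such a layer can specialize to each jitter offset without adding any runtime operations compared to a standard convolution.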