Zeng, Zheng; Liu, Shiqiu; Yang, Jinglei; Wang, Lu; Yan, Ling-Qi
Temporally Reliable Motion Vectors for Real-time Ray Tracing
Editors: Mitra, Niloy and Viola, Ivan
Issued: 2021-04-09
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.142616
URI: https://diglib.eg.org:443/handle/10.1111/cgf142616
Pages: 79-90
Keywords: Computing methodologies; Rendering; Ray tracing

Abstract: Real-time ray tracing (RTRT) is being applied pervasively. The key to RTRT is a reliable denoising scheme that reconstructs clean images from significantly undersampled noisy inputs, usually at 1 sample per pixel as limited by current hardware's computing power. State-of-the-art reconstruction methods all rely on temporal filtering to find correspondences of current pixels in the previous frame, described by per-pixel screen-space motion vectors. While these approaches are demonstrably powerful, they share a common issue: the temporal information cannot be used when the motion vectors are invalid, i.e. when temporal correspondences are not obviously available or do not exist in theory. We introduce temporally reliable motion vectors that aim at a deeper exploration of temporal coherence, especially for the generally believed difficult cases of shadows, glossy reflections and occlusions, with the key idea being to detect and track the cause of each effect. We show that our temporally reliable motion vectors produce significantly better temporal results on a variety of dynamic scenes compared to state-of-the-art methods, with negligible performance overhead.
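The abstract's baseline, motion-vector-driven temporal filtering, can be sketched as follows. This is a minimal illustration of the standard reproject-and-blend scheme the abstract describes, not the paper's method; the function name, array shapes, and the validity test are assumptions for illustration only.

```python
import numpy as np

def temporal_accumulate(curr, history, motion, alpha=0.2):
    """Standard temporal filtering sketch (illustrative, not the paper's method).

    curr:    (H, W) noisy current-frame values (e.g. 1 spp radiance)
    history: (H, W) accumulated values from the previous frame
    motion:  (H, W, 2) per-pixel screen-space motion vectors (dy, dx),
             pointing from each current pixel to its previous-frame position
    alpha:   blend weight toward the current frame
    """
    H, W = curr.shape
    ys, xs = np.mgrid[0:H, 0:W]
    py = np.rint(ys + motion[..., 0]).astype(int)
    px = np.rint(xs + motion[..., 1]).astype(int)
    # A motion vector is usable only if it lands inside the previous frame;
    # where it does not, fall back to the noisy current sample. (Real
    # denoisers also reject mismatches via depth/normal/ID comparisons.)
    valid = (py >= 0) & (py < H) & (px >= 0) & (px < W)
    reproj = np.where(valid,
                      history[np.clip(py, 0, H - 1), np.clip(px, 0, W - 1)],
                      curr)
    return alpha * curr + (1.0 - alpha) * reproj
```

Pixels whose vectors fail the validity test receive no temporal reuse at all, which is exactly the limitation the paper targets for shadows, glossy reflections, and occlusions.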