Deep Residual Combiner: A Learned Fusion of Spatial, Temporal, and Multiscale Correlated Pixel Estimates
Date: 2026
Journal Title: Computer Graphics Forum
Journal ISSN: 1467-8659
Publisher: The Eurographics Association and John Wiley & Sons Ltd.
Abstract
Correlation-based rendering techniques continue to advance, and efficiently exploiting correlations between pixel estimates has become increasingly important. The deep combiner framework [BHHM20] allows us to fuse independent and correlated pixel estimates but focuses solely on spatial correlations. We propose a generalization of the deep combiner framework, the deep residual combiner, that is designed to exploit correlations across spatial, temporal, and multiscale domains. The deep residual combiner enables robust cross-domain fusion, effectively reducing systematic artifacts and significantly enhancing temporal coherence, both of which are especially important in animation scenarios. We demonstrate the effectiveness of our proposed method through several practical applications, showcasing improvements in temporal stability, visual fidelity, and reduction of residual errors across diverse rendering scenarios compared to prior approaches.
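As background for why exploiting correlation between pixel estimates matters, the sketch below (not the paper's learned method, just a classical baseline) computes the minimum-variance unbiased linear combination of correlated estimates of the same pixel value. The weights depend on the full covariance matrix, and correlation between estimates limits the variance reduction achievable compared to independent samples:

```python
import numpy as np

def combine(estimates, cov):
    """Minimum-variance unbiased linear combination of correlated
    estimates of a common mean, with the variance of the result.
    Weights: w = cov^{-1} 1 / (1^T cov^{-1} 1)."""
    cov_inv = np.linalg.inv(cov)
    ones = np.ones(len(estimates))
    denom = ones @ cov_inv @ ones
    w = (cov_inv @ ones) / denom
    return w @ estimates, 1.0 / denom

# Two unit-variance estimates with correlation 0.5.
cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])
value, var = combine(np.array([0.9, 1.1]), cov)
# By symmetry the weights are equal, so value = 1.0; the combined
# variance is 0.75, not the 0.5 that independent estimates would give —
# positive correlation limits the gain, which is why a learned combiner
# that models cross-domain correlations can do better than naive averaging.
```

A learned combiner such as the one described above replaces these fixed covariance-derived weights with a network that infers the fusion from the estimates themselves.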
@article{10.1111:cgf.70358,
journal = {Computer Graphics Forum},
title = {{Deep Residual Combiner: A Learned Fusion of Spatial, Temporal, and Multiscale Correlated Pixel Estimates}},
author = {Zhou, Weijie and Hughes, Euan and Hachisuka, Toshiya},
year = {2026},
publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.70358}
}
