Title: Robust Novel View Synthesis with Color Transform Module
Authors: Kim, Sang Min; Choi, Changwoon; Heo, Hyeongjun; Kim, Young Min
Editors: Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Date: 2023-10-09
Year: 2023
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.14931
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14931
Pages: 14

Abstract: Advances in Neural Radiance Fields (NeRF) and their variants have demonstrated remarkable capabilities in generating photo-realistic novel views from a small set of input images. While recent works propose various techniques and model architectures that enhance speed or reconstruction quality, little attention has been paid to exploring the RGB color space of the input images. In this paper, we propose a universal color transform module that maximally harnesses the captured evidence for the neural network at hand. The color transform module uses an encoder-decoder framework that maps the RGB color space into a new latent space, enhancing the expressiveness of the input domain. We attach the encoder and the decoder to the input and output of a NeRF model of choice, respectively, and jointly optimize them to maintain the cycle consistency of the proposed transform while minimizing the reconstruction error in the feature domain. Our comprehensive experiments demonstrate that the learned color space significantly improves reconstruction quality compared to the conventional RGB representation. The benefits are particularly pronounced in challenging scenarios such as low-light environments and scenes with low-textured regions. The proposed color transform pushes past the limitations of the input domain and offers a promising avenue for advancing the reconstruction capabilities of various neural representations. Source code is available at https://github.com/sangminkim-99/ColorTransformModule.

CCS Concepts: Computing methodologies -> Reconstruction; Rendering
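
The training setup described in the abstract can be sketched as follows. This is a minimal illustration of the stated idea, not the authors' implementation (see the linked repository for that): the layer widths, the latent dimension, the nerf(rays) rendering call, and the loss weight lambda_cyc are all assumptions introduced here for clarity.

    import torch
    import torch.nn as nn

    class ColorTransform(nn.Module):
        """Encoder-decoder pair mapping RGB to a learned latent color space
        and back. Layer widths and latent_dim are illustrative assumptions."""
        def __init__(self, latent_dim: int = 8):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, latent_dim))
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 3))

    def training_loss(ct: ColorTransform, nerf: nn.Module,
                      rays: torch.Tensor, rgb_gt: torch.Tensor,
                      lambda_cyc: float = 0.1) -> torch.Tensor:
        """Joint objective: feature-domain reconstruction + cycle consistency.
        nerf(rays) stands in for whichever NeRF variant is used; it is
        assumed to render latent_dim-channel features instead of RGB."""
        z_gt = ct.encoder(rgb_gt)             # encode supervision colors into the latent space
        z_pred = nerf(rays)                   # the NeRF renders directly in the latent space
        recon = ((z_pred - z_gt) ** 2).mean()              # reconstruction error in the feature domain
        cycle = ((ct.decoder(z_gt) - rgb_gt) ** 2).mean()  # decode(encode(rgb)) should recover rgb
        return recon + lambda_cyc * cycle

At inference time, under the same assumptions, the chosen NeRF would render latent features for a novel view and the decoder would map them back to RGB, matching the abstract's placement of the decoder at the model's output.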