Learning to Predict Image-based Rendering Artifacts with Respect to a Hidden Reference Image

Authors: Bemana, Mojtaba; Keinert, Joachim; Myszkowski, Karol; Bätz, Michel; Ziegler, Matthias; Seidel, Hans-Peter; Ritschel, Tobias
Editors: Lee, Jehee; Theobalt, Christian; Wetzstein, Gordon
Date: 2019-10-14
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.13862
Handle: https://diglib.eg.org:443/handle/10.1111/cgf13862
Pages: 579-589

Abstract: Image metrics predict the perceived per-pixel difference between a reference image and its degraded (e.g., re-rendered) version. In several important applications, the reference image is not available, so image metrics cannot be applied. We devise a neural network architecture and training procedure that predicts the MSE, SSIM, or VGG16 image difference from the distorted image alone, without observing the reference. This is enabled by two insights. The first is to inject sufficiently many undistorted natural image patches, which are available in arbitrary amounts and are known to have no perceivable difference to themselves; this avoids false positives. The second is to balance the learning, carefully ensuring that all image errors are equally likely; this avoids false negatives. Surprisingly, we observe that the resulting no-reference metric can, subjectively, even outperform the reference-based one, as it had to become robust against misalignments. We evaluate the effectiveness of our approach in an image-based rendering context, both quantitatively and qualitatively. Finally, we demonstrate two applications that reduce light field capture time and provide guidance for interactive depth adjustment.
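
To make the abstract's two insights concrete, here is a minimal sketch, assuming PyTorch, of the general idea: a fully convolutional network is trained to predict a per-pixel error map (a per-pixel MSE against a hidden reference) from the distorted image alone, with undistorted patches mixed into each batch under a zero-error target. All names here (NoRefMetricNet, make_batch, the layer sizes) are hypothetical illustrations, not the architecture or training code from the paper.

```python
# Minimal sketch of a no-reference error predictor (hypothetical, not the paper's code).
import torch
import torch.nn as nn

class NoRefMetricNet(nn.Module):
    """Fully convolutional net: distorted RGB patch -> per-pixel error map."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),  # predicted error map; no reference input
        )

    def forward(self, x):
        return self.body(x)

def per_pixel_mse(distorted, reference):
    """Ground-truth target used during training only; the reference stays hidden at test time."""
    return ((distorted - reference) ** 2).mean(dim=1, keepdim=True)

def make_batch(distorted, reference, clean):
    """Mix degraded patches with injected undistorted natural patches whose
    target is exactly zero -- the first insight, suppressing false positives."""
    inputs = torch.cat([distorted, clean], dim=0)
    targets = torch.cat(
        [per_pixel_mse(distorted, reference),
         torch.zeros(clean.shape[0], 1, *clean.shape[2:])],
        dim=0,
    )
    return inputs, targets

# Toy training step on random data; a real setup would also balance the
# batches so all error magnitudes are equally likely (the second insight).
net = NoRefMetricNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
ref = torch.rand(8, 3, 64, 64)                           # hidden references (training only)
dist = (ref + 0.1 * torch.randn_like(ref)).clamp(0, 1)   # degraded versions
clean = torch.rand(8, 3, 64, 64)                         # injected undistorted patches
x, y = make_batch(dist, ref, clean)
loss = nn.functional.mse_loss(net(x), y)
opt.zero_grad(); loss.backward(); opt.step()
print(f"training loss: {loss.item():.4f}")
```

The same recipe would apply to SSIM or VGG16 targets by swapping per_pixel_mse for the corresponding reference-based difference during training; at inference the network needs only the distorted image.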