CGVQM+D: Computer Graphics Video Quality Metric and Dataset

Authors: Jindal, Akshay; Sadaka, Nabil; Thomas, Manu Mathew; Sochenov, Anton; Kaplanyan, Anton
Editors: Knoll, Aaron; Peters, Christoph
Date: 2025-06-20
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.70221
URI: https://diglib.eg.org/handle/10.1111/cgf70221
Pages: 16
License: Attribution 4.0 International License (CC BY 4.0)
CCS Concepts: Computing methodologies → Perception

Abstract: While existing video and image quality datasets have extensively studied natural videos and traditional distortions, the perception of synthetic content and modern rendering artifacts remains underexplored. We present a novel video quality dataset focused on distortions introduced by advanced rendering techniques, including neural supersampling, novel-view synthesis, path tracing, neural denoising, frame interpolation, and variable rate shading. Our evaluations show that existing full-reference quality metrics perform sub-optimally on these distortions, with a maximum Pearson correlation of 0.78. Additionally, we find that the feature space of pre-trained 3D CNNs aligns strongly with human perception of visual quality. We propose CGVQM, a full-reference video quality metric that significantly outperforms existing metrics while generating both per-pixel error maps and global quality scores. Our dataset and metric implementation are available at https://github.com/IntelLabs/CGVQM.
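To make the abstract's core idea concrete, below is a minimal sketch of an LPIPS-style full-reference video metric built on the feature space of a pre-trained 3D CNN. It is not the CGVQM implementation (see the GitHub repository for that): the choice of torchvision's Kinetics-pretrained r3d_18, the equal weighting of stages, and the function name `video_quality` are all illustrative assumptions.

```python
# Sketch: compare reference and distorted videos in the feature space of a
# pre-trained 3D CNN, yielding a per-pixel error map and a global score.
# Backbone, stage selection, and unit stage weights are assumptions; the
# actual CGVQM metric is at https://github.com/IntelLabs/CGVQM.
import torch
import torch.nn.functional as F
from torchvision.models.video import r3d_18, R3D_18_Weights


def video_quality(reference: torch.Tensor, distorted: torch.Tensor):
    """Compare two videos of shape (1, 3, T, H, W) with values in [0, 1].

    Returns a global error score (higher = more distorted) and a
    per-pixel error map of shape (1, T, H, W).
    """
    model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1).eval()
    stages = [model.stem, model.layer1, model.layer2, model.layer3, model.layer4]

    # Per-channel normalization expected by the Kinetics-400 weights.
    mean = torch.tensor([0.43216, 0.394666, 0.37645]).view(1, 3, 1, 1, 1)
    std = torch.tensor([0.22803, 0.22145, 0.216989]).view(1, 3, 1, 1, 1)
    feat_r = (reference - mean) / std
    feat_d = (distorted - mean) / std

    _, _, t, h, w = reference.shape
    error_map = torch.zeros(1, 1, t, h, w)
    with torch.no_grad():
        for stage in stages:
            feat_r, feat_d = stage(feat_r), stage(feat_d)
            # LPIPS-style distance: unit-normalize along channels, square the
            # difference, average over channels (each stage weighted equally
            # here, which is an assumption of this sketch).
            diff = (F.normalize(feat_r, dim=1) - F.normalize(feat_d, dim=1)) ** 2
            diff = diff.mean(dim=1, keepdim=True)
            # Upsample each stage's error back to full resolution and accumulate.
            error_map += F.interpolate(
                diff, size=(t, h, w), mode="trilinear", align_corners=False
            )
    return error_map.mean().item(), error_map.squeeze(1)
```

Usage would look like `score, emap = video_quality(ref, dist)` with both tensors shaped (1, 3, T, H, W); `emap` can be visualized per frame as the localized error map, while `score` serves as the global quality prediction.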