Modelling Surround-aware Contrast Sensitivity for HDR Displays

Authors: Shinyoung Yi, Daniel S. Jeon, Ana Serrano, Se-Yoon Jeong, Hui-Yong Kim, Diego Gutierrez, Min H. Kim
Editors: Helwig Hauser and Pierre Alliez
Date: 2022-03-25
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.14439
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14439
Pages: 350-363
Keywords: computational photography; image and video processing; high dynamic range / tone mapping

Abstract: Despite advances in display technology, many existing applications rely on psychophysical datasets of human perception gathered using older, sometimes outdated displays. As a result, there exists an underlying assumption that such measurements carry over to the new viewing conditions of more modern technology. We have conducted a series of psychophysical experiments to explore contrast sensitivity using a state-of-the-art HDR display, taking into account not only the spatial frequency and luminance of the stimuli but also their surrounding luminance levels. From our data, we have derived a novel surround-aware contrast sensitivity function (CSF), which predicts human contrast sensitivity more accurately. We additionally provide a practical version that retains the benefits of our full model, while enabling easy backward compatibility and consistently producing good results across many existing applications that make use of CSF models. We show examples of effective HDR video compression using a transfer function derived from our CSF, tone mapping, and improved accuracy in visual difference prediction.
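
To make the notion of a contrast sensitivity function concrete, the sketch below evaluates the classic Mannos-Sakrison CSF, which predicts normalized sensitivity from spatial frequency alone. This is purely illustrative: it is a well-known earlier model, not the surround-aware CSF derived in the paper, whose functional form (including its luminance and surround-luminance dependence) is not given in this record.

```python
import math

def mannos_sakrison_csf(f_cpd: float) -> float:
    """Classic Mannos-Sakrison CSF: normalized contrast sensitivity as a
    function of spatial frequency in cycles per degree (cpd).

    Illustrative stand-in only -- NOT the surround-aware model of
    Yi et al. (2022), which additionally depends on stimulus luminance
    and surrounding luminance.
    """
    return 2.6 * (0.0192 + 0.114 * f_cpd) * math.exp(-((0.114 * f_cpd) ** 1.1))

# The CSF has a band-pass shape: sensitivity peaks in the mid spatial
# frequencies (around 8 cpd) and falls off toward both low and high ones.
for f in (1, 4, 8, 16, 32):
    print(f"{f:>2} cpd -> sensitivity {mannos_sakrison_csf(f):.3f}")
```

A surround-aware model like the paper's would extend such a function with stimulus luminance and surround luminance as additional arguments, which is what allows it to drive applications such as HDR transfer-function design and visual difference prediction.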