Show simple item record

dc.contributor.author  Wang, Chao  en_US
dc.contributor.author  Chen, Bin  en_US
dc.contributor.author  Seidel, Hans-Peter  en_US
dc.contributor.author  Myszkowski, Karol  en_US
dc.contributor.author  Serrano, Ana  en_US
dc.contributor.editor  Chaine, Raphaëlle  en_US
dc.contributor.editor  Kim, Min H.  en_US
dc.date.accessioned  2022-04-22T06:26:54Z
dc.date.available  2022-04-22T06:26:54Z
dc.date.issued  2022
dc.identifier.issn  1467-8659
dc.identifier.uri  https://doi.org/10.1111/cgf.14459
dc.identifier.uri  https://diglib.eg.org:443/handle/10.1111/cgf14459
dc.description.abstract  High Dynamic Range (HDR) content is becoming ubiquitous due to the rapid development of capture technologies. Nevertheless, the dynamic range of common display devices is still limited, so tone mapping (TM) remains a key challenge for image visualization. Recent work has demonstrated that neural networks can achieve remarkable performance in this task compared to traditional methods; however, the quality of the results of these learning-based methods is limited by the training data. Most existing works use as a training set a curated selection of best-performing results from existing traditional tone mapping operators (often guided by a quality metric), so the quality of newly generated results is fundamentally limited by the performance of such operators. This quality may be further limited by the pool of HDR content used for training. In this work we propose a learning-based self-supervised tone mapping operator that is trained at test time specifically for each HDR image and does not need any data labeling. The key novelty of our approach is a carefully designed loss function built upon fundamental knowledge of contrast perception that allows for directly comparing the content in the HDR and tone-mapped images. We achieve this goal by reformulating classic VGG feature maps into feature contrast maps that normalize local feature differences by their average magnitude in a local neighborhood, allowing our loss to account for contrast masking effects. We perform extensive ablation studies and parameter exploration and demonstrate that our solution outperforms existing approaches with a single set of fixed parameters, as confirmed by both objective and subjective metrics.  en_US
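The abstract's central idea — turning a feature map into a contrast map by normalizing local feature differences by the average feature magnitude in a local neighborhood — can be illustrated with a minimal NumPy sketch. The kernel size, edge padding, and epsilon below are assumptions for illustration; the paper's actual formulation over multi-channel VGG activations may differ.

```python
import numpy as np

def local_mean(x, k=5):
    """Box-filter a 2D array with a k-by-k window (edge padding)."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def feature_contrast_map(feat, k=5, eps=1e-6):
    """Hypothetical feature contrast map for one feature channel:
    the local feature difference (feature minus its neighborhood mean)
    divided by the average feature magnitude in the same neighborhood,
    so strong local activity masks small differences."""
    mu = local_mean(feat, k)            # local neighborhood mean
    diff = feat - mu                    # local feature difference
    mag = local_mean(np.abs(feat), k)   # average local magnitude
    return diff / (mag + eps)
```

On a constant feature map the local difference is zero everywhere, so the contrast map vanishes — the normalization responds only to local variation, not to absolute feature strength.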
dc.publisher  The Eurographics Association and John Wiley & Sons Ltd.  en_US
dc.subject  CCS Concepts: Computing methodologies --> Computational photography; Neural networks; Image processing
dc.subject  Computing methodologies
dc.subject  Computational photography
dc.subject  Neural networks
dc.subject  Image processing
dc.title  Learning a Self-supervised Tone Mapping Operator via Feature Contrast Masking Loss  en_US
dc.description.seriesinformation  Computer Graphics Forum
dc.description.sectionheaders  Computational Photography
dc.description.volume  41
dc.description.number  2
dc.identifier.doi  10.1111/cgf.14459
dc.identifier.pages  71-84
dc.identifier.pages  14 pages


Files in this item


This item appears in the following Collection(s)
