
dc.contributor.author: Hafner, David
dc.contributor.author: Weickert, Joachim
dc.contributor.editor: Chen, Min and Zhang, Hao (Richard)
dc.date.accessioned: 2016-03-01T14:13:09Z
dc.date.available: 2016-03-01T14:13:09Z
dc.date.issued: 2016
dc.identifier.uri: http://dx.doi.org/10.1111/cgf.12690
dc.description.abstract: In this paper, we present a general variational method for image fusion. In particular, we combine different images of the same subject to a single composite that offers optimal exposedness, saturation and local contrast. Previous research approaches this task by first pre‐computing application‐specific weights based on the input, and then combining these weights with the images to the final composite later on. In contrast, we design our model assumptions directly on the fusion result. To this end, we formulate the output image as a convex combination of the input and incorporate concepts from perceptually inspired contrast enhancement such as a local and non‐linear response. This output‐driven approach is the key to the versatility of our general image fusion model. In this regard, we demonstrate the performance of our fusion scheme with several applications such as exposure fusion, multispectral imaging and decolourization. For all application domains, we conduct thorough validations that illustrate the improvements compared to state‐of‐the‐art approaches that are tailored to the individual tasks.
dc.publisher: Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: image fusion
dc.subject: multispectral imaging
dc.subject: exposure fusion
dc.subject: decolourization
dc.subject: contrast
dc.subject: variational
dc.subject: I.3.3 [Computer Graphics]: Picture/Image Generation - Display algorithms; I.4.3 [Image Processing and Computer Vision]: Enhancement - Grayscale manipulation; I.4.8 [Image Processing and Computer Vision]: Scene Analysis - Color
dc.subject: Photometry
dc.subject: Sensor Fusion
dc.title: Variational Image Fusion with Optimal Local Contrast
dc.description.seriesinformation: Computer Graphics Forum
dc.description.sectionheaders: Articles
dc.description.volume: 35
dc.description.number: 1
dc.identifier.doi: 10.1111/cgf.12690
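
The abstract above describes an output-driven formulation: the fused image is modelled as a convex combination of the inputs, and the quality assumptions (exposedness, saturation, local contrast) are imposed directly on that composite. The following is a minimal sketch of this idea only, assuming n input images f_1, ..., f_n, pixelwise weights w_i, and a generic energy E introduced purely for illustration; the paper's actual energy terms and its perceptually inspired non-linear contrast response are not reproduced here.

% Minimal sketch: fused image u as a convex combination of the inputs,
% with weights chosen by minimising a variational energy E evaluated on the result.
\begin{align}
  u(x) &= \sum_{i=1}^{n} w_i(x)\, f_i(x),
  \qquad w_i(x) \ge 0, \quad \sum_{i=1}^{n} w_i(x) = 1, \\
  \{w_i\} &= \arg\min_{\{w_i\}} \; E(u).
\end{align}

The concrete form of E, which according to the abstract rewards well-exposed, well-saturated and locally contrasted composites, is defined in the paper itself (DOI 10.1111/cgf.12690 above).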

