Title: A Generative Framework for Image-based Editing of Material Appearance using Perceptual Attributes
Authors: Delanoy, J.; Lagunas, M.; Condor, J.; Gutierrez, D.; Masia, B.
Editors: Hauser, Helwig; Alliez, Pierre
Date issued: 2022-03-25
Year: 2022
ISSN: 1467-8659
DOI: 10.1111/cgf.14446 (https://doi.org/10.1111/cgf.14446)
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14446
Pages: 453-464
Keywords: image processing; image and video processing

Abstract: Single-image appearance editing is a challenging task, traditionally requiring the estimation of additional scene properties such as geometry or illumination. Moreover, the exact interaction of light, shape, and material reflectance that elicits a given perceptual impression is still not well understood. We present an image-based editing method that modifies the material appearance of an object by increasing or decreasing high-level perceptual attributes, using a single image as input. Our framework relies on a two-step generative network, where the first step drives the change in appearance and the second produces an image with high-frequency details. For training, we augment an existing material appearance dataset with perceptual judgements of high-level attributes, collected through crowd-sourced experiments, and build upon training strategies that circumvent the cumbersome need for original-edited image pairs. We demonstrate the editing capabilities of our framework on a variety of inputs, both synthetic and real, using two common perceptual attributes, and validate the perception of appearance in our edited images through a user study.
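
To give the two-step generative network from the abstract a concrete shape, the sketch below shows one plausible decomposition: a first network produces a coarse edit conditioned on a change in a perceptual attribute, and a second network restores high-frequency detail. All class names, layer choices, and the channel-concatenation conditioning scheme are hypothetical assumptions for illustration, not the authors' published architecture.

```python
import torch
import torch.nn as nn

class AttributeEditor(nn.Module):
    """Step 1 (hypothetical): coarse edit conditioned on an attribute change."""
    def __init__(self, channels=64):
        super().__init__()
        # Takes the RGB image plus the scalar attribute delta broadcast as an
        # extra channel, and predicts a coarsely edited image.
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, image, attr_delta):
        b, _, h, w = image.shape
        cond = attr_delta.view(b, 1, 1, 1).expand(b, 1, h, w)
        return self.net(torch.cat([image, cond], dim=1))

class DetailGenerator(nn.Module):
    """Step 2 (hypothetical): adds high-frequency detail to the coarse edit."""
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, coarse):
        # Residual refinement: predicted detail is added back onto the
        # coarse result, so step 2 only has to model the fine structure.
        return coarse + self.net(coarse)

# Usage: increase one perceptual attribute (e.g. by +0.5) on a single image.
image = torch.rand(1, 3, 256, 256)
delta = torch.tensor([0.5])
coarse = AttributeEditor()(image, delta)
edited = DetailGenerator()(coarse)
```

The residual formulation in step 2 is one common design choice for detail synthesis; the paper's actual networks, losses, and training setup (including the pair-free training strategy the abstract mentions) are described in the full text.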