A Controllable Appearance Representation for Flexible Transfer and Editing
dc.contributor.author | Jimenez-Navarro, Santiago | en_US |
dc.contributor.author | Guerrero-Viu, Julia | en_US |
dc.contributor.author | Masia, Belen | en_US |
dc.contributor.editor | Wang, Beibei | en_US |
dc.contributor.editor | Wilkie, Alexander | en_US |
dc.date.accessioned | 2025-06-20T07:49:25Z | |
dc.date.available | 2025-06-20T07:49:25Z | |
dc.date.issued | 2025 | |
dc.description.abstract | We present a method that computes an interpretable representation of material appearance within a highly compact, disentangled latent space. This representation is learned in a self-supervised fashion using a VAE-based model. We train our model with a carefully designed unlabeled dataset, avoiding possible biases induced by human-generated labels. Our model demonstrates strong disentanglement and interpretability by effectively encoding material appearance and illumination, despite the absence of explicit supervision. To showcase the capabilities of such a representation, we leverage it for two proof-of-concept applications: image-based appearance transfer and editing. Our representation is used to condition a diffusion pipeline that transfers the appearance of one or more images onto a target geometry, and allows the user to further edit the resulting appearance. This approach offers fine-grained control over the generated results: thanks to the well-structured compact latent space, users can intuitively manipulate attributes such as hue or glossiness in image space to achieve the desired final appearance. | en_US |
dc.description.sectionheaders | Appearance Modelling | |
dc.description.seriesinformation | Eurographics Symposium on Rendering | |
dc.identifier.doi | 10.2312/sr.20251187 | |
dc.identifier.isbn | 978-3-03868-292-9 | |
dc.identifier.issn | 1727-3463 | |
dc.identifier.pages | 13 pages | |
dc.identifier.uri | https://doi.org/10.2312/sr.20251187 | |
dc.identifier.uri | https://diglib.eg.org/handle/10.2312/sr20251187 | |
dc.publisher | The Eurographics Association | en_US |
dc.rights | Attribution 4.0 International License | |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | |
dc.subject | CCS Concepts: Computing methodologies -> Appearance and texture representations; Latent representations; material appearance; self-supervised learning | |
dc.subject | Computing methodologies | |
dc.subject | Appearance and texture representations | |
dc.subject | Latent representations | |
dc.subject | material appearance | |
dc.subject | self-supervised learning
dc.title | A Controllable Appearance Representation for Flexible Transfer and Editing | en_US |
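The abstract describes a VAE-based model that encodes material appearance into a highly compact, disentangled latent code, which is then used to condition a diffusion pipeline for appearance transfer and editing. The record does not specify the architecture, so the following is a minimal illustrative sketch in PyTorch: the class name, layer sizes, input resolution, and latent dimensionality are all assumptions, not the paper's actual model, and the diffusion-conditioning step is only indicated in a comment.

# Illustrative sketch only: a small convolutional VAE encoder that maps a
# material image to a compact latent code, of the kind the abstract describes.
# All names, layer sizes, and the latent dimensionality (8) are assumptions.
import torch
import torch.nn as nn

class CompactAppearanceEncoder(nn.Module):
    def __init__(self, latent_dim: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # 16x16 -> 8x8
            nn.ReLU(),
            nn.Flatten(),
        )
        self.to_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.to_logvar = nn.Linear(128 * 8 * 8, latent_dim)

    def forward(self, image: torch.Tensor):
        h = self.features(image)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z ~ N(mu, sigma^2) so the encoder
        # stays differentiable during training.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

# Usage: encode a (hypothetical) 64x64 material image. The resulting compact
# code z is the kind of signal that could condition a downstream diffusion
# model, e.g. via cross-attention; that step is not sketched here.
encoder = CompactAppearanceEncoder(latent_dim=8)
z, mu, logvar = encoder(torch.rand(1, 3, 64, 64))
print(z.shape)  # torch.Size([1, 8])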