Title: Intuitive Editing of Visual Appearance from Real-World Datasets
Authors: Julio Marco, Ana Serrano, Adrian Jarabo, Belen Masia, Diego Gutierrez
Editors: Reinhard Klein and Holly Rushmeier
Date: 2017-09-21
ISBN: 978-3-03868-035-2
ISSN: 2309-5059
DOI: https://doi.org/10.2312/mam.20171324
Handle: https://diglib.eg.org:443/handle/10.2312/mam20171324
Pages: 9-10

Abstract:
Computer-generated imagery is ubiquitous, spanning fields such as games and movies, architecture, engineering, and virtual prototyping, while also helping create novel ones such as computational materials. With the increase in computational power and the improvement of acquisition techniques, there has been a paradigm shift in the field towards data-driven techniques, which has yielded an unprecedented level of realism in visual appearance. Unfortunately, this leads to a series of problems. First, there is a disconnect between the mathematical representation of the data and any meaningful parameters that humans understand; the captured data is machine-friendly, but not human-friendly. Second, the many different acquisition systems lead to heterogeneous formats and very large datasets. Third, real-world appearance functions are usually nonlinear and high-dimensional. As a result, visual appearance datasets are increasingly unfit for editing operations, which limits the creative process for scientists, engineers, artists, and practitioners in general. There is an immense gap between the complexity, realism, and richness of the captured data and the flexibility to edit such data. The current research path leads to a fragmented space of isolated solutions, each tailored to a particular dataset and problem. To define intuitive and predictable editing spaces, algorithms, and workflows, we must investigate at the theoretical, algorithmic, and application levels, putting the user at the core and learning key relevant appearance features in terms humans understand.