Computer Graphics Forum, Volume 35, Issue 1


Issue Information

Issue Information ‐ TOC

Articles

A Survey of Geometric Analysis in Cultural Heritage

Pintus, Ruggero
Pal, Kazim
Yang, Ying
Weyrich, Tim
Gobbetti, Enrico
Rushmeier, Holly
Articles

Colour Mapping: A Review of Recent Methods, Extensions and Applications

Faridul, H. Sheikh
Pouli, T.
Chamaret, C.
Stauder, J.
Reinhard, E.
Kuzovkin, D.
Tremeau, A.
Issue Information

Issue Information

Articles

Robust Cardiac Function Assessment in 4D PC‐MRI Data of the Aorta and Pulmonary Artery

Köhler, Benjamin
Preim, Uta
Grothoff, Matthias
Gutberlet, Matthias
Fischbach, Katharina
Preim, Bernhard
Editorial

Editorial

Chen, Min
Zhang, Hao (Richard)
Articles

Graph‐Based Wavelet Representation of Multi‐Variate Terrain Data

Cioaca, Teodor
Dumitrescu, Bogdan
Stupariu, Mihai‐Sorin
Articles

Real‐Time Rendering Techniques with Hardware Tessellation

Nießner, M.
Keinert, B.
Fisher, M.
Stamminger, M.
Loop, C.
Schäfer, H.
Articles

Fast ANN for High‐Quality Collaborative Filtering

Tsai, Yun‐Ta
Steinberger, Markus
Pająk, Dawid
Pulli, Kari
Articles

A Hierarchical Approach for Regular Centroidal Voronoi Tessellations

Wang, L.
Hétroy‐Wheeler, F.
Boyer, E.
Articles

Variational Image Fusion with Optimal Local Contrast

Hafner, David
Weickert, Joachim
Articles

Anisotropic Strain Limiting for Quadrilateral and Triangular Cloth Meshes

Ma, Guanghui
Ye, Juntao
Li, Jituo
Zhang, Xiaopeng
Articles

Mesh Sequence Morphing

Chen, Xue
Feng, Jieqing
Bechmann, Dominique
Articles

Practical Low‐Cost Recovery of Spectral Power Distributions

Alvarez‐Cortes, Sara
Kunkel, Timo
Masia, Belen
Articles

Mobile Surface Reflectometry

Riviere, J.
Peers, P.
Ghosh, A.
Articles

Environmental Objects for Authoring Procedural Scenes

Grosbellet, Francois
Peytavie, Adrien
Guérin, Éric
Galin, Éric
Mérillou, Stéphane
Benes, Bedrich
Articles

Full 3D Plant Reconstruction via Intrusive Acquisition

Yin, Kangxue
Huang, Hui
Long, Pinxin
Gaissinski, Alexei
Gong, Minglun
Sharf, Andrei
Articles

Autocorrelation Descriptor for Efficient Co‐Alignment of 3D Shape Collections

Averkiou, Melinos
Kim, Vladimir G.
Mitra, Niloy J.
Articles

Continuity and Interpolation Techniques for Computer Graphics

Gonzalez, F.
Patow, G.
Report

Lauren

Articles

State of the Art in Artistic Editing of Appearance, Lighting and Material

Schmidt, Thorsten‐Walther
Pellacini, Fabio
Nowrouzezahrai, Derek
Jarosz, Wojciech
Dachsbacher, Carsten
Articles

Planar Shape Detection and Regularization in Tandem

Oesau, Sven
Lafarge, Florent
Alliez, Pierre
Articles

The State‐of‐the‐Art of Set Visualization

Alsallakh, Bilal
Micallef, Luana
Aigner, Wolfgang
Hauser, Helwig
Miksch, Silvia
Rodgers, Peter
Articles

Projective Blue‐Noise Sampling

Reinert, Bernhard
Ritschel, Tobias
Seidel, Hans‐Peter
Georgiev, Iliyan


BibTeX (35-Issue 1)
                
@article{10.1111:cgf.12859,
  journal = {Computer Graphics Forum},
  title = {{Issue Information - TOC}},
  author = {},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12859}
}

@article{10.1111:cgf.12668,
  journal = {Computer Graphics Forum},
  title = {{A Survey of Geometric Analysis in Cultural Heritage}},
  author = {Pintus, Ruggero and Pal, Kazim and Yang, Ying and Weyrich, Tim and Gobbetti, Enrico and Rushmeier, Holly},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12668}
}

@article{10.1111:cgf.12671,
  journal = {Computer Graphics Forum},
  title = {{Colour Mapping: A Review of Recent Methods, Extensions and Applications}},
  author = {Faridul, H. Sheikh and Pouli, T. and Chamaret, C. and Stauder, J. and Reinhard, E. and Kuzovkin, D. and Tremeau, A.},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12671}
}

@article{10.1111:cgf.12858,
  journal = {Computer Graphics Forum},
  title = {{Issue Information}},
  author = {},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12858}
}

@article{10.1111:cgf.12669,
  journal = {Computer Graphics Forum},
  title = {{Robust Cardiac Function Assessment in 4D PC-MRI Data of the Aorta and Pulmonary Artery}},
  author = {Köhler, Benjamin and Preim, Uta and Grothoff, Matthias and Gutberlet, Matthias and Fischbach, Katharina and Preim, Bernhard},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12669}
}

@article{10.1111:cgf.12856,
  journal = {Computer Graphics Forum},
  title = {{Editorial}},
  author = {Chen, Min and Zhang, Hao (Richard)},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12856}
}

@article{10.1111:cgf.12670,
  journal = {Computer Graphics Forum},
  title = {{Graph-Based Wavelet Representation of Multi-Variate Terrain Data}},
  author = {Cioaca, Teodor and Dumitrescu, Bogdan and Stupariu, Mihai-Sorin},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12670}
}

@article{10.1111:cgf.12714,
  journal = {Computer Graphics Forum},
  title = {{Real-Time Rendering Techniques with Hardware Tessellation}},
  author = {Nießner, M. and Keinert, B. and Fisher, M. and Stamminger, M. and Loop, C. and Schäfer, H.},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12714}
}

@article{10.1111:cgf.12715,
  journal = {Computer Graphics Forum},
  title = {{Fast ANN for High-Quality Collaborative Filtering}},
  author = {Tsai, Yun-Ta and Steinberger, Markus and Pająk, Dawid and Pulli, Kari},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12715}
}

@article{10.1111:cgf.12716,
  journal = {Computer Graphics Forum},
  title = {{A Hierarchical Approach for Regular Centroidal Voronoi Tessellations}},
  author = {Wang, L. and Hétroy-Wheeler, F. and Boyer, E.},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12716}
}

@article{10.1111:cgf.12690,
  journal = {Computer Graphics Forum},
  title = {{Variational Image Fusion with Optimal Local Contrast}},
  author = {Hafner, David and Weickert, Joachim},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12690}
}

@article{10.1111:cgf.12689,
  journal = {Computer Graphics Forum},
  title = {{Anisotropic Strain Limiting for Quadrilateral and Triangular Cloth Meshes}},
  author = {Ma, Guanghui and Ye, Juntao and Li, Jituo and Zhang, Xiaopeng},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12689}
}

@article{10.1111:cgf.12718,
  journal = {Computer Graphics Forum},
  title = {{Mesh Sequence Morphing}},
  author = {Chen, Xue and Feng, Jieqing and Bechmann, Dominique},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12718}
}

@article{10.1111:cgf.12717,
  journal = {Computer Graphics Forum},
  title = {{Practical Low-Cost Recovery of Spectral Power Distributions}},
  author = {Alvarez-Cortes, Sara and Kunkel, Timo and Masia, Belen},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12717}
}

@article{10.1111:cgf.12719,
  journal = {Computer Graphics Forum},
  title = {{Mobile Surface Reflectometry}},
  author = {Riviere, J. and Peers, P. and Ghosh, A.},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12719}
}

@article{10.1111:cgf.12726,
  journal = {Computer Graphics Forum},
  title = {{Environmental Objects for Authoring Procedural Scenes}},
  author = {Grosbellet, Francois and Peytavie, Adrien and Guérin, Éric and Galin, Éric and Mérillou, Stéphane and Benes, Bedrich},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12726}
}

@article{10.1111:cgf.12724,
  journal = {Computer Graphics Forum},
  title = {{Full 3D Plant Reconstruction via Intrusive Acquisition}},
  author = {Yin, Kangxue and Huang, Hui and Long, Pinxin and Gaissinski, Alexei and Gong, Minglun and Sharf, Andrei},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12724}
}

@article{10.1111:cgf.12723,
  journal = {Computer Graphics Forum},
  title = {{Autocorrelation Descriptor for Efficient Co-Alignment of 3D Shape Collections}},
  author = {Averkiou, Melinos and Kim, Vladimir G. and Mitra, Niloy J.},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12723}
}

@article{10.1111:cgf.12727,
  journal = {Computer Graphics Forum},
  title = {{Continuity and Interpolation Techniques for Computer Graphics}},
  author = {Gonzalez, F. and Patow, G.},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12727}
}

@article{10.1111:cgf.12857,
  journal = {Computer Graphics Forum},
  title = {{Lauren}},
  author = {},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12857}
}

@article{10.1111:cgf.12721,
  journal = {Computer Graphics Forum},
  title = {{State of the Art in Artistic Editing of Appearance, Lighting and Material}},
  author = {Schmidt, Thorsten-Walther and Pellacini, Fabio and Nowrouzezahrai, Derek and Jarosz, Wojciech and Dachsbacher, Carsten},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12721}
}

@article{10.1111:cgf.12720,
  journal = {Computer Graphics Forum},
  title = {{Planar Shape Detection and Regularization in Tandem}},
  author = {Oesau, Sven and Lafarge, Florent and Alliez, Pierre},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12720}
}

@article{10.1111:cgf.12722,
  journal = {Computer Graphics Forum},
  title = {{The State-of-the-Art of Set Visualization}},
  author = {Alsallakh, Bilal and Micallef, Luana and Aigner, Wolfgang and Hauser, Helwig and Miksch, Silvia and Rodgers, Peter},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12722}
}

@article{10.1111:cgf.12725,
  journal = {Computer Graphics Forum},
  title = {{Projective Blue-Noise Sampling}},
  author = {Reinert, Bernhard and Ritschel, Tobias and Seidel, Hans-Peter and Georgiev, Iliyan},
  year = {2016},
  publisher = {Copyright © 2016 The Eurographics Association and John Wiley \& Sons Ltd.},
  DOI = {10.1111/cgf.12725}
}


Recent Submissions

  • Item
    Issue Information ‐ TOC
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Chen, Min and Zhang, Hao (Richard)
  • Item
    A Survey of Geometric Analysis in Cultural Heritage
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Pintus, Ruggero; Pal, Kazim; Yang, Ying; Weyrich, Tim; Gobbetti, Enrico; Rushmeier, Holly; Chen, Min and Zhang, Hao (Richard)
    We present a review of recent techniques for performing geometric analysis in cultural heritage (CH) applications. The survey is aimed at researchers in the areas of computer graphics, computer vision and CH computing, as well as to scholars and practitioners in the CH field. The problems considered include shape perception enhancement, restoration and preservation support, monitoring over time, object interpretation and collection analysis. All of these problems typically rely on an understanding of the structure of the shapes in question at both a local and global level. In this survey, we discuss the different problem forms and review the main solution methods, aided by classification criteria based on the geometric scale at which the analysis is performed and the cardinality of the relationships among object parts exploited during the analysis. We finalize the report by discussing open problems and future perspectives.
  • Item
    Colour Mapping: A Review of Recent Methods, Extensions and Applications
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Faridul, H. Sheikh; Pouli, T.; Chamaret, C.; Stauder, J.; Reinhard, E.; Kuzovkin, D.; Tremeau, A.; Chen, Min and Zhang, Hao (Richard)
    The objective of colour mapping or colour transfer methods is to recolour a given image or video by deriving a mapping between that image and another image serving as a reference. These methods have received considerable attention in recent years, both in academic literature and industrial applications. Methods for recolouring images have often appeared under the labels of colour correction, colour transfer or colour balancing, to name a few, but their goal is always the same: mapping the colours of one image to another. In this paper, we present a comprehensive overview of these methods and offer a classification of current solutions depending not only on their algorithmic formulation but also their range of applications. We also provide a new dataset and a novel evaluation technique called ‘evaluation by colour mapping roundtrip’. We discuss the relative merit of each class of techniques through examples and show how colour mapping solutions have been applied to a diverse range of problems.
  • Item
    Issue Information
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Chen, Min and Zhang, Hao (Richard)
  • Item
    Robust Cardiac Function Assessment in 4D PC‐MRI Data of the Aorta and Pulmonary Artery
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Köhler, Benjamin; Preim, Uta; Grothoff, Matthias; Gutberlet, Matthias; Fischbach, Katharina; Preim, Bernhard; Chen, Min and Zhang, Hao (Richard)
    Four‐dimensional phase‐contrast magnetic resonance imaging (4D PC‐MRI) allows the non‐invasive acquisition of time‐resolved, 3D blood flow information. Stroke volumes (SVs) and regurgitation fractions (RFs) are two of the main measures to assess the cardiac function and severity of valvular pathologies. The flow rates in forward and backward direction through a plane above the aortic or pulmonary valve are required for their quantification. Unfortunately, the calculations are highly sensitive towards the plane's angulation since orthogonally passing flow is considered. This often leads to physiologically implausible results. In this work, a robust quantification method is introduced to overcome this problem. Collaborating radiologists and cardiologists were carefully observed while estimating SVs and RFs in various healthy volunteer and patient 4D PC‐MRI data sets with conventional quantification methods, that is, using a single plane above the valve that is freely movable along the centerline. By default it is aligned perpendicular to the vessel's centerline, but free angulation (rotation) is possible. This facilitated the automation of their approach which, in turn, allows to derive statistical information about the plane angulation sensitivity. Moreover, the experts expect a continuous decrease of the blood flow volume along the vessel course. Conventional methods are often unable to produce this behaviour. Thus, we present a procedure to fit a monotonous function that ensures such physiologically plausible results. In addition, this technique was adapted for the usage in branching vessels such as the pulmonary artery. The performed informal evaluation shows the capability of our method to support diagnosis; a parameter evaluation confirms the robustness. Vortex flow was identified as one of the main causes for quantification uncertainties.
  • Item
    Editorial
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Chen, Min; Zhang, Hao (Richard); Chen, Min and Zhang, Hao (Richard)
  • Item
    Graph‐Based Wavelet Representation of Multi‐Variate Terrain Data
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Cioaca, Teodor; Dumitrescu, Bogdan; Stupariu, Mihai‐Sorin; Chen, Min and Zhang, Hao (Richard)
    Terrain data can be processed from the double perspective of computer graphics and graph theory. We propose a hybrid method that uses geometrical and vertex attribute information to construct a weighted graph reflecting the variability of the vertex data. As a planar graph, a generic terrain data set is subjected to a geometry‐sensitive vertex partitioning procedure. Through the use of a combined thin‐plate energy and multi‐dimensional quadric metric error feature estimation heuristic, we construct ‘even’ and ‘odd’ node subsets. Using an invertible lifting scheme, adapted from generic weighted graphs, detail vectors are extracted and used to recover or filter the node information. The design of the prediction and update filters improves the root mean squared error of the signal over general graph‐based approaches. As a key property of this design, preserving the mean of the graph signal becomes essential for decreasing the error measure and conserving the salient shape features.
  • Item
    Real‐Time Rendering Techniques with Hardware Tessellation
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Nießner, M.; Keinert, B.; Fisher, M.; Stamminger, M.; Loop, C.; Schäfer, H.; Chen, Min and Zhang, Hao (Richard)
    Graphics hardware has progressively been optimized to render more triangles with increasingly flexible shading. For highly detailed geometry, interactive applications restricted themselves to performing transforms on fixed geometry, since they could not incur the cost required to generate and transfer smooth or displaced geometry to the GPU at render time. As a result of recent advances in graphics hardware, in particular the GPU tessellation unit, complex geometry can now be generated on the fly within the GPU's rendering pipeline. This has enabled the generation and displacement of smooth parametric surfaces in real‐time applications. However, many well‐established approaches in offline rendering are not directly transferable due to the limited tessellation patterns or the parallel execution model of the tessellation stage. In this survey, we provide an overview of recent work and challenges in this topic by summarizing, discussing, and comparing methods for the rendering of smooth and highly detailed surfaces in real time.
  • Item
    Fast ANN for High‐Quality Collaborative Filtering
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Tsai, Yun‐Ta; Steinberger, Markus; Pająk, Dawid; Pulli, Kari; Chen, Min and Zhang, Hao (Richard)
    Collaborative filtering collects similar patches, jointly filters them and scatters the output back to input patches; each pixel gets a contribution from each patch that overlaps with it, allowing signal reconstruction from highly corrupted data. Exploiting self‐similarity, however, requires finding matching image patches, which is an expensive operation. We propose a GPU‐friendly approximate‐nearest‐neighbour (ANN) algorithm that produces high‐quality results for any type of collaborative filter. We evaluate our ANN search against state‐of‐the‐art ANN algorithms in several application domains. Our method is orders of magnitude faster, yet provides similar or higher quality results than the previous work.
    (Teaser figure) Collaborative filtering is a powerful, yet computationally demanding denoising approach. (a) Relying on self‐similarity in the input data, collaborative filtering requires the search for patches which are similar to a reference patch (red). Filtering the patches, either by averaging the pixels or modifying the coefficients after a wavelet or other transformation, removes unwanted noise, and each output pixel is collaboratively filtered using all the denoised image patches that overlap the pixel. Our method accelerates the process of searching for similar patches and facilitates high‐quality collaborative filtering even on mobile devices. Application examples for collaborative filtering include (left: our output; right: noisy input) (b) denoising an image burst, (c) filtering the samples for global illumination and (d) geometry reconstruction.
  • Item
    A Hierarchical Approach for Regular Centroidal Voronoi Tessellations
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Wang, L.; Hétroy‐Wheeler, F.; Boyer, E.; Chen, Min and Zhang, Hao (Richard)
    In this paper, we consider Centroidal Voronoi Tessellations (CVTs) and study their regularity. CVTs are geometric structures that enable regular tessellations of geometric objects and are widely used in shape modelling and analysis. While several efficient iterative schemes, with defined local convergence properties, have been proposed to compute CVTs, little attention has been paid to the evaluation of the resulting cell decompositions. In this paper, we propose a regularity criterion that allows us to evaluate and compare CVTs independently of their sizes and of their cell numbers. This criterion allows us to compare CVTs on a common basis. It builds on earlier theoretical work showing that second moments of cells converge to a lower bound when optimizing CVTs. In addition to proposing a regularity criterion, this paper also considers computational strategies to determine regular CVTs. We introduce a hierarchical framework that propagates regularity over decomposition levels and hence provides CVTs with provably better regularities than existing methods. We illustrate these principles with a wide range of experiments on synthetic and real models.
  • Item
    Variational Image Fusion with Optimal Local Contrast
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Hafner, David; Weickert, Joachim; Chen, Min and Zhang, Hao (Richard)
    In this paper, we present a general variational method for image fusion. In particular, we combine different images of the same subject to a single composite that offers optimal exposedness, saturation and local contrast. Previous research approaches this task by first pre‐computing application‐specific weights based on the input, and then combining these weights with the images to the final composite later on. In contrast, we design our model assumptions directly on the fusion result. To this end, we formulate the output image as a convex combination of the input and incorporate concepts from perceptually inspired contrast enhancement such as a local and non‐linear response. This output‐driven approach is the key to the versatility of our general image fusion model. In this regard, we demonstrate the performance of our fusion scheme with several applications such as exposure fusion, multispectral imaging and decolourization. For all application domains, we conduct thorough validations that illustrate the improvements compared to state‐of‐the‐art approaches that are tailored to the individual tasks.
  • Item
    Anisotropic Strain Limiting for Quadrilateral and Triangular Cloth Meshes
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Ma, Guanghui; Ye, Juntao; Li, Jituo; Zhang, Xiaopeng; Chen, Min and Zhang, Hao (Richard)
    Cloth simulation systems often suffer from excessive extension on the polygonal mesh, so an additional strain‐limiting process is typically used as a remedy in the simulation pipeline. A cloth model can be discretized as either a quadrilateral mesh or a triangular mesh, and their strains are measured differently. The edge‐based strain‐limiting method for a quadrilateral mesh creates anisotropic behaviour by nature, as discretization usually aligns the edges along the warp and weft directions. We improve this anisotropic technique by replacing the traditionally used equality constraints with inequality ones in the mathematical optimization, and achieve faster convergence. For a triangular mesh, the state‐of‐the‐art technique measures and constrains the strains along the two principal (and constantly changing) directions in a triangle, resulting in an isotropic behaviour which prohibits shearing. Based on the framework of inequality‐constrained optimization, we propose a warp and weft strain‐limiting formulation. This anisotropic model is more appropriate for textile materials that do not exhibit isotropic strain behaviour.
  • Item
    Mesh Sequence Morphing
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Chen, Xue; Feng, Jieqing; Bechmann, Dominique; Chen, Min and Zhang, Hao (Richard)
    Morphing is an important technique for the generation of special effects in computer animation. However, an analogous technique has not yet been applied to the increasingly prevalent animation representation, i.e. 3D mesh sequences. In this paper, a technique for morphing between two mesh sequences is proposed to simultaneously blend motions and interpolate shapes. Based on all possible combinations of the motions and geometries, a universal framework is proposed to recreate various plausible mesh sequences. To enable a universal framework, we design a skeleton‐driven cage‐based deformation transfer scheme which can account for motion blending and geometry interpolation. To establish one‐to‐one correspondence for interpolating between two mesh sequences, a hybrid cross‐parameterization scheme that fully utilizes the skeleton‐driven cage control structure and adapts user‐specified joint‐like markers is introduced. The experimental results demonstrate that the framework not only accomplishes mesh sequence morphing, but is also suitable for a wide range of applications such as deformation transfer, motion blending or transition and dynamic shape interpolation.
  • Item
    Practical Low‐Cost Recovery of Spectral Power Distributions
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Alvarez‐Cortes, Sara; Kunkel, Timo; Masia, Belen; Chen, Min and Zhang, Hao (Richard)
    Measuring the spectral power distribution of a light source, that is, the emission as a function of wavelength, typically requires the use of spectrophotometers or multi‐spectral cameras. Here, we propose a low‐cost system that enables the recovery of the visible light spectral signature of different types of light sources without requiring highly complex or specialized equipment and using just off‐the‐shelf, widely available components. To do this, a standard Digital Single‐Lens Reflex (DSLR) camera and a diffraction filter are used, sacrificing the spatial dimension for spectral resolution. We present here the image formation model and the calibration process necessary to recover the spectrum, including spectral calibration and amplitude recovery. We also assess the robustness of our method and perform a detailed analysis exploring the parameters influencing its accuracy. Further, we show applications of the system in image processing and rendering.
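The spectral-calibration step rests on standard diffraction geometry. Below is a toy sketch of the grating equation m·λ = d·sin θ, mapping a pixel's offset from the zero-order image to a wavelength; the assumed thin transmission grating, pinhole projection, and all parameter names are simplifications for illustration, not the paper's calibration model.

```python
import math

def pixel_to_wavelength(pixel_offset, pixel_pitch_mm, focal_mm,
                        grating_lines_per_mm, order=1):
    """Map a pixel's offset from the zero-order image to a wavelength (nm)
    via the grating equation m * wavelength = d * sin(theta), assuming a
    thin transmission grating in front of an ideal pinhole lens."""
    d_nm = 1e6 / grating_lines_per_mm          # groove spacing in nanometres
    theta = math.atan((pixel_offset * pixel_pitch_mm) / focal_mm)
    return d_nm * math.sin(theta) / order
```

In practice the paper calibrates this mapping from known emission lines rather than trusting nominal geometry; the sketch only shows why pixel position encodes wavelength once spatial resolution is sacrificed.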
  • Item
    Mobile Surface Reflectometry
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Riviere, J.; Peers, P.; Ghosh, A.; Chen, Min and Zhang, Hao (Richard)
    We present two novel mobile reflectometry approaches for acquiring detailed spatially varying isotropic surface reflectance and mesostructure of a planar material sample using commodity mobile devices. The first approach relies on the integrated camera and flash pair present on typical mobile devices to support free‐form handheld acquisition of spatially varying rough specular material samples. The second approach, suited for highly specular samples, uses the LCD panel to illuminate the sample with polarized second‐order gradient illumination. To address the limited overlap of the front facing camera's view and the LCD illumination (and thus limited sample size), we propose a novel appearance transfer method that combines controlled reflectance measurement of a small exemplar section with uncontrolled reflectance measurements of the full sample under natural lighting. Finally, we introduce a novel surface detail enhancement method that adds fine scale surface mesostructure from close‐up observations under uncontrolled natural lighting. We demonstrate the accuracy and versatility of the proposed mobile reflectometry methods on a wide variety of spatially varying materials.
  • Item
    Environmental Objects for Authoring Procedural Scenes
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Grosbellet, Francois; Peytavie, Adrien; Guérin, Éric; Galin, Éric; Mérillou, Stéphane; Benes, Bedrich; Chen, Min and Zhang, Hao (Richard)
    We propose a novel approach for authoring large scenes with automatic enhancement of objects to create geometric decoration details such as snow cover, icicles, fallen leaves, grass tufts or even trash. We introduce environmental objects that extend an input object geometry with a set of procedural effects that defines how the object reacts to the environment, and with a set of scalar fields that defines the influence of the object over the environment. The user controls the scene by modifying environmental variables, such as temperature or humidity fields. The scene definition is hierarchical: objects can be grouped and their behaviours can be set at each level of the hierarchy. Our per object definition allows us to optimize and accelerate the effects computation, which also enables us to generate large scenes with many geometric details at a very high level of detail. In our implementation, a complex urban scene of 10 000 m², represented with details of less than 1 cm, can be locally modified and entirely regenerated in a few seconds.
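As a toy illustration of how scalar environment fields could gate a procedural effect, the snippet below maps local temperature and humidity to a snow-cover fraction; the thresholds, the linear response, and the function name are invented for illustration and are not taken from the paper.

```python
def snow_cover_amount(temperature_c, humidity, t_freeze=0.0, t_full=-10.0):
    """Toy response function: how much procedural snow an environmental
    object receives, as a fraction in [0, 1], given the local temperature
    and humidity fields sampled at the object's position."""
    if temperature_c >= t_freeze:
        return 0.0  # above freezing: the snow effect is disabled
    # linear ramp from no snow at t_freeze to full snow at t_full
    frac = min(1.0, (t_freeze - temperature_c) / (t_freeze - t_full))
    return frac * humidity
```

Editing the temperature field then re-evaluates only the affected objects' responses, which is the kind of per-object locality the paper exploits to regenerate scenes in seconds.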
  • Item
    Full 3D Plant Reconstruction via Intrusive Acquisition
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Yin, Kangxue; Huang, Hui; Long, Pinxin; Gaissinski, Alexei; Gong, Minglun; Sharf, Andrei; Chen, Min and Zhang, Hao (Richard)
    Digitally capturing vegetation using off‐the‐shelf scanners is a challenging problem. Plants typically exhibit large self‐occlusions and thin structures which cannot be properly scanned. Furthermore, plants are essentially dynamic, deforming over time, which yields additional difficulties in the scanning process. In this paper, we present a novel technique for the acquisition and modelling of plants and foliage. At the core of our method is an intrusive acquisition approach, which disassembles the plant into disjoint parts that can be accurately scanned and reconstructed offline. We use the reconstructed part meshes as 3D proxies for the reconstruction of the complete plant and devise a global‐to‐local non‐rigid registration technique that preserves specific plant characteristics. Our method is tested on plants of various styles, appearances and characteristics. Results show successful reconstructions with high accuracy with respect to the acquired data.
  • Item
    Autocorrelation Descriptor for Efficient Co‐Alignment of 3D Shape Collections
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Averkiou, Melinos; Kim, Vladimir G.; Mitra, Niloy J.; Chen, Min and Zhang, Hao (Richard)
    Co‐aligning a collection of shapes to a consistent pose is a common problem in shape analysis with applications in shape matching, retrieval and visualization. We observe that resolving among some orientations is easier than others; for example, a common mistake for bicycles is to align front‐to‐back, while even the simplest algorithm would not erroneously pick an orthogonal alignment. The key idea of our work is to analyse rotational autocorrelations of shapes to facilitate shape co‐alignment. In particular, we use such an autocorrelation measure of individual shapes to decide which shape pairs might have well‐matching orientations; and, if so, which configurations are likely to produce better alignments. This significantly prunes the number of alignments to be examined, and leads to an efficient, scalable algorithm that performs comparably to state‐of‐the‐art techniques on benchmark data sets, but requires significantly fewer computations, resulting in 2–16× speed improvement in our tests.
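One cheap way to realize a rotational autocorrelation for a 2D point-sampled shape (a simplified stand-in for the paper's descriptor, which operates on 3D shapes) is to circularly correlate an angular-occupancy histogram with itself; peaks away from zero indicate rotations under which the shape nearly maps onto itself, and hence orientations that are hard to disambiguate.

```python
import numpy as np

def rotational_autocorrelation(points, n_bins=36):
    """Angular-occupancy autocorrelation of a 2D point set: histogram the
    points' polar angles about the centroid, then circularly correlate the
    histogram with itself. A strong peak at shift k signals near-symmetry
    under a rotation by k * (360 / n_bins) degrees."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)                       # centre on the centroid
    angles = np.arctan2(pts[:, 1], pts[:, 0])
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    h = hist.astype(float)
    # circular autocorrelation via explicit shifts (n_bins is small)
    return np.array([np.dot(h, np.roll(h, k)) for k in range(n_bins)])
```

A shape with strong secondary peaks is flagged as ambiguous, so more candidate alignments must be checked for it; shapes with a single dominant peak allow aggressive pruning.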
  • Item
    Continuity and Interpolation Techniques for Computer Graphics
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Gonzalez, F.; Patow, G.; Chen, Min and Zhang, Hao (Richard)
    Continuity and interpolation have been crucial topics for computer graphics since its very beginnings. Every time we want to interpolate values across some area, we need to take a set of samples over that region. However, interpolating samples faithfully, so that the results closely match the underlying functions, can be a tricky task: the functions being sampled may not be smooth and, in the worst case, interpolation may even be impossible when they are not continuous. In those situations, providing the required continuity is not easy, and much work has been done to solve this problem. In this paper, we focus on the state of the art in continuity and interpolation in three stages of the real‐time rendering pipeline. We study these problems and their current solutions in texture space (2D), object space (3D) and screen space. With this review of the literature in these areas, we hope to bring new light and foster research in these fundamental, yet not completely solved problems in computer graphics.
  • Item
    Lauren
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Chen, Min and Zhang, Hao (Richard)
  • Item
    State of the Art in Artistic Editing of Appearance, Lighting and Material
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Schmidt, Thorsten‐Walther; Pellacini, Fabio; Nowrouzezahrai, Derek; Jarosz, Wojciech; Dachsbacher, Carsten; Chen, Min and Zhang, Hao (Richard)
    Mimicking the appearance of the real world is a longstanding goal of computer graphics, with several important applications in the feature film, architecture and medical industries. Images with well‐designed shading are an important tool for conveying information about the world, be it the shape and function of a computer‐aided design (CAD) model, or the mood of a movie sequence. However, authoring this content is often a tedious task, even if undertaken by groups of highly trained and experienced artists. Unsurprisingly, numerous methods to facilitate and accelerate this appearance editing task have been proposed, enabling the editing of scene objects' appearances, lighting and materials, as well as entailing the introduction of new interaction paradigms and specialized preview rendering techniques. In this review, we provide a comprehensive survey of artistic appearance, lighting and material editing approaches. We organize this complex and active research area in a structure tailored to academic researchers, graduate students and industry professionals alike. In addition to editing approaches, we discuss how user interaction paradigms and rendering back ends combine to form usable systems for appearance editing. We conclude with a discussion of open problems and challenges to motivate and guide future research.
  • Item
    Planar Shape Detection and Regularization in Tandem
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Oesau, Sven; Lafarge, Florent; Alliez, Pierre; Chen, Min and Zhang, Hao (Richard)
    We present a method for planar shape detection and regularization from raw point sets. The geometric modelling and processing of man‐made environments from measurement data often relies upon robust detection of planar primitive shapes. In addition, the detection and reinforcement of regularities between planar parts is a means to increase resilience to missing or defect‐laden data as well as to reduce the complexity of models and algorithms down the modelling pipeline. The main novelty behind our method is to perform detection and regularization in tandem. We first sample a sparse set of seeds uniformly on the input point set, and then perform in parallel shape detection through region growing, interleaved with regularization through detection and reinforcement of regular relationships (coplanar, parallel and orthogonal). In addition to addressing the end goal of regularization, such reinforcement also improves data fitting and provides guidance for clustering small parts into larger planar parts. We evaluate our approach against a wide range of inputs and under four criteria: geometric fidelity, coverage, regularity and running times. Our approach compares well with available implementations such as the efficient random sample consensus–based approach proposed by Schnabel and co‐authors in 2007.
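The region-growing stage can be sketched as follows: fit a plane to a seed neighbourhood, then breadth-first add radius-neighbours that stay within a point-to-plane tolerance, refitting as the region grows. This is a brute-force illustrative sketch (no spatial index, no parallelism, no regularity reinforcement), not the paper's implementation.

```python
import numpy as np
from collections import deque

def grow_planar_region(points, seed, radius=0.2, dist_tol=0.02):
    """Grow a planar region from one seed by breadth-first region growing,
    accepting radius-neighbours that lie within dist_tol of the plane
    fitted to the current region. Returns the sorted member indices."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)

    def neighbours(i):
        d = np.linalg.norm(pts - pts[i], axis=1)
        return [j for j in range(n) if 0 < d[j] <= radius]

    def fit_plane(idx):
        sub = pts[idx]
        c = sub.mean(axis=0)
        # plane normal = singular vector of the smallest singular value
        _, _, vt = np.linalg.svd(sub - c)
        return c, vt[-1]

    region = set([seed] + neighbours(seed))      # initial neighbourhood
    centre, normal = fit_plane(list(region))
    queue = deque(region)
    while queue:
        i = queue.popleft()
        for j in neighbours(i):
            if j not in region and abs(np.dot(pts[j] - centre, normal)) <= dist_tol:
                region.add(j)
                queue.append(j)
        centre, normal = fit_plane(list(region)) # refit as the region grows
    return sorted(region)
```

Interleaving a regularization pass (snapping near-parallel or near-orthogonal plane normals) between growth steps is what the paper adds on top of this basic loop.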
  • Item
    The State‐of‐the‐Art of Set Visualization
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Alsallakh, Bilal; Micallef, Luana; Aigner, Wolfgang; Hauser, Helwig; Miksch, Silvia; Rodgers, Peter; Chen, Min and Zhang, Hao (Richard)
    Sets comprise a generic data model that has been used in a variety of data analysis problems. Such problems involve analysing and visualizing set relations between multiple sets defined over the same collection of elements. However, visualizing sets is a non‐trivial problem due to the large number of possible relations between them. We provide a systematic overview of state‐of‐the‐art techniques for visualizing different kinds of set relations. We classify these techniques into six main categories according to the visual representations they use and the tasks they support. We compare the categories to provide guidance for choosing an appropriate technique for a given problem. Finally, we identify challenges in this area that need further research and propose possible directions to address these challenges. Further resources on set visualization are available at .
  • Item
    Projective Blue‐Noise Sampling
    (Copyright © 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Reinert, Bernhard; Ritschel, Tobias; Seidel, Hans‐Peter; Georgiev, Iliyan; Chen, Min and Zhang, Hao (Richard)
    We propose projective blue‐noise patterns that retain their blue‐noise characteristics when undergoing one or multiple projections onto lower dimensional subspaces. These patterns are produced by extending existing methods, such as dart throwing and Lloyd relaxation, and have a range of applications. For numerical integration, our patterns often outperform state‐of‐the‐art stochastic and low‐discrepancy patterns, which have been specifically designed only for this purpose. For image reconstruction, our method outperforms traditional blue‐noise sampling when the variation in the signal is concentrated along one dimension. Finally, we use our patterns to distribute primitives uniformly in 3D space such that their 2D projections retain a blue‐noise distribution.
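The dart-throwing extension can be sketched directly: a candidate is accepted only if it keeps a minimum distance to all accepted samples in the full 2D domain and in each 1D axis projection. The radii and the unit-square domain below are illustrative choices, and the paper additionally extends Lloyd relaxation, which this sketch omits.

```python
import random, math

def projective_dart_throwing(n, r2d, r1d, max_tries=20000, seed=1):
    """Dart throwing in the unit square with an extra projective test:
    a candidate is accepted only if it is at least r2d away from every
    accepted sample in 2D and at least r1d away in each 1D projection."""
    rng = random.Random(seed)
    pts = []
    tries = 0
    while len(pts) < n and tries < max_tries:
        tries += 1
        x, y = rng.random(), rng.random()
        ok = all(
            math.hypot(x - px, y - py) >= r2d      # full-dimensional test
            and abs(x - px) >= r1d                 # x-axis projection test
            and abs(y - py) >= r1d                 # y-axis projection test
            for px, py in pts
        )
        if ok:
            pts.append((x, y))
    return pts
```

The 1D radius must be much smaller than the 2D radius (n points can be at most ~1/n apart in a unit-length projection), which mirrors the trade-off the paper navigates between full-space and projected blue-noise quality.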