34-Issue 1


Editorial

Editorial

Deussen, Oliver
Zhang, Hao (Richard)
Articles

Probably Approximately Symmetric: Fast Rigid Symmetry Detection With Global Guarantees

Korman, Simon
Litman, Roee
Avidan, Shai
Bronstein, Alex
Articles

Local Painting and Deformation of Meshes on the GPU

Schäfer, H.
Keinert, B.
Nießner, M.
Stamminger, M.
Articles

Boundary Handling at Cloth–Fluid Contact

Huber, M.
Eberhardt, B.
Weiskopf, D.
Articles

Example‐Based Materials in Laplace–Beltrami Shape Space

Zhu, Fei
Li, Sheng
Wang, Guoping
Articles

Memory Considerations for Low Energy Ray Tracing

Kopta, D.
Shkurko, K.
Spjut, J.
Brunvand, E.
Davis, A.
Articles

Collective Crowd Formation Transform with Mutual Information–Based Runtime Feedback

Xu, Mingliang
Wu, Yunpeng
Ye, Yangdong
Farkas, Illes
Jiang, Hao
Deng, Zhigang
Articles

Semi‐Regular Triangle Remeshing: A Comprehensive Study

Payan, F.
Roudet, C.
Sauvage, B.
Articles

Hybrid Data Visualization Based on Depth Complexity Histogram Analysis

Lindholm, S.
Falk, M.
Sundén, E.
Bock, A.
Ynnerman, A.
Ropinski, T.
Articles

Real‐Time Isosurface Extraction With View‐Dependent Level of Detail and Applications

Scholz, Manuel
Bender, Jan
Dachsbacher, Carsten
Articles

A Visualization Tool Used to Develop New Photon Mapping Techniques

Spencer, B.
Jones, M. W.
Lim, I. S.
Articles

Purkinje Images: Conveying Different Content for Different Luminance Adaptations in a Single Image

Arpa, Sami
Ritschel, Tobias
Myszkowski, Karol
Çapın, Tolga
Seidel, Hans‐Peter
Articles

Advances in Interaction with 3D Environments

Jankowski, J.
Hachet, M.
Articles

Stable and Fast Fluid–Solid Coupling for Incompressible SPH

Shao, X.
Zhou, Z.
Magnenat‐Thalmann, N.
Wu, W.
Articles

Data‐Driven Automatic Cropping Using Semantic Composition Search

Samii, A.
Měch, R.
Lin, Z.
Articles

Example‐Based Retargeting of Human Motion to Arbitrary Mesh Models

Celikcan, Ufuk
Yaz, Ilker O.
Capin, Tolga
Articles

Filtering Multi‐Layer Shadow Maps for Accurate Soft Shadows

Selgrad, K.
Dachsbacher, C.
Meyer, Q.
Stamminger, M.
Articles

A Vectorial Framework for Ray Traced Diffusion Curves

Prévost, Romain
Jarosz, Wojciech
Sorkine‐Hornung, Olga
Articles

Seamless, Static Multi‐Texturing of 3D Meshes

Pagés, R.
Berjón, D.
Morán, F.
García, N.
Articles

Partial Shape Matching Using Transformation Parameter Similarity

Guerrero, Paul
Auzinger, Thomas
Wimmer, Michael
Jeschke, Stefan
Articles

Visualizing the Evolution of Communities in Dynamic Graphs

Vehlow, C.
Beck, F.
Auwärter, P.
Weiskopf, D.
Issue Information

Issue Information

Articles

Sample‐Based Manifold Filtering for Interactive Global Illumination and Depth of Field

Bauszat, P.
Eisemann, M.
John, S.
Magnor, M.


BibTeX (34-Issue 1)
                
@article{10.1111:cgf.12534,
  journal = {Computer Graphics Forum},
  title = {{Editorial}},
  author = {Deussen, Oliver and Zhang, Hao (Richard)},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12534}
}

@article{10.1111:cgf.12454,
  journal = {Computer Graphics Forum},
  title = {{Probably Approximately Symmetric: Fast Rigid Symmetry Detection With Global Guarantees}},
  author = {Korman, Simon and Litman, Roee and Avidan, Shai and Bronstein, Alex},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12454}
}

@article{10.1111:cgf.12456,
  journal = {Computer Graphics Forum},
  title = {{Local Painting and Deformation of Meshes on the GPU}},
  author = {Schäfer, H. and Keinert, B. and Nießner, M. and Stamminger, M.},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12456}
}

@article{10.1111:cgf.12455,
  journal = {Computer Graphics Forum},
  title = {{Boundary Handling at Cloth–Fluid Contact}},
  author = {Huber, M. and Eberhardt, B. and Weiskopf, D.},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12455}
}

@article{10.1111:cgf.12457,
  journal = {Computer Graphics Forum},
  title = {{Example‐Based Materials in Laplace–Beltrami Shape Space}},
  author = {Zhu, Fei and Li, Sheng and Wang, Guoping},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12457}
}

@article{10.1111:cgf.12458,
  journal = {Computer Graphics Forum},
  title = {{Memory Considerations for Low Energy Ray Tracing}},
  author = {Kopta, D. and Shkurko, K. and Spjut, J. and Brunvand, E. and Davis, A.},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12458}
}

@article{10.1111:cgf.12459,
  journal = {Computer Graphics Forum},
  title = {{Collective Crowd Formation Transform with Mutual Information–Based Runtime Feedback}},
  author = {Xu, Mingliang and Wu, Yunpeng and Ye, Yangdong and Farkas, Illes and Jiang, Hao and Deng, Zhigang},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12459}
}

@article{10.1111:cgf.12461,
  journal = {Computer Graphics Forum},
  title = {{Semi‐Regular Triangle Remeshing: A Comprehensive Study}},
  author = {Payan, F. and Roudet, C. and Sauvage, B.},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12461}
}

@article{10.1111:cgf.12460,
  journal = {Computer Graphics Forum},
  title = {{Hybrid Data Visualization Based on Depth Complexity Histogram Analysis}},
  author = {Lindholm, S. and Falk, M. and Sundén, E. and Bock, A. and Ynnerman, A. and Ropinski, T.},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12460}
}

@article{10.1111:cgf.12462,
  journal = {Computer Graphics Forum},
  title = {{Real‐Time Isosurface Extraction With View‐Dependent Level of Detail and Applications}},
  author = {Scholz, Manuel and Bender, Jan and Dachsbacher, Carsten},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12462}
}

@article{10.1111:cgf.12464,
  journal = {Computer Graphics Forum},
  title = {{A Visualization Tool Used to Develop New Photon Mapping Techniques}},
  author = {Spencer, B. and Jones, M. W. and Lim, I. S.},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12464}
}

@article{10.1111:cgf.12463,
  journal = {Computer Graphics Forum},
  title = {{Purkinje Images: Conveying Different Content for Different Luminance Adaptations in a Single Image}},
  author = {Arpa, Sami and Ritschel, Tobias and Myszkowski, Karol and Çapın, Tolga and Seidel, Hans‐Peter},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12463}
}

@article{10.1111:cgf.12466,
  journal = {Computer Graphics Forum},
  title = {{Advances in Interaction with 3D Environments}},
  author = {Jankowski, J. and Hachet, M.},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12466}
}

@article{10.1111:cgf.12467,
  journal = {Computer Graphics Forum},
  title = {{Stable and Fast Fluid–Solid Coupling for Incompressible SPH}},
  author = {Shao, X. and Zhou, Z. and Magnenat‐Thalmann, N. and Wu, W.},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12467}
}

@article{10.1111:cgf.12465,
  journal = {Computer Graphics Forum},
  title = {{Data‐Driven Automatic Cropping Using Semantic Composition Search}},
  author = {Samii, A. and Měch, R. and Lin, Z.},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12465}
}

@article{10.1111:cgf.12507,
  journal = {Computer Graphics Forum},
  title = {{Example‐Based Retargeting of Human Motion to Arbitrary Mesh Models}},
  author = {Celikcan, Ufuk and Yaz, Ilker O. and Capin, Tolga},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12507}
}

@article{10.1111:cgf.12506,
  journal = {Computer Graphics Forum},
  title = {{Filtering Multi‐Layer Shadow Maps for Accurate Soft Shadows}},
  author = {Selgrad, K. and Dachsbacher, C. and Meyer, Q. and Stamminger, M.},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12506}
}

@article{10.1111:cgf.12510,
  journal = {Computer Graphics Forum},
  title = {{A Vectorial Framework for Ray Traced Diffusion Curves}},
  author = {Prévost, Romain and Jarosz, Wojciech and Sorkine‐Hornung, Olga},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12510}
}

@article{10.1111:cgf.12508,
  journal = {Computer Graphics Forum},
  title = {{Seamless, Static Multi‐Texturing of 3D Meshes}},
  author = {Pagés, R. and Berjón, D. and Morán, F. and García, N.},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12508}
}

@article{10.1111:cgf.12509,
  journal = {Computer Graphics Forum},
  title = {{Partial Shape Matching Using Transformation Parameter Similarity}},
  author = {Guerrero, Paul and Auzinger, Thomas and Wimmer, Michael and Jeschke, Stefan},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12509}
}

@article{10.1111:cgf.12512,
  journal = {Computer Graphics Forum},
  title = {{Visualizing the Evolution of Communities in Dynamic Graphs}},
  author = {Vehlow, C. and Beck, F. and Auwärter, P. and Weiskopf, D.},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12512}
}

@article{10.1111:cgf.12535,
  journal = {Computer Graphics Forum},
  title = {{Issue Information}},
  author = {},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12535}
}

@article{10.1111:cgf.12511,
  journal = {Computer Graphics Forum},
  title = {{Sample‐Based Manifold Filtering for Interactive Global Illumination and Depth of Field}},
  author = {Bauszat, P. and Eisemann, M. and John, S. and Magnor, M.},
  year = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI = {10.1111/cgf.12511}
}


Recent Submissions

Now showing 1 - 23 of 23
  • Item
    Editorial
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Deussen, Oliver; Zhang, Hao (Richard); Deussen, Oliver and Zhang, Hao (Richard)
  • Item
    Probably Approximately Symmetric: Fast Rigid Symmetry Detection With Global Guarantees
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Korman, Simon; Litman, Roee; Avidan, Shai; Bronstein, Alex; Deussen, Oliver and Zhang, Hao (Richard)
    We present a fast algorithm for global rigid symmetry detection with approximation guarantees. The algorithm is guaranteed to find the best approximate symmetry of a given shape, to within a user‐specified threshold, with very high probability. Our method uses a carefully designed sampling of the transformation space, where each transformation is efficiently evaluated using a sublinear algorithm. We prove that the density of the sampling depends on the total variation of the shape, allowing us to derive formal bounds on the algorithm's complexity and approximation quality. We further investigate different volumetric shape representations (in the form of truncated distance transforms), and in such a way control the total variation of the shape and hence the sampling density and the runtime of the algorithm. A comprehensive set of experiments assesses the proposed method, including an evaluation on the eight categories of the COSEG data set. This is the first large‐scale evaluation of any symmetry detection technique that we are aware of.
  • Item
    Local Painting and Deformation of Meshes on the GPU
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Schäfer, H.; Keinert, B.; Nießner, M.; Stamminger, M.; Deussen, Oliver and Zhang, Hao (Richard)
    We present a novel method to adaptively apply modifications to scene data stored in GPU memory. Such modifications may include interactive painting and sculpting operations in an authoring tool, or deformations resulting from collisions between scene objects detected by a physics engine. We only allocate GPU memory for the faces affected by these modifications to store fine‐scale colour or displacement values. This requires dynamic GPU memory management in order to assign and adaptively apply edits to individual faces at runtime. We present such a memory management technique based on a scan‐operation that is efficiently parallelizable. Since our approach runs entirely on the GPU, we avoid costly CPU–GPU memory transfer and eliminate typical bandwidth limitations. This minimizes runtime overhead to under a millisecond and makes our method ideally suited to many real‐time applications such as video games and interactive authoring tools. In addition, our algorithm significantly reduces storage requirements and allows for much higher resolution content compared to traditional global texturing approaches. Our technique can be applied to various mesh representations, including Catmull–Clark subdivision surfaces, as well as standard triangle and quad meshes. In this paper, we demonstrate several scenarios for these mesh types where our algorithm enables adaptive mesh refinement, local surface deformations and interactive on‐mesh painting and sculpting.
  • Item
    Boundary Handling at Cloth–Fluid Contact
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Huber, M.; Eberhardt, B.; Weiskopf, D.; Deussen, Oliver and Zhang, Hao (Richard)
    We present a robust and efficient method for the two‐way coupling between particle‐based fluid simulations and infinitesimally thin solids represented by triangular meshes. Our approach is based on a hybrid method that combines a repulsion force approach with a continuous intersection handling to guarantee that no penetration occurs. Moreover, boundary conditions for the tangential component of the fluid's velocity are implemented to model the different slip conditions. The proposed method is particularly useful for dynamic surfaces, like cloth and thin shells. In addition, we demonstrate how standard fluid surface reconstruction algorithms can be modified to prevent the calculated surface from intersecting close objects. For both the two‐way coupling and the surface reconstruction, we take into account that the fluid can wet the cloth. We have implemented our approach for the bidirectional interaction between liquid simulations based on Smoothed Particle Hydrodynamics (SPH) and standard mesh‐based cloth simulation systems.
  • Item
    Example‐Based Materials in Laplace–Beltrami Shape Space
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Zhu, Fei; Li, Sheng; Wang, Guoping; Deussen, Oliver and Zhang, Hao (Richard)
    We present a novel method for flexible and efficient simulation of example‐based elastic deformation. The geometry of all input shapes is projected into a common shape space spanned by the Laplace–Beltrami eigenfunctions. The eigenfunctions are coupled to be compatible across shapes. Shape representation in the common shape space is scale‐invariant and topology‐independent. This circumvents the limitation of previous example‐based approaches that all examples must have identical topology with the simulated object. Additionally, our method allows examples that are arbitrary in size, similar but not identical in shape with the object. We interpolate the examples via a weighted‐energy minimization to find the target configuration that guides the object to desired deformation. Large deformation between examples is handled by a physically plausible energy metric. This optimization is efficient as the eigenfunctions are pre‐computed and the problem dimension is small. We demonstrate the benefits of our approach with animation results and performance analysis.
  • Item
    Memory Considerations for Low Energy Ray Tracing
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Kopta, D.; Shkurko, K.; Spjut, J.; Brunvand, E.; Davis, A.; Deussen, Oliver and Zhang, Hao (Richard)
    We propose two hardware mechanisms to decrease energy consumption on massively parallel graphics processors for ray tracing. First, we use a streaming data model and configure part of the L2 cache into a ray stream memory to enable efficient data processing through ray reordering. This increases L1 hit rates and reduces off‐chip memory energy substantially through better management of off‐chip memory access patterns. To evaluate this model, we augment our architectural simulator with a detailed memory system simulation that includes accurate control, timing and power models for memory controllers and off‐chip dynamic random‐access memory (DRAM). These details change the results significantly over previous simulations that used a simpler model of off‐chip memory, indicating that this type of memory system simulation is important for realistic simulations that involve external memory. Second, we employ reconfigurable special‐purpose pipelines that are constructed dynamically under program control. These pipelines use shared execution units that can be configured to support the common compute kernels that are the foundation of the ray tracing algorithm. This reduces the overhead incurred by on‐chip memory and register accesses. These two synergistic features yield a ray tracing architecture that reduces energy by optimizing both on‐chip and off‐chip memory activity when compared to a more traditional approach.
  • Item
    Collective Crowd Formation Transform with Mutual Information–Based Runtime Feedback
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Xu, Mingliang; Wu, Yunpeng; Ye, Yangdong; Farkas, Illes; Jiang, Hao; Deng, Zhigang; Deussen, Oliver and Zhang, Hao (Richard)
    This paper introduces a new crowd formation transform approach to achieve visually pleasing group formation transition and control. Its core idea is to transform crowd formation shapes with a least‐effort pair assignment using the Kuhn–Munkres algorithm, discover clusters of agent subgroups using affinity propagation and Delaunay triangulation algorithms, and apply a subgroup‐based social force model (SFM) to the agent subgroups to achieve alignment, cohesion and collision avoidance. Meanwhile, mutual information of the dynamic crowd is used to guide agents' movement at runtime. This approach combines both macroscopic (involving least‐effort position assignment and clustering) and microscopic (involving SFM) controls of the crowd transformation to maximally maintain subgroups' local stability and dynamic collective behaviour, while minimizing the overall effort (i.e. travelling distance) of the agents during the transformation. Through simulation experiments and comparisons, we demonstrate that this approach is efficient and effective in generating visually pleasing and smooth transformations and outperforms several existing crowd simulation approaches including reciprocal velocity avoidances, optimal reciprocal collision avoidance and OpenSteer.
  • Item
    Semi‐Regular Triangle Remeshing: A Comprehensive Study
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Payan, F.; Roudet, C.; Sauvage, B.; Deussen, Oliver and Zhang, Hao (Richard)
    Semi‐regular triangle remeshing algorithms convert irregular surface meshes into semi‐regular ones. Especially in the field of computer graphics, semi‐regularity is an interesting property because it makes meshes highly suitable for multi‐resolution analysis. In this paper, we survey the numerous remeshing algorithms that have been developed over the past two decades. We propose different classifications to give new and comprehensible insights into both existing methods and issues. We describe how considerable obstacles have already been overcome, and discuss promising perspectives.
  • Item
    Hybrid Data Visualization Based on Depth Complexity Histogram Analysis
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Lindholm, S.; Falk, M.; Sundén, E.; Bock, A.; Ynnerman, A.; Ropinski, T.; Deussen, Oliver and Zhang, Hao (Richard)
    In many cases, only the combination of geometric and volumetric data sets is able to describe a single phenomenon under observation when visualizing large and complex data. When semi‐transparent geometry is present, correct rendering results require sorting of transparent structures. Additional complexity is introduced as the contributions from volumetric data have to be partitioned according to the geometric objects in the scene. The A‐buffer, an enhanced framebuffer with additional per‐pixel information, has previously been introduced to deal with the complexity caused by transparent objects. In this paper, we present an optimized rendering algorithm for hybrid volume‐geometry data based on the A‐buffer concept. We propose two novel components for modern GPUs that tailor memory utilization to the depth complexity of individual pixels. The proposed components are compatible with modern A‐buffer implementations and yield performance gains of up to eight times compared to existing approaches through reduced allocation and reuse of fast cache memory. We demonstrate the applicability of our approach and its performance with several examples from molecular biology, space weather and medical visualization containing both volumetric data and geometric structures.
  • Item
    Real‐Time Isosurface Extraction With View‐Dependent Level of Detail and Applications
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Scholz, Manuel; Bender, Jan; Dachsbacher, Carsten; Deussen, Oliver and Zhang, Hao (Richard)
    Volumetric scalar data sets are common in many scientific, engineering and medical applications where they originate from measurements or simulations. Furthermore, they can represent geometric scene content, e.g. as distance or density fields. Often isosurfaces are extracted, either for indirect volume visualization in the former category, or to simply obtain a polygonal representation in case of the latter. However, even moderately sized volume data sets can result in complex isosurfaces which are challenging to recompute in real time, e.g. when the user modifies the isovalue or when the data itself are dynamic. In this paper, we present a GPU‐friendly algorithm for the extraction of isosurfaces, which provides adaptive level of detail rendering with view‐dependent tessellation. It is based on a longest edge bisection scheme where the resulting tetrahedral cells are subdivided into four hexahedra, which then form the domain for the subsequent isosurface extraction step. Our algorithm generates meshes with good triangle quality even for highly non‐linear scalar data. In contrast to previous methods, it does not require any stitching between regions of different levels of detail. As all computation is performed at run time and no pre‐processing is required, the algorithm naturally supports dynamic data and allows us to change isovalues at any time.
  • Item
    A Visualization Tool Used to Develop New Photon Mapping Techniques
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Spencer, B.; Jones, M. W.; Lim, I. S.; Deussen, Oliver and Zhang, Hao (Richard)
    We present a visualization tool aimed specifically at the development and optimization of photon map denoising methods. Our tool allows the rapid testing of hypotheses and algorithms through the use of parallel coordinates, domain‐specific scripting, colour mapping and point plots. Interaction is carried out by brushing, adjusting parameters and focus‐plus‐context and yields interactive visual feedback and debugging information. We demonstrate the use of the tool to explore high‐dimensional photon map data, facilitating the discovery of novel parameter spaces which can be used to dissociate complex caustic illumination. We then show how these new parametrizations may be used to improve upon pre‐existing noise removal methods in the context of the photon relaxation framework.
  • Item
    Purkinje Images: Conveying Different Content for Different Luminance Adaptations in a Single Image
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Arpa, Sami; Ritschel, Tobias; Myszkowski, Karol; Çapın, Tolga; Seidel, Hans‐Peter; Deussen, Oliver and Zhang, Hao (Richard)
    Providing multiple meanings in a single piece of art has always been intriguing to both artists and observers. We present Purkinje images, which have different interpretations depending on the luminance adaptation of the observer. Finding such images is an optimization that minimizes the sum of the distance to one reference image in photopic conditions and the distance to another reference image in scotopic conditions. To model the shift of image perception between day and night vision, we decompose the input images into a Laplacian pyramid. Distances under different observation conditions in this representation are independent between pyramid levels and pixel positions and become matrix multiplications. The optimal pixel colour can be found by inverting a small, per‐pixel linear system in real time on a GPU. Finally, two user studies analyze our results in terms of the recognition performance and fidelity with respect to the reference images.
  • Item
    Advances in Interaction with 3D Environments
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Jankowski, J.; Hachet, M.; Deussen, Oliver and Zhang, Hao (Richard)
    Various interaction techniques have been developed for interactive 3D environments. This paper presents an up‐to‐date and comprehensive review of the state of the art of non‐immersive interaction techniques for , , and , including a basic introduction to the topic, the challenges and an examination of a number of popular approaches. We also introduce 3D Interaction Testbed (3DIT) to firstly allow a ‘' understanding of 3D interaction principles, and secondly to create an open platform for defining evaluation methods, stimuli as well as representative tasks akin to those found in other disciplines of science. We hope that this survey can aid both researchers and developers of interactive 3D applications in having a clearer overview of the topic and in particular can be useful for practitioners and researchers that are new to the field of interactive 3D graphics.
  • Item
    Stable and Fast Fluid–Solid Coupling for Incompressible SPH
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Shao, X.; Zhou, Z.; Magnenat‐Thalmann, N.; Wu, W.; Deussen, Oliver and Zhang, Hao (Richard)
    The solid boundary handling has been a research focus in physically based fluid animation. In this paper, we propose a novel stable and fast particle method to couple predictive–corrective incompressible SPH (PCISPH) and geometric lattice shape matching (LSM), which animates the visually realistic interaction of fluids and deformable solids allowing larger time steps or velocity differences. By combining the boundary particles sampled from solids with a momentum‐conserving velocity‐position correction scheme, our approach can alleviate the particle deficiency issues and prevent the penetration artefacts at the fluid–solid interfaces simultaneously. We further simulate the stable deformation and melting of solid objects coupled to fluids based on a highly extended LSM model. In order to improve the time performance of each time step, we entirely implement the unified particle framework on GPUs using compute unified device architecture (CUDA). The advantages of our two‐way fluid–solid coupling method in computer animation are demonstrated via several virtual scenarios.
  • Item
    Data‐Driven Automatic Cropping Using Semantic Composition Search
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Samii, A.; Měch, R.; Lin, Z.; Deussen, Oliver and Zhang, Hao (Richard)
    We present a data‐driven method for automatically cropping photographs to be well‐composed and aesthetically pleasing. Our method matches the composition of an amateur's photograph to an expert's using point correspondences. The correspondences are based on a novel high‐level local descriptor we term the ‘Object Context’. Object Context is an extension of Shape Context: it is a descriptor encoding which objects and scene elements surround a given point. By searching a database of expertly composed images, we can find a crop window which makes an amateur's photograph closely match the composition of a database exemplar. We cull irrelevant matches in the database efficiently using a global descriptor which encodes the objects in the scene. For images with similar content in the database, we efficiently search the space of possible crops using generalized Hough voting. When comparing the result of our algorithm to expert crops, our crop windows overlap the expert crops by 83.6%. We also perform a user study which shows that our crops compare favourably to an expert human's crops.
  • Item
    Example‐Based Retargeting of Human Motion to Arbitrary Mesh Models
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Celikcan, Ufuk; Yaz, Ilker O.; Capin, Tolga; Deussen, Oliver and Zhang, Hao (Richard)
    We present a novel method for retargeting human motion to arbitrary 3D mesh models with as little user interaction as possible. Traditional motion‐retargeting systems try to preserve the original motion, while satisfying several motion constraints. Our method uses a few pose‐to‐pose examples provided by the user to extract the desired semantics behind the retargeting process while not limiting the transfer to being only literal. Thus, mesh models with different structures and/or motion semantics from humanoid skeletons become possible targets. Also considering the fact that most publicly available mesh models lack additional structure (e.g. skeleton), our method dispenses with the need for such a structure by means of a built‐in surface‐based deformation system. As deformation for animation purposes may require non‐rigid behaviour, we augment existing rigid deformation approaches to provide volume‐preserving and squash‐and‐stretch deformations. We demonstrate our approach on well‐known mesh models along with several publicly available motion‐capture sequences.
  • Item
    Filtering Multi‐Layer Shadow Maps for Accurate Soft Shadows
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Selgrad, K.; Dachsbacher, C.; Meyer, Q.; Stamminger, M.; Deussen, Oliver and Zhang, Hao (Richard)
    In this paper, we introduce a novel technique for pre‐filtering multi‐layer shadow maps. The occluders in the scene are stored as variable‐length lists of fragments for each texel. We show how this representation can be filtered by progressively merging these lists. In contrast to previous pre‐filtering techniques, our method better captures the distribution of depth values, resulting in a much higher shadow quality for overlapping occluders and occluders with different depths. The pre‐filtered maps are generated and evaluated directly on the GPU, and provide efficient queries for shadow tests with arbitrary filter sizes. Accurate soft shadows are rendered in real‐time even for complex scenes and difficult setups. Our results demonstrate that our pre‐filtered maps are general and particularly scalable.
  • Item
    A Vectorial Framework for Ray Traced Diffusion Curves
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Prévost, Romain; Jarosz, Wojciech; Sorkine‐Hornung, Olga; Deussen, Oliver and Zhang, Hao (Richard)
    Diffusion curves allow creating complex, smoothly shaded images by diffusing colours defined at curves. These methods typically require the solution of a global optimization problem (over either the pixel grid or an intermediate tessellated representation) to produce the final image, making fully parallel implementation challenging. An alternative approach, inspired by global illumination, uses 2D ray tracing to independently compute each pixel value. This formulation allows trivial parallelism, but it densely computes values even in smooth regions and sacrifices support for instancing and layering. We describe a sparse, ray traced, multi‐layer framework that incorporates many complementary benefits of these existing approaches. Our solution avoids the need for a global solve and trivially allows parallel GPU implementation. We leverage an intermediate triangular representation with cubic patches to synthesize smooth images faithful to the per‐pixel solution. The triangle mesh provides a resolution‐independent, vectorial representation and naturally maps diffusion curve images to a form natively supported by standard vector graphics and triangle rasterization pipelines. Our approach supports many features which were previously difficult to incorporate into a single system, including instancing, layering, alpha blending, texturing, local blurring, continuity control and parallel computation. We also show how global diffusion curves can be combined with local painted strokes in one coherent system.
  • Item
    Seamless, Static Multi‐Texturing of 3D Meshes
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Pagés, R.; Berjón, D.; Morán, F.; García, N.; Deussen, Oliver and Zhang, Hao (Richard)
    In the context of 3D reconstruction, we present a static multi‐texturing system yielding a seamless texture atlas calculated by combining the colour information from several photos from the same subject covering most of its surface. These pictures can be provided by shooting just one camera several times when reconstructing a static object, or a set of synchronized cameras, when dealing with a human or any other moving object. We suppress the colour seams due to image misalignments and irregular lighting conditions that multi‐texturing approaches typically suffer from, while minimizing the blurring effect introduced by colour blending techniques. Our system is robust enough to compensate for the almost inevitable inaccuracies of 3D meshes obtained with visual hull–based techniques: errors in silhouette segmentation, inherently bad handling of concavities, etc.
  • Item
    Partial Shape Matching Using Transformation Parameter Similarity
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Guerrero, Paul; Auzinger, Thomas; Wimmer, Michael; Jeschke, Stefan; Deussen, Oliver and Zhang, Hao (Richard)
    In this paper, we present a method for non‐rigid, partial shape matching in vector graphics. Given a user‐specified query region in a 2D shape, similar regions are found, even if they are non‐linearly distorted. Furthermore, a non‐linear mapping is established between the query regions and these matches, which allows the automatic transfer of editing operations such as texturing. This is achieved by a two‐step approach. First, pointwise correspondences between the query region and the whole shape are established. The transformation parameters of these correspondences are registered in an appropriate transformation space. For transformations between similar regions, these parameters form surfaces in transformation space, which are extracted in the second step of our method. The extracted regions may be related to the query region by a non‐rigid transform, enabling non‐rigid shape matching.
  • Item
    Visualizing the Evolution of Communities in Dynamic Graphs
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Vehlow, C.; Beck, F.; Auwärter, P.; Weiskopf, D.; Deussen, Oliver and Zhang, Hao (Richard)
    The community structure of graphs is an important feature that gives insight into the high‐level organization of objects within the graph. In real‐world systems, the graph topology is oftentimes not static but changes over time and hence, also the community structure changes. Previous timeline‐based approaches either visualize the dynamic graph or the dynamic community structure. In contrast, our approach combines both in a single image and therefore allows users to investigate the community structure together with the underlying dynamic graph. Our optimized ordering of vertices and selection of colours in combination with interactive highlighting techniques increases the traceability of communities along the time axis. Users can identify visual signatures, estimate the reliability of the derived community structure and investigate whether community evolution interacts with changes in the graph topology. The utility of our approach is demonstrated in two application examples.
  • Item
    Issue Information
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Deussen, Oliver and Zhang, Hao (Richard)
  • Item
    Sample‐Based Manifold Filtering for Interactive Global Illumination and Depth of Field
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Bauszat, P.; Eisemann, M.; John, S.; Magnor, M.; Deussen, Oliver and Zhang, Hao (Richard)
    We present a fast reconstruction filtering method for images generated with Monte Carlo–based rendering techniques. Our approach specializes in reducing global illumination noise in the presence of depth‐of‐field effects at very low sampling rates and interactive frame rates. We employ edge‐aware filtering in the sample space to locally improve outgoing radiance of each sample. The improved samples are then distributed in the image plane using a fast, linear manifold‐based approach supporting very large circles of confusion. We evaluate our filter by applying it to several images containing noise caused by Monte Carlo–simulated global illumination, area light sources and depth of field. We show that our filter can efficiently denoise such images at interactive frame rates on current GPUs and with as few as 4–16 samples per pixel. Our method operates only on the colour and geometric sample information output of the initial rendering process. It does not make any assumptions on the underlying rendering technique and sampling strategy and can therefore be implemented completely as a post‐process filter.We present a fast reconstruction filtering method for images generated with Monte Carlo–based rendering techniques. Our approach specializes in reducing global illumination noise in the presence of depth‐of‐field effects at very low sampling rates and interactive frame rates. We employ edge‐aware filtering in the sample space to locally improve outgoing radiance of each sample. The improved samples are then distributed in the image plane using a fast, linear manifold‐based approach supporting very large circles of confusion. We evaluate our filter by applying it to several images containing noise caused by Monte Carlo–simulated global illumination, area light sources and depth of field. We show that our filter can efficiently denoise such images at interactive frame rates on current GPUs and with as few as 4–16 spp.