35-Issue 8

Permanent URI for this collection

Issue Information

Issue Information

Articles

Reducing Lateral Visual Biases in Displays

Huberman, Inbar
Fattal, Raanan
Articles

A Procedural Approach to Modelling Virtual Climbing Plants With Tendrils

Wong, Sai‐Keung
Chen, Kai‐Chun
Articles

A Virtual Director Using Hidden Markov Models

Merabti, B.
Christie, M.
Bouatouch, K.
Articles

A Survey of Real‐Time Crowd Rendering

Beacco, A.
Pelechano, N.
Andújar, C.
Articles

Performance Comparison of Bounding Volume Hierarchies and Kd‐Trees for GPU Ray Tracing

Vinkler, Marek
Havran, Vlastimil
Bittner, Jiří
Articles

Memory‐Efficient Interactive Online Reconstruction From Depth Image Streams

Reichl, F.
Weiss, J.
Westermann, R.
Reviewers

Reviewers

Articles

Visualizing Waypoints‐Constrained Origin‐Destination Patterns for Massive Transportation Data

Zeng, W.
Fu, C.‐W.
Müller Arisona, S.
Erath, A.
Qu, H.
Articles

Recognition‐Difficulty‐Aware Hidden Images Based on Clue‐Map

Zhao, Yandan
Du, Hui
Jin, Xiaogang


BibTeX (35-Issue 8)
                
@article{10.1111:cgf.13077,
  journal = {Computer Graphics Forum},
  title = {{Issue Information}},
  author = {},
  year = {2016},
  publisher = {© 2016 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13077}
}

@article{10.1111:cgf.12739,
  journal = {Computer Graphics Forum},
  title = {{Reducing Lateral Visual Biases in Displays}},
  author = {Huberman, Inbar and Fattal, Raanan},
  year = {2016},
  publisher = {© 2016 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12739}
}

@article{10.1111:cgf.12736,
  journal = {Computer Graphics Forum},
  title = {{A Procedural Approach to Modelling Virtual Climbing Plants With Tendrils}},
  author = {Wong, Sai-Keung and Chen, Kai-Chun},
  year = {2016},
  publisher = {© 2016 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12736}
}

@article{10.1111:cgf.12775,
  journal = {Computer Graphics Forum},
  title = {{A Virtual Director Using Hidden Markov Models}},
  author = {Merabti, B. and Christie, M. and Bouatouch, K.},
  year = {2016},
  publisher = {© 2016 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12775}
}

@article{10.1111:cgf.12774,
  journal = {Computer Graphics Forum},
  title = {{A Survey of Real-Time Crowd Rendering}},
  author = {Beacco, A. and Pelechano, N. and Andújar, C.},
  year = {2016},
  publisher = {© 2016 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12774}
}

@article{10.1111:cgf.12776,
  journal = {Computer Graphics Forum},
  title = {{Performance Comparison of Bounding Volume Hierarchies and Kd-Trees for GPU Ray Tracing}},
  author = {Vinkler, Marek and Havran, Vlastimil and Bittner, Jiří},
  year = {2016},
  publisher = {© 2016 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12776}
}

@article{10.1111:cgf.12779,
  journal = {Computer Graphics Forum},
  title = {{Memory-Efficient Interactive Online Reconstruction From Depth Image Streams}},
  author = {Reichl, F. and Weiss, J. and Westermann, R.},
  year = {2016},
  publisher = {© 2016 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12779}
}

@article{10.1111:cgf.13078,
  journal = {Computer Graphics Forum},
  title = {{Reviewers}},
  author = {},
  year = {2016},
  publisher = {© 2016 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.13078}
}

@article{10.1111:cgf.12778,
  journal = {Computer Graphics Forum},
  title = {{Visualizing Waypoints-Constrained Origin-Destination Patterns for Massive Transportation Data}},
  author = {Zeng, W. and Fu, C.-W. and Müller Arisona, S. and Erath, A. and Qu, H.},
  year = {2016},
  publisher = {© 2016 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12778}
}

@article{10.1111:cgf.12777,
  journal = {Computer Graphics Forum},
  title = {{Recognition-Difficulty-Aware Hidden Images Based on Clue-Map}},
  author = {Zhao, Yandan and Du, Hui and Jin, Xiaogang},
  year = {2016},
  publisher = {© 2016 The Eurographics Association and John Wiley & Sons Ltd.},
  ISSN = {1467-8659},
  DOI = {10.1111/cgf.12777}
}


Recent Submissions

Now showing 1 - 10 of 10
  • Item
    Issue Information
    (© 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Chen, Min and Zhang, Hao (Richard)
  • Item
    Reducing Lateral Visual Biases in Displays
    (© 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Huberman, Inbar; Fattal, Raanan; Chen, Min and Zhang, Hao (Richard)
    The human visual system is composed of multiple physiological components that apply multiple mechanisms in order to cope with the rich visual content it encounters. The complexity of this system leads to non‐trivial relations between what we see and what we perceive, and in particular, between the raw intensities of an image that we display and the ones we perceive where various visual biases and illusions are introduced. In this paper, we describe a method for reducing a large class of biases related to the lateral inhibition mechanism in the human retina where neurons suppress the activity of neighbouring receptors. Among these biases are the well‐known Mach bands and halos that appear around smooth and sharp image gradients as well as the appearance of false contrasts between identical regions. The new method removes these visual biases by computing an image that contains counter biases such that when this image is viewed on a display, the inserted biases cancel the ones created in the retina. User study results confirm the usefulness of the new approach for displaying various classes of images, visualizing physical data more faithfully and improving the ability to perceive constancy in brightness.
  • Item
    A Procedural Approach to Modelling Virtual Climbing Plants With Tendrils
    (© 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Wong, Sai‐Keung; Chen, Kai‐Chun; Chen, Min and Zhang, Hao (Richard)
    Climbing plants with tendrils show search and coiling behaviour. A tendril searches for a host object and then twines around it. Subsequently, the tendril coils to pull the main stem of the climbing plant close to the host object. Furthermore, the stems may also twine around the host object. In this paper, we propose a procedural approach to incrementally constructing virtual climbing plants with tendrils that mimic such behaviour. We developed several simple rules to guide the construction process. Although our approach is not based on a physical or biological concept, it is fast and efficient in generating climbing plants with tendrils, with acceptable quality. We propose techniques that are useful for enhancing the realism of climbing plants in close‐up view.
  • Item
    A Virtual Director Using Hidden Markov Models
    (© 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Merabti, B.; Christie, M.; Bouatouch, K.; Chen, Min and Zhang, Hao (Richard)
    Automatically computing a cinematographically consistent sequence of shots over a set of actions occurring in a 3D world is a complex task which requires not only the computation of appropriate shots (viewpoints) and appropriate transitions between shots (cuts), but also the ability to encode and reproduce elements of cinematographic style. Models proposed in the literature, generally based on finite state machines or idiom‐based representations, provide limited functionality for building sequences of shots. These approaches are not designed to easily learn elements of cinematographic style, nor do they allow significant variations in style over the same sequence of actions. In this paper, we propose a model for automated cinematography that can compute significant variations in terms of cinematographic style, with the ability to control the duration of shots and the possibility to add specific constraints to the desired sequence. The model is parametrized in a way that facilitates the application of learning techniques. By using a Hidden Markov Model representation of the editing process, we demonstrate the possibility of easily reproducing elements of style extracted from real movies. Results comparing our model with state‐of‐the‐art first‐order Markovian representations illustrate these features, and the robustness of the learning technique is demonstrated through cross‐validation.
  • Item
    A Survey of Real‐Time Crowd Rendering
    (© 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Beacco, A.; Pelechano, N.; Andújar, C.; Chen, Min and Zhang, Hao (Richard)
    In this survey we review, classify and compare existing approaches for real‐time crowd rendering. We first overview character animation techniques, as they are highly tied to crowd rendering performance, and then we analyze the state of the art in crowd rendering. We discuss different representations for level‐of‐detail (LoD) rendering of animated characters, including polygon‐based, point‐based, and image‐based techniques, and review different criteria for runtime LoD selection. Besides LoD approaches, we review classic acceleration schemes, such as frustum culling and occlusion culling, and describe how they can be adapted to handle crowds of animated characters. We also discuss specific acceleration techniques for crowd rendering, such as primitive pseudo‐instancing, palette skinning, and dynamic key‐pose caching, which benefit from current graphics hardware. We also address other factors affecting performance and realism of crowds such as lighting, shadowing, clothing and variability. Finally we provide an exhaustive comparison of the most relevant approaches in the field.
  • Item
    Performance Comparison of Bounding Volume Hierarchies and Kd‐Trees for GPU Ray Tracing
    (© 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Vinkler, Marek; Havran, Vlastimil; Bittner, Jiří; Chen, Min and Zhang, Hao (Richard)
    We present a performance comparison of bounding volume hierarchies and kd‐trees for ray tracing on many‐core architectures (GPUs). The comparison is focused on rendering times and traversal characteristics on the GPU using data structures that were optimized for very high performance of tracing rays. To achieve low rendering times, we extensively examine the constants used in termination criteria for the two data structures. We show that for a contemporary GPU architecture (NVIDIA Kepler) bounding volume hierarchies have higher ray tracing performance than kd‐trees for simple and moderately complex scenes. On the other hand, kd‐trees have higher performance for complex scenes, in particular for those with high depth complexity. Finally, we analyse the causes of the performance discrepancies using the profiling characteristics of the ray tracing kernels.
  • Item
    Memory‐Efficient Interactive Online Reconstruction From Depth Image Streams
    (© 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Reichl, F.; Weiss, J.; Westermann, R.; Chen, Min and Zhang, Hao (Richard)
    We describe how the pipeline for 3D online reconstruction using commodity depth and image scanning hardware can be made scalable for large spatial extents and high scanning resolutions. Our modified pipeline requires less than 10% of the memory that is required by previous approaches at similar speed and resolution. To achieve this, we avoid storing a 3D distance field and weight map during online scene reconstruction. Instead, surface samples are binned into a high‐resolution binary voxel grid. This grid is used in combination with caching and deferred processing of depth images to reconstruct the scene geometry. For pose estimation, GPU ray‐casting is performed on the binary voxel grid. A one‐to‐one comparison to level‐set ray‐casting in a distance volume indicates slightly lower pose accuracy. To enable unlimited spatial extents and store acquired samples at the appropriate level of detail, we combine a hash map with a hierarchical tree representation.
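    A simplified, hypothetical sketch of such hash‐map‐backed binary voxel binning appears after this list.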
  • Item
    Reviewers
    (© 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Chen, Min and Zhang, Hao (Richard)
  • Item
    Visualizing Waypoints‐Constrained Origin‐Destination Patterns for Massive Transportation Data
    (© 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Zeng, W.; Fu, C.‐W.; Müller Arisona, S.; Erath, A.; Qu, H.; Chen, Min and Zhang, Hao (Richard)
    Origin‐destination (OD) patterns are a highly useful means for transportation research since they summarize urban dynamics and human mobility. However, existing visual analytics are insufficient for certain OD analytical tasks needed in transport research. For example, transport researchers are interested in path‐related movements across congested roads, besides global patterns over the entire domain. Driven by this need, we propose waypoints‐constrained OD visual analytics, a new approach for exploring path‐related OD patterns in an urban transportation network. First, we use a hashing‐based query to support interactive filtering of trajectories through user‐specified waypoints. Second, we elaborate a set of design principles and rules, and derive a novel unified visual representation by carefully considering the OD flow presentation, the temporal variation, spatial layout and user interaction. Finally, we demonstrate the effectiveness of our interface with two case studies and expert interviews with five transportation experts.
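    A simplified, hypothetical sketch of such a cell‐hashing waypoint filter appears after this list.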
  • Item
    Recognition‐Difficulty‐Aware Hidden Images Based on Clue‐Map
    (© 2016 The Eurographics Association and John Wiley & Sons Ltd., 2016) Zhao, Yandan; Du, Hui; Jin, Xiaogang; Chen, Min and Zhang, Hao (Richard)
    Hidden images contain one or several concealed foregrounds which can be recognized with the assistance of clues preserved by artists. Experienced artists are trained for years to be skilled enough to find appropriate hidden positions for a given image. However, it is not an easy task for amateurs to quickly find these positions when they try to create satisfactory hidden images. In this paper, we present an interactive framework to suggest the hidden positions and corresponding results. The suggested results generated by our approach are sequenced according to the levels of their recognition difficulties. To this end, we propose a novel approach for assessing the levels of recognition difficulty of the hidden images and a new hidden image synthesis method that takes spatial influence into account to make the foreground harmonious with the local surroundings. During the synthesis stage, we extract the characteristics of the foreground as the clues based on the visual attention model. We validate the effectiveness of our approach with two user studies, evaluating the quality of the hidden images and the suggestion accuracy.
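
The memory‐efficient reconstruction entry above (Reichl et al.) replaces a full distance field and weight map with a high‐resolution binary voxel grid into which surface samples are binned. The C++ sketch below illustrates only that binning idea in a heavily simplified form; it is not the authors' implementation, and the names (VoxelKey, BinaryVoxelGrid, voxel_size) are assumptions made for illustration.

// Illustrative sketch only: hash-map-backed binary voxel grid for binning
// back-projected surface samples. Not the authors' code; names are assumed.
#include <cmath>
#include <cstddef>
#include <unordered_set>

struct VoxelKey {
    int x, y, z;
    bool operator==(const VoxelKey& o) const { return x == o.x && y == o.y && z == o.z; }
};

struct VoxelKeyHash {
    std::size_t operator()(const VoxelKey& k) const {
        // Large-prime spatial hash, a common choice for sparse voxel sets.
        return (static_cast<std::size_t>(k.x) * 73856093u) ^
               (static_cast<std::size_t>(k.y) * 19349663u) ^
               (static_cast<std::size_t>(k.z) * 83492791u);
    }
};

class BinaryVoxelGrid {
public:
    explicit BinaryVoxelGrid(float voxel_size) : voxel_size_(voxel_size) {}

    // Mark the voxel containing a surface sample (world coordinates) as occupied.
    void insert(float px, float py, float pz) { occupied_.insert(keyFor(px, py, pz)); }

    // Occupancy test as used, e.g., by ray casting for pose estimation.
    bool isOccupied(float px, float py, float pz) const {
        return occupied_.count(keyFor(px, py, pz)) != 0;
    }

    std::size_t occupiedVoxelCount() const { return occupied_.size(); }

private:
    VoxelKey keyFor(float px, float py, float pz) const {
        return { static_cast<int>(std::floor(px / voxel_size_)),
                 static_cast<int>(std::floor(py / voxel_size_)),
                 static_cast<int>(std::floor(pz / voxel_size_)) };
    }

    float voxel_size_;
    std::unordered_set<VoxelKey, VoxelKeyHash> occupied_;
};

The actual pipeline additionally caches and defers depth images and combines the hash map with a hierarchical tree representation for unlimited spatial extents; none of that is reflected in this sketch.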
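Similarly, the waypoints‐constrained OD entry above (Zeng et al.) mentions a hashing‐based query for interactively filtering trajectories through user‐specified waypoints. The sketch below shows one plausible form of such a filter, assuming trajectories and waypoints are quantized to coarse grid cells indexed in a hash map; the cell size, the names (cellKey, buildIndex, filterByWaypoints) and the exact matching rule are assumptions, not the paper's method.

// Illustrative sketch only: cell-hashing index that keeps the trajectories
// passing through every user-specified waypoint cell. Names are assumed.
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct Point { double x, y; };
using Trajectory = std::vector<Point>;
using CellIndex = std::unordered_map<std::int64_t, std::unordered_set<std::size_t>>;

// Quantize a point to a coarse grid cell and pack the cell into one 64-bit key.
std::int64_t cellKey(const Point& p, double cell_size) {
    const std::int64_t cx = static_cast<std::int64_t>(std::floor(p.x / cell_size));
    const std::int64_t cy = static_cast<std::int64_t>(std::floor(p.y / cell_size));
    return cx * 73856093LL ^ cy * 19349663LL;  // simple spatial hash
}

// Build the cell -> trajectory-id index once; each interactive query then only
// touches the cells of the selected waypoints.
CellIndex buildIndex(const std::vector<Trajectory>& trajectories, double cell_size) {
    CellIndex index;
    for (std::size_t id = 0; id < trajectories.size(); ++id)
        for (const Point& p : trajectories[id])
            index[cellKey(p, cell_size)].insert(id);
    return index;
}

// Return the ids of trajectories that visit every waypoint cell.
std::vector<std::size_t> filterByWaypoints(const CellIndex& index,
                                           const std::vector<Point>& waypoints,
                                           double cell_size) {
    if (waypoints.empty()) return {};
    auto first = index.find(cellKey(waypoints.front(), cell_size));
    if (first == index.end()) return {};
    std::unordered_set<std::size_t> candidates = first->second;
    for (std::size_t i = 1; i < waypoints.size() && !candidates.empty(); ++i) {
        auto it = index.find(cellKey(waypoints[i], cell_size));
        if (it == index.end()) return {};
        std::unordered_set<std::size_t> kept;
        for (std::size_t id : candidates)
            if (it->second.count(id)) kept.insert(id);
        candidates.swap(kept);
    }
    return std::vector<std::size_t>(candidates.begin(), candidates.end());
}

In the actual system the filtered trajectories feed a dedicated OD visual representation with temporal and spatial encodings, which this sketch does not attempt to reproduce.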