Computer Graphics Forum, Volume 34 (2015), Issue 6


Issue Information

Articles

Supporting Focus and Context Awareness in 3D Modelling Tasks Using Multi‐Layered Displays

Masoodian, M.
Yusof, A. b. Mohd
Rogers, B.

Outdoor Human Motion Capture by Simultaneous Optimization of Pose and Camera Parameters

Elhayek, A.
Stoll, C.
Kim, K. I.
Theobalt, C.

Trivariate Biharmonic B‐Splines

Hou, Fei
Qin, Hong
Hao, Aimin

Forecast Verification and Visualization based on Gaussian Mixture Model Co‐estimation

Wang, Y. H.
Fan, C. R.
Zhang, J.
Niu, T.
Zhang, S.
Jiang, J. R.

Computing Minimum Area Homologies

Chambers, Erin Wolf
Vejdemo‐Johansson, Mikael

Accurate Computation of Single Scattering in Participating Media with Refractive Boundaries

Holzschuch, N.

Interactive Procedural Modelling of Coherent Waterfall Scenes

Emilien, Arnaud
Poulin, Pierre
Cani, Marie‐Paule
Vimont, Ulysse

A Survey on Data‐Driven Video Completion

Ilan, S.
Shamir, A.

Optimization‐Based Gradient Mesh Colour Transfer

Xiao, Yi
Wan, Liang
Leung, Chi Sing
Lai, Yu‐Kun
Wong, Tien‐Tsin

A Survey of Physically Based Simulation of Cuts in Deformable Bodies

Wu, Jun
Westermann, Rüdiger
Dick, Christian

Separable Subsurface Scattering

Jimenez, Jorge
Zsolnai, Károly
Jarabo, Adrian
Freude, Christian
Auzinger, Thomas
Wu, Xian‐Chun
der Pahlen, Javier
Wimmer, Michael
Gutierrez, Diego

Saliency‐Preserving Slicing Optimization for Effective 3D Printing

Wang, Weiming
Chao, Haiyuan
Tong, Jing
Yang, Zhouwang
Tong, Xin
Li, Hang
Liu, Xiuping
Liu, Ligang

Terrain Modelling from Feature Primitives

Génevaux, Jean‐David
Galin, Eric
Peytavie, Adrien
Guérin, Eric
Briquet, Cyril
Grosbellet, François
Benes, Bedrich

Specular Lobe‐Aware Filtering and Upsampling for Interactive Indirect Illumination

Tokuyoshi, Y.

Non‐Local Image Inpainting Using Low‐Rank Matrix Completion

Li, Wei
Zhao, Lei
Lin, Zhijie
Xu, Duanqing
Lu, Dongming

Position‐Based Skinning for Soft Articulated Characters

Abu Rumman, Nadine
Fratarcangeli, Marco

Structure‐Aware Mesh Decimation

Salinas, D.
Lafarge, F.
Alliez, P.

Shading Curves: Vector-Based Drawing With Explicit Gradient Control

Lieng, Henrik
Tasse, Flora
Kosinka, Jiří
Dodgson, Neil A.

Fast Rendering of Image Mosaics and ASCII Art

Markuš, Nenad
Fratarcangeli, Marco
Pandžić, Igor S.
Ahlberg, Jörgen

Emotion Analysis and Classification: Understanding the Performers' Emotions Using the LMA Entities

Aristidou, Andreas
Charalambous, Panayiotis
Chrysanthou, Yiorgos

AppFusion: Interactive Appearance Acquisition Using a Kinect Sensor

Wu, Hongzhi
Zhou, Kun

Convolution Filtering of Continuous Signed Distance Fields for Polygonal Meshes

Sanchez, Mathieu
Fryazinov, Oleg
Fayolle, Pierre‐Alain
Pasko, Alexander

A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception

Ruhland, K.
Peters, C. E.
Andrist, S.
Badler, J. B.
Badler, N. I.
Gleicher, M.
Mutlu, B.
McDonnell, R.
Erratum



BibTeX (34-Issue 6)
                
@article{10.1111:cgf.12738,
  journal   = {Computer Graphics Forum},
  title     = {{Issue Information}},
  author    = {},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12738}
}

@article{10.1111:cgf.12513,
  journal   = {Computer Graphics Forum},
  title     = {{Supporting Focus and Context Awareness in 3D Modelling Tasks Using Multi‐Layered Displays}},
  author    = {Masoodian, M. and Yusof, A. b. Mohd and Rogers, B.},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12513}
}

@article{10.1111:cgf.12519,
  journal   = {Computer Graphics Forum},
  title     = {{Outdoor Human Motion Capture by Simultaneous Optimization of Pose and Camera Parameters}},
  author    = {Elhayek, A. and Stoll, C. and Kim, K. I. and Theobalt, C.},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12519}
}

@article{10.1111:cgf.12516,
  journal   = {Computer Graphics Forum},
  title     = {{Trivariate Biharmonic B‐Splines}},
  author    = {Hou, Fei and Qin, Hong and Hao, Aimin},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12516}
}

@article{10.1111:cgf.12520,
  journal   = {Computer Graphics Forum},
  title     = {{Forecast Verification and Visualization based on Gaussian Mixture Model Co‐estimation}},
  author    = {Wang, Y. H. and Fan, C. R. and Zhang, J. and Niu, T. and Zhang, S. and Jiang, J. R.},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12520}
}

@article{10.1111:cgf.12514,
  journal   = {Computer Graphics Forum},
  title     = {{Computing Minimum Area Homologies}},
  author    = {Chambers, Erin Wolf and Vejdemo‐Johansson, Mikael},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12514}
}

@article{10.1111:cgf.12517,
  journal   = {Computer Graphics Forum},
  title     = {{Accurate Computation of Single Scattering in Participating Media with Refractive Boundaries}},
  author    = {Holzschuch, N.},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12517}
}

@article{10.1111:cgf.12515,
  journal   = {Computer Graphics Forum},
  title     = {{Interactive Procedural Modelling of Coherent Waterfall Scenes}},
  author    = {Emilien, Arnaud and Poulin, Pierre and Cani, Marie‐Paule and Vimont, Ulysse},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12515}
}

@article{10.1111:cgf.12518,
  journal   = {Computer Graphics Forum},
  title     = {{A Survey on Data‐Driven Video Completion}},
  author    = {Ilan, S. and Shamir, A.},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12518}
}

@article{10.1111:cgf.12524,
  journal   = {Computer Graphics Forum},
  title     = {{Optimization‐Based Gradient Mesh Colour Transfer}},
  author    = {Xiao, Yi and Wan, Liang and Leung, Chi Sing and Lai, Yu‐Kun and Wong, Tien‐Tsin},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12524}
}

@article{10.1111:cgf.12528,
  journal   = {Computer Graphics Forum},
  title     = {{A Survey of Physically Based Simulation of Cuts in Deformable Bodies}},
  author    = {Wu, Jun and Westermann, Rüdiger and Dick, Christian},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12528}
}

@article{10.1111:cgf.12529,
  journal   = {Computer Graphics Forum},
  title     = {{Separable Subsurface Scattering}},
  author    = {Jimenez, Jorge and Zsolnai, Károly and Jarabo, Adrian and Freude, Christian and Auzinger, Thomas and Wu, Xian‐Chun and der Pahlen, Javier and Wimmer, Michael and Gutierrez, Diego},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12529}
}

@article{10.1111:cgf.12527,
  journal   = {Computer Graphics Forum},
  title     = {{Saliency‐Preserving Slicing Optimization for Effective 3D Printing}},
  author    = {Wang, Weiming and Chao, Haiyuan and Tong, Jing and Yang, Zhouwang and Tong, Xin and Li, Hang and Liu, Xiuping and Liu, Ligang},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12527}
}

@article{10.1111:cgf.12530,
  journal   = {Computer Graphics Forum},
  title     = {{Terrain Modelling from Feature Primitives}},
  author    = {Génevaux, Jean‐David and Galin, Eric and Peytavie, Adrien and Guérin, Eric and Briquet, Cyril and Grosbellet, François and Benes, Bedrich},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12530}
}

@article{10.1111:cgf.12525,
  journal   = {Computer Graphics Forum},
  title     = {{Specular Lobe‐Aware Filtering and Upsampling for Interactive Indirect Illumination}},
  author    = {Tokuyoshi, Y.},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12525}
}

@article{10.1111:cgf.12521,
  journal   = {Computer Graphics Forum},
  title     = {{Non‐Local Image Inpainting Using Low‐Rank Matrix Completion}},
  author    = {Li, Wei and Zhao, Lei and Lin, Zhijie and Xu, Duanqing and Lu, Dongming},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12521}
}

@article{10.1111:cgf.12533,
  journal   = {Computer Graphics Forum},
  title     = {{Position‐Based Skinning for Soft Articulated Characters}},
  author    = {Abu Rumman, Nadine and Fratarcangeli, Marco},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12533}
}

@article{10.1111:cgf.12531,
  journal   = {Computer Graphics Forum},
  title     = {{Structure‐Aware Mesh Decimation}},
  author    = {Salinas, D. and Lafarge, F. and Alliez, P.},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12531}
}

@article{10.1111:cgf.12532,
  journal   = {Computer Graphics Forum},
  title     = {{Shading Curves: Vector-Based Drawing With Explicit Gradient Control}},
  author    = {Lieng, Henrik and Tasse, Flora and Kosinka, Jiří and Dodgson, Neil A.},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12532}
}

@article{10.1111:cgf.12597,
  journal   = {Computer Graphics Forum},
  title     = {{Fast Rendering of Image Mosaics and ASCII Art}},
  author    = {Markuš, Nenad and Fratarcangeli, Marco and Pandžić, Igor S. and Ahlberg, Jörgen},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12597}
}

@article{10.1111:cgf.12598,
  journal   = {Computer Graphics Forum},
  title     = {{Emotion Analysis and Classification: Understanding the Performers' Emotions Using the LMA Entities}},
  author    = {Aristidou, Andreas and Charalambous, Panayiotis and Chrysanthou, Yiorgos},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12598}
}

@article{10.1111:cgf.12600,
  journal   = {Computer Graphics Forum},
  title     = {{AppFusion: Interactive Appearance Acquisition Using a Kinect Sensor}},
  author    = {Wu, Hongzhi and Zhou, Kun},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12600}
}

@article{10.1111:cgf.12599,
  journal   = {Computer Graphics Forum},
  title     = {{Convolution Filtering of Continuous Signed Distance Fields for Polygonal Meshes}},
  author    = {Sanchez, Mathieu and Fryazinov, Oleg and Fayolle, Pierre‐Alain and Pasko, Alexander},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12599}
}

@article{10.1111:cgf.12603,
  journal   = {Computer Graphics Forum},
  title     = {{A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception}},
  author    = {Ruhland, K. and Peters, C. E. and Andrist, S. and Badler, J. B. and Badler, N. I. and Gleicher, M. and Mutlu, B. and McDonnell, R.},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12603}
}

@article{10.1111:cgf.12737,
  journal   = {Computer Graphics Forum},
  title     = {{Erratum}},
  author    = {},
  year      = {2015},
  publisher = {Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd.},
  DOI       = {10.1111/cgf.12737}
}

Recent Submissions
  • Item
    Issue Information
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Deussen, Oliver and Zhang, Hao (Richard)
  • Item
    Supporting Focus and Context Awareness in 3D Modelling Tasks Using Multi‐Layered Displays
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Masoodian, M.; Yusof, A. b. Mohd; Rogers, B.; Deussen, Oliver and Zhang, Hao (Richard)
    Most 3D modelling software have been developed for conventional 2D displays, and as such, lack support for true depth perception. This contributes to making polygonal 3D modelling tasks challenging, particularly when models are complex and consist of a large number of overlapping components (e.g. vertices, edges) and objects (i.e. parts). Research has shown that users of 3D modelling software often encounter a range of difficulties, which collectively can be defined as focus and context awareness problems. These include maintaining position and orientation awarenesses, as well as recognizing distance between individual components and objects in 3D spaces. In this paper, we present five visualization and interaction techniques we have developed for multi‐layered displays, to better support focus and context awareness in 3D modelling tasks. The results of a user study we conducted shows that three of these five techniques improve users' 3D modelling task performance.
  • Item
    Outdoor Human Motion Capture by Simultaneous Optimization of Pose and Camera Parameters
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Elhayek, A.; Stoll, C.; Kim, K. I.; Theobalt, C.; Deussen, Oliver and Zhang, Hao (Richard)
    We present a method for capturing the skeletal motions of humans using a sparse set of potentially moving cameras in an uncontrolled environment. Our approach is able to track multiple people even in front of cluttered and non‐static backgrounds, and unsynchronized cameras with varying image quality and frame rate. We completely rely on optical information and do not make use of additional sensor information (e.g. depth images or inertial sensors). Our algorithm simultaneously reconstructs the skeletal pose parameters of multiple performers and the motion of each camera. This is facilitated by a new energy functional that captures the alignment of the model and the camera positions with the input videos in an analytic way. The approach can be adopted in many practical applications to replace the complex and expensive motion capture studios with few consumer‐grade cameras even in uncontrolled outdoor scenes. We demonstrate this based on challenging multi‐view video sequences that are captured with unsynchronized and moving (e.g. mobile‐phone or ) cameras.
  • Item
    Trivariate Biharmonic B‐Splines
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Hou, Fei; Qin, Hong; Hao, Aimin; Deussen, Oliver and Zhang, Hao (Richard)
    In this paper, we formulate a novel trivariate biharmonic B‐spline defined over bounded volumetric domain. The properties of bi‐Laplacian have been well investigated, but the straightforward generalization from bivariate case to trivariate one gives rise to unsatisfactory discretization, due to the dramatically uneven distribution of neighbouring knots in 3D. To ameliorate, our original idea is to extend the bivariate biharmonic B‐spline to the trivariate one with novel formulations based on quadratic programming, approximating the properties of localization and partition of unity. And we design a novel discrete biharmonic operator which is optimized more robustly for a specific set of functions for unevenly sampled knots compared with previous methods. Our experiments demonstrate that our 3D discrete biharmonic operators are robust for unevenly distributed knots and illustrate that our algorithm is superior to previous algorithms.
  • Item
    Forecast Verification and Visualization based on Gaussian Mixture Model Co‐estimation
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Wang, Y. H.; Fan, C. R.; Zhang, J.; Niu, T.; Zhang, S.; Jiang, J. R.; Deussen, Oliver and Zhang, Hao (Richard)
    Precipitation forecast verification is essential to the quality of a forecast. The Gaussian mixture model (GMM) can be used to approximate the precipitation of several rain bands and provide a concise view of the data, which is especially useful for comparing forecast and observation data. The robustness of such comparison mainly depends on the consistency of and the correspondence between the extracted rain bands in the forecast and observation data. We propose a novel co‐estimation approach based on GMM in which forecast and observation data are analysed simultaneously. This approach naturally increases the consistency of and correspondence between the extracted rain bands by exploiting the similarity between both forecast and observation data. Moreover, a novel visualization and exploration framework is implemented to help the meteorologists gain insight from the forecast. The proposed approach was applied to the forecast and observation data provided by the China Meteorological Administration. The results are evaluated by meteorologists and novel insight has been gained.
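    The GMM machinery the abstract builds on can be sketched independently of the paper's co‐estimation scheme. Below is a minimal EM fit of a two‐component mixture to synthetic 2D "rain band" point clouds; `fit_gmm`, its deterministic initialization and the synthetic data are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_gmm(X, k, iters=100):
    """Minimal EM for a k-component Gaussian mixture (full covariances).

    Illustrative only: a generic GMM fit, not the paper's co-estimation.
    X: (n, d) samples. Returns (weights, means, covariances).
    """
    n, d = X.shape
    # Simple deterministic init: k samples spread evenly through the data.
    means = X[np.linspace(0, n - 1, k, dtype=int)].astype(float)
    covs = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * k)
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] ∝ weight_j * N(x_i | mu_j, C_j)
        diff = X[:, None, :] - means[None, :, :]              # (n, k, d)
        inv = np.linalg.inv(covs)                             # (k, d, d)
        maha = np.einsum('nkd,kde,nke->nk', diff, inv, diff)
        logdet = np.linalg.slogdet(covs)[1]                   # (k,)
        logp = np.log(weights) - 0.5 * (maha + logdet + d * np.log(2 * np.pi))
        logp -= logp.max(axis=1, keepdims=True)               # stabilize exp
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and covariances.
        nk = r.sum(axis=0)
        weights = nk / n
        means = (r.T @ X) / nk[:, None]
        diff = X[:, None, :] - means[None, :, :]
        covs = np.einsum('nk,nkd,nke->kde', r, diff, diff) / nk[:, None, None]
        covs += 1e-6 * np.eye(d)
    return weights, means, covs

# Two synthetic "rain bands" as 2D point clouds standing in for real fields.
rng = np.random.default_rng(1)
band1 = rng.normal([0.0, 0.0], 0.5, size=(300, 2))
band2 = rng.normal([4.0, 1.0], 0.7, size=(300, 2))
w, mu, _ = fit_gmm(np.vstack([band1, band2]), k=2)
```

    The recovered component means and weights give exactly the kind of concise per‐band summary the abstract describes comparing between forecast and observation.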
  • Item
    Computing Minimum Area Homologies
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Chambers, Erin Wolf; Vejdemo‐Johansson, Mikael; Deussen, Oliver and Zhang, Hao (Richard)
    Calculating and categorizing the similarity of curves is a fundamental problem which has generated much recent interest. However, to date there are no implementations of these algorithms for curves on surfaces with provable guarantees on the quality of the measure. In this paper, we present a similarity measure for any two cycles that are homologous, where we calculate the minimum area of any homology (or connected bounding chain) between the two cycles. The minimum area homology exists for broader classes of cycles than previous measures which are based on homotopy. It is also much easier to compute than previously defined measures, yielding an efficient implementation that is based on linear algebra tools. We demonstrate our algorithm on a range of inputs, showing examples which highlight the feasibility of this similarity measure.
  • Item
    Accurate Computation of Single Scattering in Participating Media with Refractive Boundaries
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Holzschuch, N.; Deussen, Oliver and Zhang, Hao (Richard)
    Volume caustics are high‐frequency effects appearing in participating media with low opacity, when refractive interfaces are focusing the light rays. Refractions make them hard to compute, since screen locality does not correlate with spatial locality in the medium. In this paper, we give a new method for accurate computation of single scattering effects in a participating media enclosed by refractive interfaces. Our algorithm is based on the observation that although radiance along each camera ray is irregular, contributions from individual triangles are smooth. Our method gives more accurate results than existing methods, faster. It uses minimal information and requires no pre‐computation or additional data structures.
  • Item
    Interactive Procedural Modelling of Coherent Waterfall Scenes
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Emilien, Arnaud; Poulin, Pierre; Cani, Marie‐Paule; Vimont, Ulysse; Deussen, Oliver and Zhang, Hao (Richard)
    Combining procedural generation and user control is a fundamental challenge for the interactive design of natural scenery. This is particularly true for modelling complex waterfall scenes where, in addition to taking charge of geometric details, an ideal tool should also provide a user with the freedom to shape the running streams and falls, while automatically maintaining physical plausibility in terms of flow network, embedding into the terrain, and visual aspects of the waterfalls. We present the first solution for the interactive procedural design of coherent waterfall scenes. Our system combines vectorial editing, where the user assembles elements to create a waterfall network over an existing terrain, with a procedural model that parametrizes these elements from hydraulic exchanges; enforces consistency between the terrain and the flow; and generates detailed geometry, animated textures and shaders for the waterfalls and their surroundings. The tool is interactive, yielding visual feedback after each edit.
  • Item
    A Survey on Data‐Driven Video Completion
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Ilan, S.; Shamir, A.; Deussen, Oliver and Zhang, Hao (Richard)
    Image completion techniques aim to complete selected regions of an image in a natural looking manner with little or no user interaction. Video Completion, the space–time equivalent of the image completion problem, inherits and extends both the difficulties and the solutions of the original 2D problem, but also imposes new ones—mainly temporal coherency and space complexity (videos contain significantly more information than images). Data‐driven approaches to completion have been established as a favoured choice, especially when large regions have to be filled. In this survey, we present the current state of the art in data‐driven video completion techniques. For unacquainted researchers, we aim to provide a broad yet easy to follow introduction to the subject (including an extensive review of the image completion foundations) and early guidance to the challenges ahead. For a versed reader, we offer a comprehensive review of the contemporary techniques, sectioned out by their approaches to key aspects of the problem.
  • Item
    Optimization‐Based Gradient Mesh Colour Transfer
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Xiao, Yi; Wan, Liang; Leung, Chi Sing; Lai, Yu‐Kun; Wong, Tien‐Tsin; Deussen, Oliver and Zhang, Hao (Richard)
    In vector graphics, gradient meshes represent an image object by one or more regularly connected grids. Every grid point has attributes as the position, colour and gradients of these quantities specified. Editing the attributes of an existing gradient mesh (such as the colour gradients) is not only non‐intuitive but also time‐consuming. To facilitate user‐friendly colour editing, we develop an optimization‐based colour transfer method for gradient meshes. The key idea is built on the fact that we can approximate a colour transfer operation on gradient meshes with a linear transfer function. In this paper, we formulate the approximation as an optimization problem, which aims to minimize the colour distribution of the example image and the transferred gradient mesh. By adding proper constraints, i.e. image gradients, to the optimization problem, the details of the gradient meshes can be better preserved. With the linear transfer function, we are able to edit the of the mesh points automatically, while preserving the structure of the gradient mesh. The experimental results show that our method can generate pleasing recoloured gradient meshes.
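    What a linear colour transfer function can achieve is easy to illustrate outside the gradient-mesh setting. The sketch below uses the classic mean‐and‐covariance matching map on plain colour arrays as a stand‐in for, not a reimplementation of, the paper's optimization; the function name and the synthetic data are assumptions:

```python
import numpy as np

def linear_colour_transfer(src, ref, eps=1e-8):
    """Map src colours (n, 3) so their mean/covariance match ref (m, 3).

    A standard Monge-Kantorovich-style linear map; a stand-in for the
    paper's optimization-based transfer, not a reimplementation of it.
    """
    mu_s, mu_r = src.mean(0), ref.mean(0)
    cov_s = np.cov(src.T) + eps * np.eye(3)
    cov_r = np.cov(ref.T) + eps * np.eye(3)

    def sqrtm(C):
        # Symmetric PSD matrix square root via eigendecomposition.
        w, V = np.linalg.eigh(C)
        return (V * np.sqrt(np.clip(w, 0, None))) @ V.T

    cs_half = sqrtm(cov_s)
    cs_half_inv = np.linalg.inv(cs_half)
    # T = Cs^{-1/2} (Cs^{1/2} Cr Cs^{1/2})^{1/2} Cs^{-1/2} satisfies T Cs T = Cr.
    T = cs_half_inv @ sqrtm(cs_half @ cov_r @ cs_half) @ cs_half_inv
    return (src - mu_s) @ T.T + mu_r

# Synthetic "source" and "example" colour distributions in RGB.
rng = np.random.default_rng(0)
src = rng.normal([0.2, 0.3, 0.4], [0.05, 0.1, 0.05], size=(1000, 3))
ref = rng.normal([0.6, 0.4, 0.2], [0.1, 0.05, 0.1], size=(1000, 3))
out = linear_colour_transfer(src, ref)
```

    Because the map is a single matrix plus an offset, it can be applied uniformly to all mesh attributes, which is what lets a linear transfer preserve the structure of a gradient mesh while recolouring it.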
  • Item
    A Survey of Physically Based Simulation of Cuts in Deformable Bodies
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Wu, Jun; Westermann, Rüdiger; Dick, Christian; Deussen, Oliver and Zhang, Hao (Richard)
    Virtual cutting of deformable bodies has been an important and active research topic in physically based modelling and simulation for more than a decade. A particular challenge in virtual cutting is the robust and efficient incorporation of cuts into an accurate computational model that is used for the simulation of the deformable body. This report presents a coherent summary of the state of the art in virtual cutting of deformable bodies, focusing on the distinct geometrical and topological representations of the deformable body, as well as the specific numerical discretizations of the governing equations of motion. In particular, we discuss virtual cutting based on tetrahedral, hexahedral and polyhedral meshes, in combination with standard, polyhedral, composite and extended finite element discretizations. A separate section is devoted to meshfree methods. Furthermore, we discuss cutting‐related research problems such as collision detection and haptic rendering in the context of interactive cutting scenarios. The report is complemented with an application study to assess the performance of virtual cutting simulators.
  • Item
    Separable Subsurface Scattering
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Jimenez, Jorge; Zsolnai, Károly; Jarabo, Adrian; Freude, Christian; Auzinger, Thomas; Wu, Xian‐Chun; der Pahlen, Javier; Wimmer, Michael; Gutierrez, Diego; Deussen, Oliver and Zhang, Hao (Richard)
    In this paper, we propose two real‐time models for simulating subsurface scattering for a large variety of translucent materials, which need under 0.5 ms per frame to execute. This makes them a practical option for real‐time production scenarios. Current state‐of‐the‐art, real‐time approaches simulate subsurface light transport by approximating the radially symmetric non‐separable diffusion kernel with a sum of separable Gaussians, which requires multiple (up to 12) 1D convolutions. In this work we relax the requirement of radial symmetry to approximate a 2D diffuse reflectance profile by a single separable kernel. We first show that low‐rank approximations based on matrix factorization outperform previous approaches, but they still need several passes to get good results. To solve this, we present two different separable models: the first one yields a high‐quality diffusion simulation, while the second one offers an attractive trade‐off between physical accuracy and artistic control. Both allow rendering of subsurface scattering using only two 1D convolutions, reducing both execution time and memory consumption, while delivering results comparable to techniques with higher cost. Using our importance‐sampling and jittering strategies, only seven samples per pixel are required. Our methods can be implemented as simple post‐processing steps without intrusive changes to existing rendering pipelines.
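The separable-kernel idea above can be illustrated with a small sketch (this is not the paper's shader code or its measured diffusion profiles; a Gaussian stands in for the reflectance profile, and all names here are illustrative): a 2D kernel is factored by SVD and applied as one vertical plus one horizontal 1D convolution. Because a Gaussian is exactly separable, the rank-1 factorization reproduces the full 2D filter.

```python
import numpy as np
from scipy.signal import convolve2d

# Radially symmetric 2D kernel (Gaussian stand-in for a diffusion profile).
x = np.arange(-6, 7)
g = np.exp(-x**2 / (2 * 2.0**2))
K = np.outer(g, g)
K /= K.sum()

# Rank-1 separable approximation via SVD: K ~ s0 * u0 @ v0^T, so the 2D
# filter becomes one vertical and one horizontal 1D convolution.
U, S, Vt = np.linalg.svd(K)
col = U[:, 0] * np.sqrt(S[0])   # vertical 1D kernel
row = Vt[0, :] * np.sqrt(S[0])  # horizontal 1D kernel

img = np.random.rand(64, 64)
full = convolve2d(img, K, mode="same")                       # one 2D pass
sep = convolve2d(convolve2d(img, col[:, None], mode="same"),  # two 1D passes
                 row[None, :], mode="same")

print(np.max(np.abs(full - sep)))  # negligible: the Gaussian is separable
```

For a non-separable profile the truncated SVD gives the best rank-1 approximation in the least-squares sense, which is the starting point the abstract describes.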
  • Item
    Saliency‐Preserving Slicing Optimization for Effective 3D Printing
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Wang, Weiming; Chao, Haiyuan; Tong, Jing; Yang, Zhouwang; Tong, Xin; Li, Hang; Liu, Xiuping; Liu, Ligang; Deussen, Oliver and Zhang, Hao (Richard)
    We present an adaptive slicing scheme for reducing the manufacturing time for 3D printing systems. Based on a new saliency‐based metric, our method optimizes the thicknesses of slicing layers to save printing time and preserve the visual quality of the printing results. We formulate the problem as a constrained ℓ0 optimization and compute the slicing result via a two‐step optimization scheme. To further reduce printing time, we develop a saliency‐based segmentation scheme to partition an object into subparts and then optimize the slicing of each subpart separately. We validate our method with a large set of 3D shapes ranging from CAD models to scanned objects. Results show that our method saves printing time by 30–40% and generates 3D objects that are visually similar to the ones printed with the finest resolution possible.
  • Item
    Terrain Modelling from Feature Primitives
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Génevaux, Jean‐David; Galin, Eric; Peytavie, Adrien; Guérin, Eric; Briquet, Cyril; Grosbellet, François; Benes, Bedrich; Deussen, Oliver and Zhang, Hao (Richard)
    We introduce a compact hierarchical procedural model that combines feature‐based primitives to describe complex terrains with varying level of detail. Our model is inspired by skeletal implicit surfaces and defines the terrain elevation function by using a construction tree. Leaves represent terrain features and they are generic parametrized skeletal primitives, such as mountains, ridges, valleys, rivers, lakes or roads. Inner nodes combine the leaves and subtrees by carving, blending or warping operators. The elevation of the terrain at a given point is evaluated by traversing the tree and by combining the contributions of the primitives. The definition of the tree leaves and operators guarantees that the resulting elevation function is Lipschitz, which speeds up the sphere tracing used to render the terrain. Our model is compact and allows for the creation of large terrains with a high level of detail using a reduced set of primitives. We show the creation of different kinds of landscapes and demonstrate that our model allows efficient control of the shape and distribution of landform features.
  • Item
    Specular Lobe‐Aware Filtering and Upsampling for Interactive Indirect Illumination
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Tokuyoshi, Y.; Deussen, Oliver and Zhang, Hao (Richard)
    Although geometry‐aware filtering and upsampling have often been used for interactive or real‐time rendering, they are unsuitable for glossy surfaces because shading results strongly depend on the bidirectional reflectance distribution functions. This paper proposes a novel weighting function of cross bilateral filtering and upsampling to measure the similarity of specular lobes. The difficulty is that a specular lobe is represented with a distribution function in directional space, whereas conventional cross bilateral filtering evaluates similarities using the distance between two points in a Euclidean space. Therefore, this paper first generalizes cross bilateral filtering for the similarity of distribution functions in a non‐Euclidean space. Then, the weighting function is specialized for specular lobes. Our key insight is that the weighting function of bilateral filtering can be represented with the product integral of two distribution functions corresponding to two pixels. In addition, we propose spherical Gaussian‐based approximations to calculate this weighting function analytically. Our weighting function detects the edges of glossiness, and adapts to all‐frequency materials using only a camera position and G‐buffer. These features are not only suitable for path tracing, but also deferred shading and non‐ray tracing–based methods such as voxel cone tracing.
  • Item
    Non‐Local Image Inpainting Using Low‐Rank Matrix Completion
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Li, Wei; Zhao, Lei; Lin, Zhijie; Xu, Duanqing; Lu, Dongming; Deussen, Oliver and Zhang, Hao (Richard)
    In this paper, we propose a highly accurate inpainting algorithm which reconstructs an image from a fraction of its pixels. Our algorithm is inspired by the recent progress of non‐local image processing techniques following the idea of ‘grouping and collaborative filtering.’ In our framework, we first match and group similar patches in the input image, then convert the problem of estimating missing values for the stack of matched patches to the problem of low‐rank matrix completion and finally obtain the result by synthesizing all the restored patches. In our algorithm, accurately performing the patch matching process and solving the low‐rank matrix completion problem are the key points. For the first problem, we propose a robust patch matching approach, and for the second task, the alternating direction method of multipliers is employed. Experiments show that our algorithm offers clear advantages over existing inpainting techniques. Besides, our algorithm can be easily extended to handle practical applications including rendering acceleration, photo restoration and object removal.
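To make the low-rank completion step concrete, here is a minimal sketch using the simpler soft-impute (singular-value-thresholding) iteration rather than the paper's ADMM solver; the matrix sizes, `tau` and `iters` values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def complete_low_rank(M, mask, tau=0.1, iters=300):
    """Fill the missing entries of M (where mask is False) by iteratively
    shrinking singular values, then re-imposing the known entries."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # soft-threshold: push low rank
        X[mask] = M[mask]                        # keep observed pixels fixed
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank-2 truth
mask = rng.random(M.shape) < 0.7        # ~70% of entries observed
X = complete_low_rank(M, mask)
err = np.linalg.norm((~mask) * (X - M)) / np.linalg.norm((~mask) * M)
print(err)  # small relative error on the unobserved entries
```

In the paper's pipeline, `M` would be the stack of vectorized matched patches (similar patches make the stack approximately low rank), and missing pixels are the unknowns.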
  • Item
    Position‐Based Skinning for Soft Articulated Characters
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Abu Rumman, Nadine; Fratarcangeli, Marco; Deussen, Oliver and Zhang, Hao (Richard)
    In this paper, we introduce a two‐layered approach addressing the problem of creating believable mesh‐based skin deformation. For each frame, the skin is first deformed with a classic linear blend skinning approach, which usually leads to unsightly artefacts such as the well‐known candy‐wrapper effect and volume loss. Then we enforce some geometric constraints which displace the positions of the vertices to mimic the behaviour of the skin and achieve effects like volume preservation and jiggling. We allow the artist to control the amount of jiggling and the area of the skin affected by it. The geometric constraints are solved using a position‐based dynamics (PBDs) schema. We employ a graph colouring algorithm for parallelizing the computation of the constraints. Being based on PBDs guarantees efficiency and real‐time performance while ensuring robustness and unconditional stability. We demonstrate the visual quality and the performance of our approach with a variety of skeleton‐driven soft body characters.
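The graph-colouring step can be sketched briefly: constraints that share a particle are adjacent in a conflict graph and must receive different colours, so all constraints within one colour class can be projected in parallel without data races. A minimal greedy sketch (an assumption-level illustration, not the paper's implementation):

```python
from collections import defaultdict

def greedy_colouring(edges):
    """Assign each constraint (graph vertex) the smallest colour not used
    by any already-coloured neighbour."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    colour = {}
    for v in sorted(adj):                 # fixed order for determinism
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

# Constraints 0-3; an edge means two constraints touch the same particle.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
print(greedy_colouring(edges))
```

Each Gauss-Seidel-style PBD iteration then loops over colour classes sequentially but projects every constraint inside a class concurrently.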
  • Item
    Structure‐Aware Mesh Decimation
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Salinas, D.; Lafarge, F.; Alliez, P.; Deussen, Oliver and Zhang, Hao (Richard)
    We present a novel approach for the decimation of triangle surface meshes. Our algorithm takes as input a triangle surface mesh and a set of planar proxies detected in a pre‐processing analysis step, and structured via an adjacency graph. It then performs greedy mesh decimation through a series of edge collapses, designed to approximate the local mesh geometry as well as the geometry and structure of proxies. Such a structure‐preserving approach is well suited to planar abstraction, i.e. extreme decimation approximating well the planar parts while filtering out the others. Our experiments on a variety of inputs illustrate the potential of our approach in terms of improved accuracy and preservation of structure.
  • Item
    Shading Curves: Vector-Based Drawing With Explicit Gradient Control
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Lieng, Henrik; Tasse, Flora; Kosinka, Jiří; Dodgson, Neil A.; Deussen, Oliver and Zhang, Hao (Richard)
    A challenge in vector graphics is to define primitives that offer flexible manipulation of colour gradients. We propose a new primitive, called a shading curve, that supports explicit and local gradient control. This is achieved by associating shading profiles to each side of the curve. These shading profiles, which can be manually manipulated, represent the colour gradient out from their associated curves. Such explicit and local gradient control is challenging to achieve via the diffusion curve process, introduced in 2008, because it offers only implicit control of the colour gradient. We resolve this problem by using subdivision surfaces that are constructed from shading curves and their shading profiles.
  • Item
    Fast Rendering of Image Mosaics and ASCII Art
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Markuš, Nenad; Fratarcangeli, Marco; Pandžić, Igor S.; Ahlberg, Jörgen; Deussen, Oliver and Zhang, Hao (Richard)
    An image mosaic is an assembly of a large number of small images, usually called tiles, taken from a specific dictionary/codebook. When viewed as a whole, the appearance of a single large image emerges, i.e. each tile approximates a small block of pixels. ASCII art is a related (and older) graphic design technique for producing images from printable characters. Although automatic procedures for both of these visualization schemes have been studied in the past, some are computationally heavy and cannot offer real‐time and interactive performance. We propose an algorithm able to reproduce the quality of existing non‐photorealistic rendering techniques, in particular ASCII art and image mosaics, obtaining large performance speed‐ups. The basic idea is to partition the input image into a rectangular grid and use a decision tree to assign a tile from a pre‐determined codebook to each cell. Our implementation can process video streams from webcams in real time and it is suitable for modestly equipped devices. We evaluate our technique by generating the renderings of a variety of images and videos, with good results. The source code of our engine is publicly available.
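A minimal stand-in for the grid-partition step (substituting a mean-brightness lookup for the paper's trained decision tree; the character ramp below is an illustrative codebook, not theirs):

```python
import numpy as np

RAMP = " .:-=+*#%@"  # dark-to-bright character ramp (illustrative codebook)

def ascii_art(img, cell=8):
    """Map each cell-by-cell block of a grey image in [0, 1] to the ramp
    character whose brightness matches the block mean."""
    h, w = img.shape
    lines = []
    for y in range(0, h - cell + 1, cell):
        line = "".join(
            RAMP[int(img[y:y + cell, x:x + cell].mean() * (len(RAMP) - 1))]
            for x in range(0, w - cell + 1, cell)
        )
        lines.append(line)
    return "\n".join(lines)

grad = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # left-dark, right-bright
print(ascii_art(grad))
```

The paper's speed-up comes from replacing this per-cell search/lookup with a shallow decision tree evaluated on raw pixel values, which is cheap enough for webcam-rate video.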
  • Item
    Emotion Analysis and Classification: Understanding the Performers' Emotions Using the LMA Entities
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Aristidou, Andreas; Charalambous, Panayiotis; Chrysanthou, Yiorgos; Deussen, Oliver and Zhang, Hao (Richard)
    The increasing availability of large motion databases, in addition to advancements in motion synthesis, has made motion indexing and classification essential for better motion composition. However, in order to achieve good connectivity in motion graphs, it is important to understand human behaviour; human movement though is complex and difficult to completely describe. In this paper, we investigate the similarities between various emotional states with regards to the arousal and valence of the Russell's circumplex model. We use a variety of features that encode, in addition to the raw geometry, stylistic characteristics of motion based on Laban Movement Analysis (LMA). Motion capture data from acted dance performances were used for training and classification purposes. The experimental results show that the proposed features can partially extract the LMA components, providing a representative space for indexing and classification of dance movements with regards to the emotion. This work contributes to the understanding of human behaviour and actions, providing insights on how people express emotional states using their body, while the proposed features can be used as complement to the standard motion similarity, synthesis and classification methods.
  • Item
    AppFusion: Interactive Appearance Acquisition Using a Kinect Sensor
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Wu, Hongzhi; Zhou, Kun; Deussen, Oliver and Zhang, Hao (Richard)
    We present an interactive material acquisition system for average users to capture the spatially varying appearance of daily objects. While an object is being scanned, our system estimates its appearance on‐the‐fly and provides quick visual feedback. We build the system entirely on low‐end, off‐the‐shelf components: a Kinect sensor, a mirror ball and printed markers. We exploit the Kinect infra‐red emitter/receiver, originally designed for depth computation, as an active hand‐held reflectometer, to segment the object into clusters of similar specular materials and estimate the roughness parameters of BRDFs simultaneously. Next, the diffuse albedo and specular intensity of the spatially varying materials are rapidly computed in an inverse rendering framework, using data from the Kinect RGB camera. We demonstrate captured results of a range of materials, and physically validate our system.
  • Item
    Convolution Filtering of Continuous Signed Distance Fields for Polygonal Meshes
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Sanchez, Mathieu; Fryazinov, Oleg; Fayolle, Pierre‐Alain; Pasko, Alexander; Deussen, Oliver and Zhang, Hao (Richard)
    Signed distance fields obtained from polygonal meshes are commonly used in various applications. However, they can have discontinuities causing creases to appear when applying operations such as blending or metamorphosis. The focus of this work is to efficiently evaluate the signed distance function and to apply a smoothing filter to it while preserving the shape of the initial mesh. The resulting function is smooth almost everywhere, while preserving the exact shape of the polygonal mesh. Due to its low complexity, the proposed filtering technique remains fast compared to its main alternatives providing C¹‐continuous distance field approximation. Several applications are presented such as blending, metamorphosis and heterogeneous modelling with polygonal meshes.
  • Item
    A Review of Eye Gaze in Virtual Agents, Social Robotics and HCI: Behaviour Generation, User Interaction and Perception
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Ruhland, K.; Peters, C. E.; Andrist, S.; Badler, J. B.; Badler, N. I.; Gleicher, M.; Mutlu, B.; McDonnell, R.; Deussen, Oliver and Zhang, Hao (Richard)
    A person's emotions and state of mind are apparent in their face and eyes. As a Latin proverb states: ‘The face is the portrait of the mind; the eyes, its informers’. This presents a significant challenge for Computer Graphics researchers who generate artificial entities that aim to replicate the movement and appearance of the human eye, which is so important in human–human interactions. This review article provides an overview of the efforts made on tackling this demanding task. As with many topics in computer graphics, a cross‐disciplinary approach is required to fully understand the workings of the eye in the transmission of information to the user. We begin with a discussion of the movement of the eyeballs, eyelids and the head from a physiological perspective and how these movements can be modelled, rendered and animated in computer graphics applications. Furthermore, we present recent research from psychology and sociology that seeks to understand higher level behaviours, such as attention and eye gaze, during the expression of emotion or during conversation. We discuss how these findings are synthesized in computer graphics and can be utilized in the domains of Human–Robot Interaction and Human–Computer Interaction for allowing humans to interact with virtual agents and other artificial entities. We conclude with a summary of guidelines for animating the eye and head from the perspective of a character animator.
  • Item
    Erratum
    (Copyright © 2015 The Eurographics Association and John Wiley & Sons Ltd., 2015) Deussen, Oliver and Zhang, Hao (Richard)