36-Issue 2


Art, Design, and Sketching
Computational Light Painting Using a Virtual Exposure
Nestor Z. Salamon, Marcel Lancelle, and Elmar Eisemann
Monte Carlo
Unbiased Light Transport Estimators for Inhomogeneous Participating Media
László Szirmay-Kalos, Iliyan Georgiev, Milán Magdics, Balázs Molnár, and Dávid Légrády
Multiple Vertex Next Event Estimation for Lighting in dense, forward-scattering Media
Pascal Weber, Johannes Hanika, and Carsten Dachsbacher
Gradient-Domain Photon Density Estimation
Binh-Son Hua, Adrien Gruson, Derek Nowrouzezahrai, and Toshiya Hachisuka
Procedural and Interactive Nature
Design Transformations for Rule-based Procedural Modeling
Stefan Lienhard, Cheryl Lau, Pascal Müller, Peter Wonka, and Mark Pauly
Interactive Modeling and Authoring of Climbing Plants
Torsten Hädrich, Bedrich Benes, Oliver Deussen, and Sören Pirk
EcoBrush: Interactive Control of Visually Consistent Large-Scale Ecosystems
James Gain, Harry Long, Guillaume Cordonnier, and Marie-Paule Cani
Rigging, Tearing, and Faces
Enriching Facial Blendshape Rigs with Physical Simulation
Yeara Kozlov, Derek Bradley, Moritz Bächer, Bernhard Thomaszewski, Thabo Beeler, and Markus Gross
Sparse Rig Parameter Optimization for Character Animation
Jaewon Song, Roger Blanco i Ribera, Kyungmin Cho, Mi You, J. P. Lewis, Byungkuk Choi, and Junyong Noh
Interactive Paper Tearing
Camille Schreck, Damien Rohmer, and Stefanie Hahmann
Sample, Paint, and Visualize
General Point Sampling with Adaptive Density and Correlations
Riccardo Roveri, A. Cengiz Öztireli, and Markus Gross
Morphing and Interaction
Character-Object Interaction Retrieval Using the Interaction Bisector Surface
Xi Zhao, Myung Geol Choi, and Taku Komura
kDet: Parallel Constant Time Collision Detection for Polygonal Objects
René Weller, Nicole Debowski, and Gabriel Zachmann
Flowing Visualization
Flow-Induced Inertial Steady Vector Field Topology
Tobias Günther and Markus Gross
Decoupled Opacity Optimization for Points, Lines and Surfaces
Tobias Günther, Holger Theisel, and Markus Gross
Geometry Processing
Diffusion Diagrams: Voronoi Cells and Centroids from Diffusion
Philipp Herholz, Felix Haase, and Marc Alexa
Textures
Texture Stationarization: Turning Photos into Tileable Textures
Joep Moritz, Stuart James, Tom S. F. Haines, Tobias Ritschel, and Tim Weyrich
A Subjective Evaluation of Texture Synthesis Methods
Martin Kolár, Kurt Debattista, and Alan Chalmers
Analysis and Controlled Synthesis of Inhomogeneous Textures
Yang Zhou, Huajie Shi, Dani Lischinski, Minglun Gong, Johannes Kopf, and Hui Huang
Procedural
ShapeGenetics: Using Genetic Algorithms for Procedural Modeling
Karl Haubenwallner, Hans-Peter Seidel, and Markus Steinberger
On Realism of Architectural Procedural Models
Jan Beneš, Tom Kelly, Filip Děchtěrenko, Jaroslav Křivánek, and Pascal Müller
Animation 1
Geometric Stiffness for Real-time Constrained Multibody Dynamics
Sheldon Andrews, Marek Teichmann, and Paul G. Kry
Shape Matching
Fully Spectral Partial Shape Matching
Or Litany, Emanuele Rodolà, Alex M. Bronstein, and Michael M. Bronstein
Informative Descriptor Preservation via Commutativity for Shape Matching
Dorian Nogneng and Maks Ovsjanikov
Physics in Animation
DeepGarment: 3D Garment Shape Estimation from a Single Image
Radek Danerek, Endri Dibra, A. Cengiz Öztireli, Remo Ziegler, and Markus Gross
Simulation-Ready Hair Capture
Liwen Hu, Derek Bradley, Hao Li, and Thabo Beeler
Capturing Faces
Multi-View Stereo on Consistent Face Topology
Graham Fyffe, Koki Nagano, Loc Huynh, Shunsuke Saito, Jay Busch, Andrew Jones, Hao Li, and Paul Debevec
Makeup Lamps: Live Augmentation of Human Faces via Projection
Amit Haim Bermano, Markus Billeter, Daisuke Iwai, and Anselm Grundhöfer
Real-Time Multi-View Facial Capture with Synthetic Training
Martin Klaudiny, Steven McDonagh, Derek Bradley, Thabo Beeler, and Kenny Mitchell
Animation 2
Gradient-based Steering for Vision-based Crowd Simulation Algorithms
Teofilo B. Dutra, Ricardo Marques, Joaquim Bento Cavalcante-Neto, Creto A. Vidal, and Julien Pettré
Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs
Timo von Marcard, Bodo Rosenhahn, Michael J. Black, and Gerard Pons-Moll
Reconstruct, Learn, and Transport Geometry
Learning Detail Transfer based on Geometric Features
Sema Berkiten, Maciej Halber, Justin Solomon, Chongyang Ma, Hao Li, and Szymon Rusinkiewicz
Chamber Recognition in Cave Data Sets
Nico Schertler, Manfred Buchroithner, and Stefan Gumhold
Camera: Depth to Motion, Lens and Filters
Performance-Based Biped Control using a Consumer Depth Camera
Yoonsang Lee and Taesoo Kwon
Consistent Video Filtering for Camera Arrays
Nicolas Bonneel, James Tompkin, Deqing Sun, Oliver Wang, Kalyan Sunkavalli, Sylvain Paris, and Hanspeter Pfister
Apparent Materials
Practical Capture and Reproduction of Phosphorescent Appearance
Oliver Nalbach, Hans-Peter Seidel, and Tobias Ritschel
STD: Student's t-Distribution of Slopes for Microfacet Based BSDFs
Mickael Ribardière, Benjamin Bringier, Daniel Meneveaux, and Lionel Simonot
Hybrid Mesh-volume LoDs for All-scale Pre-filtering of Complex 3D Assets
Guillaume Loubet and Fabrice Neyret
Spatial Adjacency Maps for Translucency Simulation under General Illumination
Sebastian Maisch and Timo Ropinski
Focus and Virtual Environments
Zooming on all Actors: Automatic Focus+Context Split Screen Video Generation
Moneish Kumar, Vineet Gandhi, Rémi Ronfard, and Michael Gleicher
Flicker Observer Effect: Guiding Attention Through High Frequency Flicker in Images
Nicholas Waldin, Manuela Waldner, and Ivan Viola
GPU and Data Structures
GPU Ray Tracing using Irregular Grids
Arsène Pérard-Gayot, Javor Kalojanov, and Philipp Slusallek
Parallel BVH Construction using Progressive Hierarchical Refinement
Jakub Hendrich, Daniel Meister, and Jiří Bittner
A GPU-Adapted Structure for Unstructured Grids
Rhaleb Zayer, Markus Steinberger, and Hans-Peter Seidel

BibTeX (36-Issue 2)
                
@article{
10.1111:cgf.13101,
journal = {Computer Graphics Forum}, title = {{
Computational Light Painting Using a Virtual Exposure}},
author = {
Salamon, Nestor Z.
and
Lancelle, Marcel
and
Eisemann, Elmar
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13101}
}
                
@article{
10.1111:cgf.13102,
journal = {Computer Graphics Forum}, title = {{
Unbiased Light Transport Estimators for Inhomogeneous Participating Media}},
author = {
Szirmay-Kalos, László
and
Georgiev, Iliyan
and
Magdics, Milán
and
Molnár, Balázs
and
Légrády, Dávid
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13102}
}
                
@article{
10.1111:cgf.13103,
journal = {Computer Graphics Forum}, title = {{
Multiple Vertex Next Event Estimation for Lighting in dense, forward-scattering Media}},
author = {
Weber, Pascal
and
Hanika, Johannes
and
Dachsbacher, Carsten
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13103}
}
                
@article{
10.1111:cgf.13104,
journal = {Computer Graphics Forum}, title = {{
Gradient-Domain Photon Density Estimation}},
author = {
Hua, Binh-Son
and
Gruson, Adrien
and
Nowrouzezahrai, Derek
and
Hachisuka, Toshiya
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13104}
}
                
@article{
10.1111:cgf.13105,
journal = {Computer Graphics Forum}, title = {{
Design Transformations for Rule-based Procedural Modeling}},
author = {
Lienhard, Stefan
and
Lau, Cheryl
and
Müller, Pascal
and
Wonka, Peter
and
Pauly, Mark
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13105}
}
                
@article{
10.1111:cgf.13106,
journal = {Computer Graphics Forum}, title = {{
Interactive Modeling and Authoring of Climbing Plants}},
author = {
Hädrich, Torsten
and
Benes, Bedrich
and
Deussen, Oliver
and
Pirk, Sören
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13106}
}
                
@article{
10.1111:cgf.13107,
journal = {Computer Graphics Forum}, title = {{
EcoBrush: Interactive Control of Visually Consistent Large-Scale Ecosystems}},
author = {
Gain, James
and
Long, Harry
and
Cordonnier, Guillaume
and
Cani, Marie-Paule
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13107}
}
                
@article{
10.1111:cgf.13108,
journal = {Computer Graphics Forum}, title = {{
Enriching Facial Blendshape Rigs with Physical Simulation}},
author = {
Kozlov, Yeara
and
Bradley, Derek
and
Bächer, Moritz
and
Thomaszewski, Bernhard
and
Beeler, Thabo
and
Gross, Markus
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13108}
}
                
@article{
10.1111:cgf.13109,
journal = {Computer Graphics Forum}, title = {{
Sparse Rig Parameter Optimization for Character Animation}},
author = {
Song, Jaewon
and
Ribera, Roger Blanco i
and
Cho, Kyungmin
and
You, Mi
and
Lewis, J. P.
and
Choi, Byungkuk
and
Noh, Junyong
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13109}
}
                
@article{
10.1111:cgf.13110,
journal = {Computer Graphics Forum}, title = {{
Interactive Paper Tearing}},
author = {
Schreck, Camille
and
Rohmer, Damien
and
Hahmann, Stefanie
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13110}
}
                
@article{
10.1111:cgf.13111,
journal = {Computer Graphics Forum}, title = {{
General Point Sampling with Adaptive Density and Correlations}},
author = {
Roveri, Riccardo
and
Öztireli, A. Cengiz
and
Gross, Markus
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13111}
}
                
@article{
10.1111:cgf.13112,
journal = {Computer Graphics Forum}, title = {{
Character-Object Interaction Retrieval Using the Interaction Bisector Surface}},
author = {
Zhao, Xi
and
Choi, Myung Geol
and
Komura, Taku
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13112}
}
                
@article{
10.1111:cgf.13113,
journal = {Computer Graphics Forum}, title = {{
kDet: Parallel Constant Time Collision Detection for Polygonal Objects}},
author = {
Weller, René
and
Debowski, Nicole
and
Zachmann, Gabriel
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13113}
}
                
@article{
10.1111:cgf.13114,
journal = {Computer Graphics Forum}, title = {{
Flow-Induced Inertial Steady Vector Field Topology}},
author = {
Günther, Tobias
and
Gross, Markus
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13114}
}
                
@article{
10.1111:cgf.13115,
journal = {Computer Graphics Forum}, title = {{
Decoupled Opacity Optimization for Points, Lines and Surfaces}},
author = {
Günther, Tobias
and
Theisel, Holger
and
Gross, Markus
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13115}
}
                
@article{
10.1111:cgf.13116,
journal = {Computer Graphics Forum}, title = {{
Diffusion Diagrams: Voronoi Cells and Centroids from Diffusion}},
author = {
Herholz, Philipp
and
Haase, Felix
and
Alexa, Marc
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13116}
}
                
@article{
10.1111:cgf.13117,
journal = {Computer Graphics Forum}, title = {{
Texture Stationarization: Turning Photos into Tileable Textures}},
author = {
Moritz, Joep
and
James, Stuart
and
Haines, Tom S. F.
and
Ritschel, Tobias
and
Weyrich, Tim
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13117}
}
                
@article{
10.1111:cgf.13118,
journal = {Computer Graphics Forum}, title = {{
A Subjective Evaluation of Texture Synthesis Methods}},
author = {
Kolár, Martin
and
Debattista, Kurt
and
Chalmers, Alan
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13118}
}
                
@article{
10.1111:cgf.13119,
journal = {Computer Graphics Forum}, title = {{
Analysis and Controlled Synthesis of Inhomogeneous Textures}},
author = {
Zhou, Yang
and
Shi, Huajie
and
Lischinski, Dani
and
Gong, Minglun
and
Kopf, Johannes
and
Huang, Hui
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13119}
}
                
@article{
10.1111:cgf.13120,
journal = {Computer Graphics Forum}, title = {{
ShapeGenetics: Using Genetic Algorithms for Procedural Modeling}},
author = {
Haubenwallner, Karl
and
Seidel, Hans-Peter
and
Steinberger, Markus
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13120}
}
                
@article{
10.1111:cgf.13121,
journal = {Computer Graphics Forum}, title = {{
On Realism of Architectural Procedural Models}},
author = {
Beneš, Jan
and
Kelly, Tom
and
Děchtěrenko, Filip
and
Křivánek, Jaroslav
and
Müller, Pascal
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13121}
}
                
@article{
10.1111:cgf.13122,
journal = {Computer Graphics Forum}, title = {{
Geometric Stiffness for Real-time Constrained Multibody Dynamics}},
author = {
Andrews, Sheldon
and
Teichmann, Marek
and
Kry, Paul G.
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13122}
}
                
@article{
10.1111:cgf.13123,
journal = {Computer Graphics Forum}, title = {{
Fully Spectral Partial Shape Matching}},
author = {
Litany, Or
and
Rodolà, Emanuele
and
Bronstein, Alex M.
and
Bronstein, Michael M.
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13123}
}
                
@article{
10.1111:cgf.13124,
journal = {Computer Graphics Forum}, title = {{
Informative Descriptor Preservation via Commutativity for Shape Matching}},
author = {
Nogneng, Dorian
and
Ovsjanikov, Maks
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13124}
}
                
@article{
10.1111:cgf.13125,
journal = {Computer Graphics Forum}, title = {{
DeepGarment: 3D Garment Shape Estimation from a Single Image}},
author = {
Danerek, Radek
and
Dibra, Endri
and
Öztireli, A. Cengiz
and
Ziegler, Remo
and
Gross, Markus
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13125}
}
                
@article{
10.1111:cgf.13126,
journal = {Computer Graphics Forum}, title = {{
Simulation-Ready Hair Capture}},
author = {
Hu, Liwen
and
Bradley, Derek
and
Li, Hao
and
Beeler, Thabo
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13126}
}
                
@article{
10.1111:cgf.13127,
journal = {Computer Graphics Forum}, title = {{
Multi-View Stereo on Consistent Face Topology}},
author = {
Fyffe, Graham
and
Nagano, Koki
and
Huynh, Loc
and
Saito, Shunsuke
and
Busch, Jay
and
Jones, Andrew
and
Li, Hao
and
Debevec, Paul
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13127}
}
                
@article{
10.1111:cgf.13128,
journal = {Computer Graphics Forum}, title = {{
Makeup Lamps: Live Augmentation of Human Faces via Projection}},
author = {
Bermano, Amit Haim
and
Billeter, Markus
and
Iwai, Daisuke
and
Grundhöfer, Anselm
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13128}
}
                
@article{
10.1111:cgf.13129,
journal = {Computer Graphics Forum}, title = {{
Real-Time Multi-View Facial Capture with Synthetic Training}},
author = {
Klaudiny, Martin
and
McDonagh, Steven
and
Bradley, Derek
and
Beeler, Thabo
and
Mitchell, Kenny
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13129}
}
                
@article{
10.1111:cgf.13130,
journal = {Computer Graphics Forum}, title = {{
Gradient-based Steering for Vision-based Crowd Simulation Algorithms}},
author = {
Dutra, Teofilo B.
and
Marques, Ricardo
and
Cavalcante-Neto, Joaquim Bento
and
Vidal, Creto A.
and
Pettré, Julien
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13130}
}
                
@article{
10.1111:cgf.13131,
journal = {Computer Graphics Forum}, title = {{
Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs}},
author = {
Marcard, Timo von
and
Rosenhahn, Bodo
and
Black, Michael J.
and
Pons-Moll, Gerard
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13131}
}
                
@article{
10.1111:cgf.13132,
journal = {Computer Graphics Forum}, title = {{
Learning Detail Transfer based on Geometric Features}},
author = {
Berkiten, Sema
and
Halber, Maciej
and
Solomon, Justin
and
Ma, Chongyang
and
Li, Hao
and
Rusinkiewicz, Szymon
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13132}
}
                
@article{
10.1111:cgf.13133,
journal = {Computer Graphics Forum}, title = {{
Chamber Recognition in Cave Data Sets}},
author = {
Schertler, Nico
and
Buchroithner, Manfred
and
Gumhold, Stefan
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13133}
}
                
@article{
10.1111:cgf.13134,
journal = {Computer Graphics Forum}, title = {{
Performance-Based Biped Control using a Consumer Depth Camera}},
author = {
Lee, Yoonsang
and
Kwon, Taesoo
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13134}
}
                
@article{
10.1111:cgf.13135,
journal = {Computer Graphics Forum}, title = {{
Consistent Video Filtering for Camera Arrays}},
author = {
Bonneel, Nicolas
and
Tompkin, James
and
Sun, Deqing
and
Wang, Oliver
and
Sunkavalli, Kalyan
and
Paris, Sylvain
and
Pfister, Hanspeter
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13135}
}
                
@article{
10.1111:cgf.13136,
journal = {Computer Graphics Forum}, title = {{
Practical Capture and Reproduction of Phosphorescent Appearance}},
author = {
Nalbach, Oliver
and
Seidel, Hans-Peter
and
Ritschel, Tobias
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13136}
}
                
@article{
10.1111:cgf.13137,
journal = {Computer Graphics Forum}, title = {{
STD: Student's t-Distribution of Slopes for Microfacet Based BSDFs}},
author = {
Ribardière, Mickael
and
Bringier, Benjamin
and
Meneveaux, Daniel
and
Simonot, Lionel
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13137}
}
                
@article{
10.1111:cgf.13138,
journal = {Computer Graphics Forum}, title = {{
Hybrid Mesh-volume LoDs for All-scale Pre-filtering of Complex 3D Assets}},
author = {
Loubet, Guillaume
and
Neyret, Fabrice
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13138}
}
                
@article{
10.1111:cgf.13139,
journal = {Computer Graphics Forum}, title = {{
Spatial Adjacency Maps for Translucency Simulation under General Illumination}},
author = {
Maisch, Sebastian
and
Ropinski, Timo
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13139}
}
                
@article{
10.1111:cgf.13140,
journal = {Computer Graphics Forum}, title = {{
Zooming on all Actors: Automatic Focus+Context Split Screen Video Generation}},
author = {
Kumar, Moneish
and
Gandhi, Vineet
and
Ronfard, Rémi
and
Gleicher, Michael
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13140}
}
                
@article{
10.1111:cgf.13141,
journal = {Computer Graphics Forum}, title = {{
Flicker Observer Effect: Guiding Attention Through High Frequency Flicker in Images}},
author = {
Waldin, Nicholas
and
Waldner, Manuela
and
Viola, Ivan
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13141}
}
                
@article{
10.1111:cgf.13142,
journal = {Computer Graphics Forum}, title = {{
GPU Ray Tracing using Irregular Grids}},
author = {
Pérard-Gayot, Arsène
and
Kalojanov, Javor
and
Slusallek, Philipp
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13142}
}
                
@article{
10.1111:cgf.13143,
journal = {Computer Graphics Forum}, title = {{
Parallel BVH Construction using Progressive Hierarchical Refinement}},
author = {
Hendrich, Jakub
and
Meister, Daniel
and
Bittner, Jiří
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13143}
}
                
@article{
10.1111:cgf.13144,
journal = {Computer Graphics Forum}, title = {{
A GPU-Adapted Structure for Unstructured Grids}},
author = {
Zayer, Rhaleb
and
Steinberger, Markus
and
Seidel, Hans-Peter
}, year = {
2017},
publisher = {
The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {
10.1111/cgf.13144}
}


Recent Submissions

  • Item
    EUROGRAPHICS 2017: CGF 36-2 Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Barthe, Loïc; Benes, Bedrich
  • Item
    Computational Light Painting Using a Virtual Exposure
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Salamon, Nestor Z.; Lancelle, Marcel; Eisemann, Elmar; Loic Barthe and Bedrich Benes
    Light painting is an artform where a light source is moved during a long-exposure shot, creating trails resembling a stroke on a canvas. It is very difficult to perform because the light source needs to be moved at the intended speed and along a precise trajectory. Additionally, images can be corrupted by the person moving the light. We propose computational light painting, which avoids such artifacts and is easy to use. Taking a video of the moving light as input, a virtual exposure allows us to draw the intended light positions in a post-process. We support animation, as well as 3D light sculpting, with high-quality results.
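
    As a rough illustration of the virtual-exposure idea summarized above (accumulating the light's positions from video frames in a post-process), the following minimal NumPy sketch composites the brightest pixels of user-selected frames over a background frame. The luminance threshold and frame selection are simple stand-ins for the paper's actual light detection and editing tools.

      import numpy as np

      def virtual_exposure(frames, threshold=0.8, keep=None):
          """Accumulate bright pixels of selected frames into one long-exposure image.

          frames    : (T, H, W, 3) float array in [0, 1]
          threshold : luminance above which a pixel is treated as the moving light
          keep      : optional boolean mask of length T choosing which frames to "expose"
          """
          T = frames.shape[0]
          if keep is None:
              keep = np.ones(T, dtype=bool)
          exposure = frames[0].copy()                     # background from the first frame
          for t in np.flatnonzero(keep):
              frame = frames[t]
              luma = frame @ np.array([0.2126, 0.7152, 0.0722])
              mask = luma > threshold                     # pixels belonging to the light
              # max-compositing mimics how a long exposure saturates along the trail
              exposure[mask] = np.maximum(exposure[mask], frame[mask])
          return exposure

      # e.g. trail = virtual_exposure(video_frames, threshold=0.85, keep=selected_frames)
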
  • Item
    Unbiased Light Transport Estimators for Inhomogeneous Participating Media
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Szirmay-Kalos, László; Georgiev, Iliyan; Magdics, Milán; Molnár, Balázs; Légrády, Dávid; Loic Barthe and Bedrich Benes
    This paper presents a new stochastic particle model for efficient and unbiased Monte Carlo rendering of heterogeneous participating media. We randomly add and remove material particles to obtain a density with which free flight sampling and transmittance estimation are simple, while material particle properties are simultaneously modified to maintain the true expectation of the radiance. We show that meeting this requirement may need the introduction of light particles with negative energy and materials with negative extinction, and provide an intuitive interpretation for such phenomena. Unlike previous unbiased methods, the proposed approach does not require a-priori knowledge of the maximum medium density that is typically difficult to obtain for procedural models. However, the method can benefit from an approximate knowledge of the density, which can usually be acquired on-the-fly at little extra cost and can greatly reduce the variance of the proposed estimators. The introduced mechanism can be integrated in participating media renderers where transmittance estimation and free flight sampling are building blocks. We demonstrate its application in a multiple scattering particle tracer, in transmittance computation, and in the estimation of the inhomogeneous air-light integral.
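
    For context, the standard unbiased baseline referred to above is delta (Woodcock) tracking, which requires a majorant of the extinction coefficient to be known in advance; the paper's estimators remove exactly this requirement. A minimal sketch of that baseline, with sigma_t, sigma_max and the ray setup as illustrative assumptions:

      import math
      import random

      def delta_tracking_free_flight(sigma_t, sigma_max, origin, direction, t_max):
          """Sample a free-flight distance in a heterogeneous medium by delta tracking.

          sigma_t   : callable giving the extinction coefficient at a 3D point
          sigma_max : majorant with sigma_t(x) <= sigma_max everywhere (assumed known)
          Returns the sampled collision distance, or None if the ray leaves the medium.
          """
          t = 0.0
          while True:
              t -= math.log(1.0 - random.random()) / sigma_max   # tentative flight
              if t >= t_max:
                  return None                                    # escaped the medium
              x = tuple(origin[i] + t * direction[i] for i in range(3))
              # real collision with probability sigma_t(x) / sigma_max,
              # otherwise a "null" collision and the tracking continues
              if random.random() < sigma_t(x) / sigma_max:
                  return t
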
  • Item
    Multiple Vertex Next Event Estimation for Lighting in dense, forward-scattering Media
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Weber, Pascal; Hanika, Johannes; Dachsbacher, Carsten; Loic Barthe and Bedrich Benes
    We present a new technique called Multiple Vertex Next Event Estimation, which outperforms current direct lighting techniques in forward scattering, optically dense media with the Henyey-Greenstein phase function. Instead of a one-segment connection from a vertex within the medium to the light source, an entire sub-path of arbitrary length can be created, and we show experimentally that 4-10 segments work best in practice. This is done by perturbing a seed path within the Monte Carlo context. Our technique was integrated in a Monte Carlo renderer, combining random walk path tracing with multiple vertex next event estimation via multiple importance sampling for an unbiased result. We evaluate this new technique against standard next event estimation and show that it significantly reduces noise and increases performance of multiple scattering renderings in highly anisotropic, optically dense media. Additionally, we discuss multiple light sources and performance implications of memory-heavy heterogeneous media.
  • Item
    Gradient-Domain Photon Density Estimation
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Hua, Binh-Son; Gruson, Adrien; Nowrouzezahrai, Derek; Hachisuka, Toshiya; Loic Barthe and Bedrich Benes
    The most common solutions to the light transport problem rely on either Monte Carlo (MC) integration or density estimation methods, such as uni- & bi-directional path tracing or photon mapping. Recent gradient-domain extensions of MC approaches show great promise; here, gradients of the final image are estimated numerically (instead of the image intensities themselves) with coherent paths generated from a deterministic shift mapping. We extend gradient-domain approaches to light transport simulation based on density estimation. As with previous gradient-domain methods, we detail important considerations that arise when moving from a primal- to gradient-domain estimator. We provide an efficient and straightforward solution to these problems. Our solution supports stochastic progressive density estimation, so it is robust to complex transport effects. We show that gradient-domain photon density estimation converges faster than its primal-domain counterpart, as well as being generally more robust than gradient-domain uni- & bi-directional path tracing for scenes dominated by complex transport.
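
    The reconstruction step common to gradient-domain methods (combining a noisy primal image with estimated finite-difference gradients) can be illustrated by a small screened-Poisson least-squares solve. This is a generic, dense sketch for tiny images, not the paper's photon density estimator:

      import numpy as np

      def reconstruct_from_gradients(primal, gx, gy, alpha=0.2):
          """Find the image I minimizing
             alpha * ||I - primal||^2 + ||dx(I) - gx||^2 + ||dy(I) - gy||^2.

          primal : (H, W) noisy primal estimate
          gx, gy : estimated forward-difference gradients (same shape)
          Dense least squares: only meant for tiny illustration-sized images.
          """
          H, W = primal.shape
          n = H * W
          idx = np.arange(n).reshape(H, W)
          rows, cols, vals, rhs = [], [], [], []

          def add_eq(coeffs, b):
              r = len(rhs)
              for c, v in coeffs:
                  rows.append(r); cols.append(c); vals.append(v)
              rhs.append(b)

          w = np.sqrt(alpha)
          for y in range(H):
              for x in range(W):
                  add_eq([(idx[y, x], w)], w * primal[y, x])          # data term
                  if x + 1 < W:
                      add_eq([(idx[y, x + 1], 1.0), (idx[y, x], -1.0)], gx[y, x])
                  if y + 1 < H:
                      add_eq([(idx[y + 1, x], 1.0), (idx[y, x], -1.0)], gy[y, x])

          A = np.zeros((len(rhs), n))
          A[rows, cols] = vals
          sol, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
          return sol.reshape(H, W)
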
  • Item
    Design Transformations for Rule-based Procedural Modeling
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Lienhard, Stefan; Lau, Cheryl; Müller, Pascal; Wonka, Peter; Pauly, Mark; Loic Barthe and Bedrich Benes
    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.
  • Item
    Interactive Modeling and Authoring of Climbing Plants
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Hädrich, Torsten; Benes, Bedrich; Deussen, Oliver; Pirk, Sören; Loic Barthe and Bedrich Benes
    We present a novel system for the interactive modeling of developmental climbing plants with an emphasis on efficient control and plausible physics response. A plant is represented by a set of connected anisotropic particles that respond to the surrounding environment and to their inner state. Each particle stores biological and physical attributes that drive growth and plant adaptation to the environment, such as light sensitivity, wind interaction, and physical obstacles. This representation allows for the efficient modeling of external effects that can be induced at any time without prior analysis of the plant structure. In our framework we exploit this representation to provide powerful editing capabilities that allow editing a plant with respect to its structure and its environment while maintaining a biologically plausible appearance. Moreover, we couple plants with Lagrangian fluid dynamics and model advanced effects, such as the breaking and bending of branches. The user can thus interactively drag and prune branches or seed new plants in dynamically changing environments. Our system runs in real-time and supports up to 20 plant instances with 25k branches in parallel. The effectiveness of our approach is demonstrated through a number of interactive experiments, including modeling and animation of different species of climbing plants on complex support structures.
  • Item
    EcoBrush: Interactive Control of Visually Consistent Large-Scale Ecosystems
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Gain, James; Long, Harry; Cordonnier, Guillaume; Cani, Marie-Paule; Loic Barthe and Bedrich Benes
    One challenge in portraying large-scale natural scenes in virtual environments is specifying the attributes of plants, such as species, size and placement, in a way that respects the features of natural ecosystems, while remaining computationally tractable and allowing user design. To address this, we combine ecosystem simulation with a distribution analysis of the resulting plant attributes to create biome-specific databases, indexed by terrain conditions, such as temperature, rainfall, sunlight and slope. For a specific terrain, interpolated entries are drawn from this database and used to interactively synthesize a full ecosystem, while retaining the fidelity of the original simulations. A painting interface supplies users with semantic brushes for locally adjusting ecosystem age, plant density and variability, as well as optionally picking from a palette of precomputed distributions. Since these brushes are keyed to the underlying terrain properties, a balance between user control and real-world consistency is maintained. Our system can be used to interactively design ecosystems up to 5×5 km² in extent, or to automatically generate even larger ecosystems in a fraction of the time of a full simulation, while demonstrating known properties from plant ecology such as succession, self-thinning, and underbrush, across a variety of biomes.
  • Item
    Enriching Facial Blendshape Rigs with Physical Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Kozlov, Yeara; Bradley, Derek; Bächer, Moritz; Thomaszewski, Bernhard; Beeler, Thabo; Gross, Markus; Loic Barthe and Bedrich Benes
    Oftentimes facial animation is created separately from overall body motion. Since convincing facial animation is challenging enough in itself, artists tend to create and edit the face motion in isolation. Or if the face animation is derived from motion capture, this is typically performed in a mo-cap booth while sitting relatively still. In either case, recombining the isolated face animation with body and head motion is non-trivial and often produces an uncanny result if the body dynamics are not properly reflected on the face (e.g. the bouncing of facial tissue when running). We tackle this problem by introducing a simple and intuitive system that allows artists to add physics to facial blendshape animation. Unlike previous methods that try to add physics to face rigs, our method preserves the original facial animation as closely as possible. To this end, we present a novel simulation framework that uses the original animation as per-frame rest-poses without adding spurious forces. As a result, in the absence of any external forces or rigid head motion, the facial performance will exactly match the artist-created blendshape animation. In addition, we propose the concept of blendmaterials to give artists an intuitive means to account for changing material properties due to muscle activation. This system automatically combines facial animation and head motion such that they are consistent, while preserving the original animation as closely as possible. The system is easy to use and readily integrates with existing animation pipelines.
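
    The central mechanism above (using the artist's animation as a per-frame rest pose, so that without external forces the input is reproduced exactly) can be illustrated with a one-dimensional toy that integrates only the offset from the animated position. All constants below are invented for illustration; the paper's volumetric simulation and blendmaterials go well beyond this:

      import numpy as np

      def add_secondary_dynamics(animation, dt=1.0 / 30.0, mass=1.0,
                                 stiffness=400.0, damping=8.0, external_force=None):
          """1D toy of simulating on top of an animation used as per-frame rest pose.

          Only the offset u from the animated position is integrated; a spring pulls
          u back to zero, so with no external force the output equals the input exactly.
          animation      : (T,) artist-created positions (the per-frame rest poses)
          external_force : optional callable frame_index -> force (e.g. head-motion inertia)
          """
          u, v = 0.0, 0.0
          out = []
          for t, rest in enumerate(animation):
              f = -stiffness * u - damping * v            # pull the offset back to zero
              if external_force is not None:
                  f += external_force(t)
              v += dt * f / mass                          # semi-implicit Euler on the offset
              u += dt * v
              out.append(rest + u)                        # final pose = animation + offset
          return np.array(out)
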
  • Item
    Sparse Rig Parameter Optimization for Character Animation
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Song, Jaewon; Ribera, Roger Blanco i; Cho, Kyungmin; You, Mi; Lewis, J. P.; Choi, Byungkuk; Noh, Junyong; Loic Barthe and Bedrich Benes
    We propose a novel motion retargeting method that efficiently estimates artist-friendly rig space parameters. Inspired by the workflow typically observed in keyframe animation, our approach transfers a source motion into a production friendly character rig by optimizing the rig space parameters while balancing the considerations of fidelity to the source motion and the ease of subsequent editing. We propose the use of an intermediate object to transfer both the skeletal motion and the mesh deformation. The target rig-space parameters are then optimized to minimize the error between the motion of an intermediate object and the target character. The optimization uses a set of artist defined weights to modulate the effect of the different rig space parameters over time. Sparsity inducing regularizers and keyframe extraction streamline any additional editing processes. The results obtained with different types of character rigs demonstrate the versatility of our method and its effectiveness in simplifying any necessary manual editing within the production pipeline.
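
    One ingredient mentioned above, the sparsity-inducing regularization of rig parameters, can be sketched generically as an L1-regularized least-squares fit solved with proximal gradient descent (ISTA). The linearized rig Jacobian and the transferred target used here are assumptions for illustration; the paper's rig-space objective and artist-defined weighting are considerably richer:

      import numpy as np

      def sparse_rig_fit(J, target, lam=0.1, iters=500):
          """Minimize ||J p - target||^2 + lam * ||p||_1 with ISTA.

          J      : (m, n) linearized rig Jacobian mapping parameters to vertex motion
          target : (m,) desired vertex displacements transferred from the source motion
          Returns a parameter vector p in which many entries are exactly zero.
          """
          p = np.zeros(J.shape[1])
          step = 1.0 / (np.linalg.norm(J, 2) ** 2 + 1e-12)    # 1 / Lipschitz constant
          for _ in range(iters):
              grad = J.T @ (J @ p - target)
              q = p - step * grad
              p = np.sign(q) * np.maximum(np.abs(q) - step * lam, 0.0)   # soft threshold
          return p
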
  • Item
    Interactive Paper Tearing
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Schreck, Camille; Rohmer, Damien; Hahmann, Stefanie; Loic Barthe and Bedrich Benes
    We propose an efficient method to model paper tearing in the context of interactive modeling. The method uses geometrical information to automatically detect potential starting points of tears. We further introduce a new hybrid geometrical and physical-based method to compute the trajectory of tears while procedurally synthesizing high resolution details of the tearing path using a texture based approach. The results obtained are compared with real paper and with previous studies on the expected geometric paths of paper that tears.
  • Item
    General Point Sampling with Adaptive Density and Correlations
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Roveri, Riccardo; Öztireli, A. Cengiz; Gross, Markus; Loic Barthe and Bedrich Benes
    Analyzing and generating sampling patterns are fundamental problems for many applications in computer graphics. Ideally, point patterns should conform to the problem at hand with spatially adaptive density and correlations. Although there exist excellent algorithms that can generate point distributions with spatially adaptive density or anisotropy, the pair-wise correlation model, blue noise being the most common, is assumed to be constant throughout the space. Analogously, by relying on possibly modulated pair-wise difference vectors, the analysis methods are designed to study only such spatially constant correlations. In this paper, we present the first techniques to analyze and synthesize point patterns with adaptive density and correlations. This provides a comprehensive framework for understanding and utilizing general point sampling. Starting from fundamental measures from stochastic point processes, we propose an analysis framework for general distributions, and a novel synthesis algorithm that can generate point distributions with spatio-temporally adaptive density and correlations based on a locally stationary point process model. Our techniques also extend to general metric spaces. We illustrate the utility of the new techniques on the analysis and synthesis of real-world distributions, image reconstruction, spatio-temporal stippling, and geometry sampling.
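
    For contrast with the abstract above, classical adaptive-density sampling with a fixed, blue-noise-like correlation can be sketched as variable-radius dart throwing. The paper's point is precisely what this baseline cannot do: letting the correlation model itself vary over space (and time):

      import numpy as np

      def adaptive_dart_throwing(radius_fn, n_attempts=20000, seed=0):
          """Poisson-disk-like sampling in the unit square with spatially varying radius.

          radius_fn : callable (x, y) -> local minimum spacing (controls density)
          Brute-force rejection; adequate only for small illustrative point sets.
          """
          rng = np.random.default_rng(seed)
          points = []
          for _ in range(n_attempts):
              x, y = rng.random(2)
              r = radius_fn(x, y)
              ok = all((px - x) ** 2 + (py - y) ** 2 >= (0.5 * (r + radius_fn(px, py))) ** 2
                       for px, py in points)
              if ok:
                  points.append((x, y))
          return np.array(points)

      # e.g. pts = adaptive_dart_throwing(lambda x, y: 0.02 + 0.08 * x)
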
  • Item
    Character-Object Interaction Retrieval Using the Interaction Bisector Surface
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Zhao, Xi; Choi, Myung Geol; Komura, Taku; Loic Barthe and Bedrich Benes
    In this paper, we propose a novel approach for the classification and retrieval of interactions between human characters and objects. We propose to use the interaction bisector surface (IBS) between the body and the object as a feature of the interaction. We define a multi-resolution representation of the body structure, and compute a correspondence matrix hierarchy that describes which parts of the character's skeleton take part in the composition of the IBS and how much they contribute to the interaction. Key-frames of the interactions are extracted based on the evolution of the IBS and used to align the query interaction with the interaction in the database. Through the experimental results, we show that our approach outperforms existing techniques in motion classification and retrieval, which implies that the contextual information plays a significant role for scene and interaction description. Our method also shows better performance than other techniques that use features based on the spatial relations between the body parts, or the body parts and the object. Our method can be applied for character motion synthesis and robot motion planning.
  • Item
    kDet: Parallel Constant Time Collision Detection for Polygonal Objects
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Weller, René; Debowski, Nicole; Zachmann, Gabriel; Loic Barthe and Bedrich Benes
    We define a novel geometric predicate and a class of objects that enables us to prove a linear bound on the number of intersecting polygon pairs for colliding 3D objects in that class. Our predicate is relevant both in theory and in practice: it is easy to check and it needs to consider only the geometric properties of the individual objects - it does not depend on the configuration of a given pair of objects. In addition, it characterizes a practically relevant class of objects: we checked our predicate on a large database of real-world 3D objects and the results show that it holds for all but the most pathological ones. Our proof is constructive in that it is the basis for a novel collision detection algorithm that realizes this linear complexity also in practice. Additionally, we present a parallelization of this algorithm with a worst-case running time that is independent of the number of polygons. Our algorithm is very well suited not only for rigid but also for deformable and even topology-changing objects, because it does not require any complex data structures or pre-processing. We have implemented our algorithm on the GPU and the results show that it is able to find in real-time all colliding polygons for pairs of deformable objects consisting of more than 200k triangles, including self-collisions.
  • Item
    Flow-Induced Inertial Steady Vector Field Topology
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Günther, Tobias; Gross, Markus; Loic Barthe and Bedrich Benes
    Traditionally, vector field visualization is concerned with 2D and 3D flows. Yet, many concepts can be extended to general dynamical systems, including the higher-dimensional problem of modeling the motion of finite-sized objects in fluids. In the steady case, the trajectories of these so-called inertial particles appear as tangent curves of a 4D or 6D vector field. These higher-dimensional flows are difficult to map to lower-dimensional spaces, which makes their visualization a challenging problem. We focus on vector field topology, which allows scientists to study asymptotic particle behavior. As recent work on the 2D case has shown, both extraction and classification of isolated critical points depend on the underlying particle model. In this paper, we aim for a model-independent classification technique, which we apply to two different particle models in not only 2D, but also 3D cases. We show that the classification can be done by performing an eigenanalysis of the spatial derivatives' velocity subspace of the higher-dimensional 4D or 6D flow. We construct glyphs that depict not only the types of critical points, but also encode the directional information given by the eigenvectors. We show that the eigenvalues and eigenvectors of the inertial phase space have sufficient symmetries and structure so that they can be depicted in 2D or 3D, instead of 4D or 6D.
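
    The classification mentioned above reduces to an eigenanalysis of a Jacobian at a critical point. A minimal 2D version of this standard first-order classification is sketched below; the paper performs the analogous analysis on the velocity subspace of the 4D/6D inertial phase space:

      import numpy as np

      def classify_critical_point_2d(jacobian, eps=1e-9):
          """Classify a first-order critical point of a 2D steady vector field."""
          ev = np.linalg.eigvals(np.asarray(jacobian, dtype=float))
          re, im = ev.real, ev.imag
          if np.all(np.abs(im) > eps):                    # complex pair: rotating behavior
              if np.all(np.abs(re) < eps):
                  return "center"
              return "spiral source" if re[0] > 0 else "spiral sink"
          if re[0] * re[1] < 0:
              return "saddle"
          return "source" if re[0] > 0 else "sink"

      # e.g. classify_critical_point_2d([[0.0, -1.0], [1.0, 0.0]]) returns "center"
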
  • Item
    Decoupled Opacity Optimization for Points, Lines and Surfaces
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Günther, Tobias; Theisel, Holger; Gross, Markus; Loic Barthe and Bedrich Benes
    Displaying geometry in flow visualization is often accompanied by occlusion problems, making it difficult to perceive information that is relevant in the respective application. In a recent technique, named opacity optimization, the balance of occlusion avoidance and the selection of meaningful geometry was recognized to be a view-dependent, global optimization problem. The method solves a bounded-variable least-squares problem, which minimizes energy terms for the reduction of occlusion, background clutter, adding smoothness and regularization. The original technique operates on an object-space discretization and was shown for line and surface geometry. Recently, it has been extended to volumes, where it was solved locally per ray by dropping the smoothness energy term and replacing it by pre-filtering the importance measure. In this paper, we pick up the idea of splitting the opacity optimization problem into two smaller problems. The first problem is a minimization with analytic solution, and the second problem is a smoothing of the obtained minimizer in object-space. Thereby, the minimization problem can be solved locally per pixel, making it possible to combine all geometry types (points, lines and surfaces) consistently in a single optimization framework. We call this decoupled opacity optimization and apply it to a number of steady 3D vector fields.
  • Item
    Diffusion Diagrams: Voronoi Cells and Centroids from Diffusion
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Herholz, Philipp; Haase, Felix; Alexa, Marc; Loic Barthe and Bedrich Benes
    We define Voronoi cells and centroids based on heat diffusion. These heat cells and heat centroids coincide with the common definitions in Euclidean spaces. On curved surfaces they compare favorably with definitions based on geodesics: they are smooth and can be computed in a stable way with a single linear solve. We analyze the numerics of this approach and can show that diffusion diagrams converge quadratically against the smooth case under mesh refinement, which is better than other common discretization of distance measures in curved spaces. By factorizing the system matrix in a preprocess, computing Voronoi diagrams or centroids amounts to just back-substitution. We show how to localize this operation so that the complexity is linear in the size of the cells and not the underlying mesh. We provide several example applications that show how to benefit from this approach.
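
    The core idea (Voronoi-like cells obtained from a single linear diffusion solve) can be sketched on a graph: take one implicit heat step from indicator functions of the seeds and label every vertex by the seed whose heat dominates. The dense graph-Laplacian solve below is only for illustration; the paper works with cotangent Laplacians on meshes and a prefactorized system:

      import numpy as np

      def heat_voronoi(adjacency, seeds, t=0.1):
          """Assign each graph vertex to its nearest seed in the 'heat' sense.

          adjacency : (n, n) symmetric 0/1 adjacency matrix of a mesh or graph
          seeds     : list of seed vertex indices
          t         : diffusion time of one implicit Euler heat step
          """
          A = np.asarray(adjacency, dtype=float)
          L = np.diag(A.sum(axis=1)) - A                  # combinatorial graph Laplacian
          n = L.shape[0]
          U0 = np.zeros((n, len(seeds)))
          U0[seeds, np.arange(len(seeds))] = 1.0          # one indicator column per seed
          U = np.linalg.solve(np.eye(n) + t * L, U0)      # (I + t L) U = U0
          return U.argmax(axis=1)                         # cell label = hottest seed
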
  • Item
    Texture Stationarization: Turning Photos into Tileable Textures
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Moritz, Joep; James, Stuart; Haines, Tom S. F.; Ritschel, Tobias; Weyrich, Tim; Loic Barthe and Bedrich Benes
    Texture synthesis has grown into a mature field in computer graphics, allowing the synthesis of naturalistic textures and images from photographic exemplars. Surprisingly little work, however, has been dedicated to synthesizing tileable textures, that is, textures that when laid out in a regular grid of tiles form a homogeneous appearance suitable for use in memory-sensitive real-time graphics applications. One of the key challenges in doing so is that most natural input exemplars exhibit uneven spatial variations that, when tiled, show as repetitive patterns. We propose an approach to synthesize tileable textures while enforcing stationarity properties that effectively mask repetitions while maintaining the unique characteristics of the exemplar. We explore a number of alternative measures for texture stationarity and show how each measure can be integrated into a standard texture synthesis method (PatchMatch) to enforce stationarity at user-controlled scales. We demonstrate the efficacy of our approach using a database of 118 exemplar images, both from publicly available sources as well as new ones captured under uncontrolled conditions, and we quantitatively analyze alternative stationarity measures for their robustness across many test runs using different random seeds. In conclusion, we suggest a novel synthesis approach that employs local histogram matching to reliably turn input photographs of natural surfaces into tiles well suited for artifact-free tiling.
  • Item
    A Subjective Evaluation of Texture Synthesis Methods
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Kolár, Martin; Debattista, Kurt; Chalmers, Alan; Loic Barthe and Bedrich Benes
    This paper presents the results of a user study which quantifies the relative and absolute quality of example-based texture synthesis algorithms. In order to allow such evaluation, a list of texture properties is compiled, and a minimal representative set of textures is selected to cover these. Six texture synthesis methods are compared against each other and a reference on a selection of twelve textures by non-expert participants (N = 67). Results demonstrate certain algorithms successfully solve the problem of texture synthesis for certain textures, but there are no satisfactory results for other types of texture properties. The presented textures and results make it possible for future work to be subjectively compared, thus facilitating the development of future texture synthesis methods.
  • Item
    Analysis and Controlled Synthesis of Inhomogeneous Textures
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Zhou, Yang; Shi, Huajie; Lischinski, Dani; Gong, Minglun; Kopf, Johannes; Huang, Hui; Loic Barthe and Bedrich Benes
    Many interesting real-world textures are inhomogeneous and/or anisotropic. An inhomogeneous texture is one where various visual properties exhibit significant changes across the texture's spatial domain. Examples include perceptible changes in surface color, lighting, local texture pattern and/or its apparent scale, and weathering effects, which may vary abruptly, or in a continuous fashion. An anisotropic texture is one where the local patterns exhibit a preferred orientation, which also may vary across the spatial domain. While many example-based texture synthesis methods can be highly effective when synthesizing uniform (stationary) isotropic textures, synthesizing highly non-uniform textures, or ones with spatially varying orientation, is a considerably more challenging task, which so far has remained underexplored. In this paper, we propose a new method for automatic analysis and controlled synthesis of such textures. Given an input texture exemplar, our method generates a source guidance map comprising: (i) a scalar progression channel that attempts to capture the low frequency spatial changes in color, lighting, and local pattern combined, and (ii) a direction field that captures the local dominant orientation of the texture. Having augmented the texture exemplar with this guidance map, users can exercise better control over the synthesized result by providing easily specified target guidance maps, which are used to constrain the synthesis process.
  • Item
    ShapeGenetics: Using Genetic Algorithms for Procedural Modeling
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Haubenwallner, Karl; Seidel, Hans-Peter; Steinberger, Markus; Loic Barthe and Bedrich Benes
    In this paper, we show that genetic algorithms (GA) can be used to control the output of procedural modeling algorithms. We propose an efficient way to encode the choices that have to be made during a procedural generation as a hierarchical genome representation. In combination with mutation and reproduction operations specifically designed for controlled procedural modeling, our GA can evolve a population of individual models close to any high-level goal. Possible scenarios include a volume that should be filled by a procedurally grown tree or a painted silhouette that should be followed by the skyline of a procedurally generated city. These goals are easy to set up for an artist compared to the tens of thousands of variables that describe the generated model and are chosen by the GA. Previous approaches for controlled procedural modeling either use Reversible Jump Markov Chain Monte Carlo (RJMCMC) or Stochastically-Ordered Sequential Monte Carlo (SOSMC) as workhorse for the optimization. While RJMCMC converges slowly, requiring multiple hours for the optimization of larger models, it produces high quality models. SOSMC shows faster convergence under tight time constraints for many models, but can get stuck due to choices made in the early stages of optimization. Our GA shows faster convergence than SOSMC and generates better models than RJMCMC in the long run.
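
    A minimal genetic algorithm of the kind described above (genome, mutation, crossover, selection against a high-level goal) is sketched below, with a toy bit-string genome and fitness standing in for a procedural grammar derivation and its scoring:

      import random

      def genetic_optimize(fitness, genome_len=32, pop_size=40, generations=100,
                           mutation_rate=0.05, seed=0):
          """Evolve bit-string genomes toward a user-defined fitness (higher is better).

          For procedural modeling, `fitness` would derive the grammar using the genome's
          choices and score the result, e.g. how well a grown tree fills a target volume.
          """
          rng = random.Random(seed)
          pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
          for _ in range(generations):
              parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
              children = []
              while len(parents) + len(children) < pop_size:
                  a, b = rng.sample(parents, 2)
                  cut = rng.randrange(1, genome_len)              # one-point crossover
                  child = a[:cut] + b[cut:]
                  child = [g ^ 1 if rng.random() < mutation_rate else g for g in child]
                  children.append(child)
              pop = parents + children
          return max(pop, key=fitness)

      # toy goal, maximize the number of ones: best = genetic_optimize(lambda g: sum(g))
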
  • Item
    On Realism of Architectural Procedural Models
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Beneš, Jan; Kelly, Tom; Děchtěrenko, Filip; Křivánek, Jaroslav; Müller, Pascal; Loic Barthe and Bedrich Benes
    The goal of procedural modeling is to generate realistic content. The realism of this content is typically assessed by qualitatively evaluating a small number of results, or, less frequently, by conducting a user study. However, there is a lack of systematic treatment and understanding of what is considered realistic, both in procedural modeling and for images in general. We conduct a user study that primarily investigates the realism of procedurally generated buildings. Specifically, we investigate the role of fine and coarse details, and investigate which other factors contribute to the perception of realism. We find that realism is carried on different scales, and identify other factors that contribute to the realism of procedural and non-procedural buildings.
  • Item
    Geometric Stiffness for Real-time Constrained Multibody Dynamics
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Andrews, Sheldon; Teichmann, Marek; Kry, Paul G.; Loic Barthe and Bedrich Benes
    This paper focuses on the stable and efficient simulation of articulated rigid body systems for real-time applications. Specifically, we focus on the use of geometric stiffness, which can dramatically increase simulation stability. We examine several numerical problems with the inclusion of geometric stiffness in the equations of motion, as proposed by previous work, and address these issues by introducing a novel method for efficiently building the linear system. This offers improved tractability and numerical efficiency. Furthermore, geometric stiffness tends to significantly dissipate kinetic energy. We propose an adaptive damping scheme, inspired by the geometric stiffness, that uses a stability criterion based on the numerical integrator to determine the amount of non-constitutive damping required to stabilize the simulation. With this approach, not only is the dynamical behavior better preserved, but the simulation remains stable for mass ratios of 1,000,000-to-1 at time steps up to 0.1 s. We present a number of challenging scenarios to demonstrate that our method improves efficiency, and that it increases stability by orders of magnitude compared to previous work.
  • Item
    Fully Spectral Partial Shape Matching
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Litany, Or; Rodolà, Emanuele; Bronstein, Alex M.; Bronstein, Michael M.; Loic Barthe and Bedrich Benes
    We propose an efficient procedure for calculating partial dense intrinsic correspondence between deformable shapes performed entirely in the spectral domain. Our technique relies on the recently introduced partial functional maps formalism and on the joint approximate diagonalization (JAD) of the Laplace-Beltrami operators previously introduced for matching non-isometric shapes. We show that a variant of the JAD problem with an appropriately modified coupling term (surprisingly) allows constructing quasi-harmonic bases localized on the latent corresponding parts. This circumvents the need to explicitly compute the unknown parts by means of the cumbersome alternating minimization used in the previous approaches, and allows performing all the calculations in the spectral domain with constant complexity independent of the number of shape vertices. We provide an extensive evaluation of the proposed technique on standard non-rigid correspondence benchmarks and show state-of-the-art performance in various settings, including partiality and the presence of topological noise.
  • Item
    Informative Descriptor Preservation via Commutativity for Shape Matching
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Nogneng, Dorian; Ovsjanikov, Maks; Loic Barthe and Bedrich Benes
    We consider the problem of non-rigid shape matching, and specifically the functional maps framework that was recently proposed to find correspondences between shapes. A key step in this framework is to formulate descriptor preservation constraints that help to encode the information (e.g., geometric or appearance) that must be preserved by the unknown map. In this paper, we show that considering descriptors as linear operators acting on functions through multiplication, rather than as simple scalar-valued signals, allows extracting significantly more information from a given descriptor and ultimately results in a more accurate functional map estimation. Namely, we show that descriptor preservation constraints can be formulated via commutativity with respect to the unknown map, which can be conveniently encoded by considering relations between matrices in the discrete setting. As a result, when the vector space spanned by the descriptors has a dimension smaller than that of the reduced basis, our optimization may still provide a fully-constrained system leading to accurate point-to-point correspondences, while previous methods might not. We demonstrate on a wide variety of experiments that our approach leads to significant improvement for functional map estimation by helping to reduce the number of necessary descriptor constraints by an order of magnitude, even given an increase in the size of the reduced basis.
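
    The commutativity constraints described above can be written directly as a linear least-squares problem in the functional map matrix C, e.g. minimizing ||C A - B||_F^2 + mu * sum_i ||C D_i - E_i C||_F^2. The dense sketch below assumes the reduced bases, the descriptor coefficients A, B and the multiplication operators D_i, E_i have already been computed; it is a generic formulation in the spirit of the paper, not the authors' implementation:

      import numpy as np

      def functional_map_with_commutativity(A, B, ops_src, ops_tgt, mu=1.0):
          """Solve for a k x k functional map C minimizing
             ||C A - B||_F^2 + mu * sum_i ||C D_i - E_i C||_F^2.

          A, B             : (k, q) descriptor coefficients in the source / target bases
          ops_src, ops_tgt : lists of (k, k) descriptor multiplication operators D_i, E_i
          """
          k = A.shape[0]
          I = np.eye(k)
          blocks = [np.kron(A.T, I)]                      # vec(C A) = (A^T kron I) vec(C)
          rhs = [B.flatten(order="F")]
          for D, E in zip(ops_src, ops_tgt):
              # vec(C D - E C) = (D^T kron I - I kron E) vec(C)
              blocks.append(np.sqrt(mu) * (np.kron(D.T, I) - np.kron(I, E)))
              rhs.append(np.zeros(k * k))
          M = np.vstack(blocks)
          b = np.concatenate(rhs)
          c, *_ = np.linalg.lstsq(M, b, rcond=None)
          return c.reshape(k, k, order="F")
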
  • Item
    DeepGarment: 3D Garment Shape Estimation from a Single Image
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Danerek, Radek; Dibra, Endri; Öztireli, A. Cengiz; Ziegler, Remo; Gross, Markus; Loic Barthe and Bedrich Benes
    3D garment capture is an important component for various applications such as free-viewpoint video, virtual avatars, online shopping, and virtual cloth fitting. Due to the complexity of the deformations, capturing 3D garment shapes requires controlled and specialized setups. A viable alternative is image-based garment capture. Capturing 3D garment shapes from a single image, however, is a challenging problem and the current solutions come with assumptions on the lighting, camera calibration, complexity of human or mannequin poses considered, and more importantly a stable physical state for the garment and the underlying human body. In addition, most of the works require manual interaction and exhibit high run-times. We propose a new technique that overcomes these limitations, making garment shape estimation from an image a practical approach for dynamic garment capture. Starting from synthetic garment shape data generated through physically based simulations from various human bodies in complex poses obtained through Mocap sequences, and rendered under varying camera positions and lighting conditions, our novel method learns a mapping from rendered garment images to the underlying 3D garment model. This is achieved by training Convolutional Neural Networks (CNNs) to estimate 3D vertex displacements from a template mesh with a specialized loss function. We illustrate that this technique is able to recover the global shape of dynamic 3D garments from a single image under varying factors such as challenging human poses, self-occlusions, various camera poses and lighting conditions, at interactive rates. Improvement is shown when more than one view is integrated. Additionally, we show applications of our method to videos.
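    To make the learning setup concrete, the sketch below is a deliberately tiny PyTorch-style stand-in for the image-to-displacement regression; the architecture, template resolution, and loss here are illustrative assumptions, not the paper's network:
      import torch
      import torch.nn as nn

      class GarmentDisplacementNet(nn.Module):
          # Maps a rendered garment image to per-vertex 3D displacements from a
          # fixed template mesh. Illustrative sketch only.
          def __init__(self, num_vertices=5000):        # hypothetical template size
              super().__init__()
              self.num_vertices = num_vertices
              self.features = nn.Sequential(
                  nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1),
              )
              self.regressor = nn.Linear(128, 3 * num_vertices)

          def forward(self, image):
              x = self.features(image).flatten(1)
              return self.regressor(x).view(-1, self.num_vertices, 3)

      # One training step on synthetic data: L2 loss against simulated displacements.
      net = GarmentDisplacementNet()
      images = torch.randn(4, 3, 128, 128)              # rendered garment images
      target = torch.randn(4, net.num_vertices, 3)      # ground-truth displacements
      loss = nn.functional.mse_loss(net(images), target)
      loss.backward()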
  • Item
    Simulation-Ready Hair Capture
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Hu, Liwen; Bradley, Derek; Li, Hao; Beeler, Thabo; Loic Barthe and Bedrich Benes
    Physical simulation has long been the approach of choice for generating realistic hair animations in CG. A constant drawback of simulation, however, is the necessity to manually set the physical parameters of the simulation model in order to get the desired dynamic behavior. To alleviate this, researchers have begun to explore methods for reconstructing hair from the real world and even to estimate the corresponding simulation parameters through the process of inversion. So far, however, these methods have had limited applicability, because dynamic hair capture can only be played back without the ability to edit, and solving for simulation parameters can only be accomplished for static hairstyles, ignoring the dynamic behavior. We present the first method for capturing dynamic hair and automatically determining the physical properties for simulating the observed hairstyle in motion. Since our dynamic inversion is agnostic to the simulation model, the proposed method applies to virtually any hair simulation technique, which we demonstrate using two state-of-the-art hair simulation models. The output of our method is a fully simulation-ready hairstyle, consisting of both the static hair geometry as well as its physical properties. The hairstyle can be easily edited by adding additional external forces, changing the head motion, or re-simulating in completely different environments, all while remaining faithful to the captured hairstyle.
  • Item
    Multi-View Stereo on Consistent Face Topology
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Fyffe, Graham; Nagano, Koki; Huynh, Loc; Saito, Shunsuke; Busch, Jay; Jones, Andrew; Li, Hao; Debevec, Paul; Loic Barthe and Bedrich Benes
    We present a multi-view stereo reconstruction technique that directly produces a complete high-fidelity head model with consistent facial mesh topology. While existing techniques decouple shape estimation and facial tracking, our framework jointly optimizes for stereo constraints and consistent mesh parameterization. Our method is therefore free from drift and fully parallelizable for dynamic facial performance capture. We produce highly detailed facial geometries with artist-quality UV parameterization, including secondary elements such as eyeballs, mouth pockets, nostrils, and the back of the head. Our approach consists of deforming a common template model to match multi-view input images of the subject, while satisfying cross-view, cross-subject, and cross-pose consistencies using a combination of 2D landmark detection, optical flow, and surface and volumetric Laplacian regularization. Since the flow is never computed between frames, our method is trivially parallelized by processing each frame independently. Accurate rigid head pose is extracted using a PCA-based dimension reduction and denoising scheme. We demonstrate high-fidelity performance capture results with challenging head motion and complex facial expressions around eye and mouth regions. While the quality of our results is on par with the current state-of-the-art, our approach can be fully parallelized, does not suffer from drift, and produces face models with production-quality mesh topologies.
  • Item
    Makeup Lamps: Live Augmentation of Human Faces via Projection
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Bermano, Amit Haim; Billeter, Markus; Iwai, Daisuke; Grundhöfer, Anselm; Loic Barthe and Bedrich Benes
    We propose the first system for live dynamic augmentation of human faces. Using projector-based illumination, we alter the appearance of human performers during novel performances. The key challenge of live augmentation is latency: an image is generated according to a specific pose, but is displayed on a different facial configuration by the time it is projected. Therefore, our system aims at reducing latency during every step of the process, from capture, through processing, to projection. Using infrared illumination, an optically and computationally aligned high-speed camera detects facial orientation as well as expression. The estimated expression blendshapes are mapped onto a lower dimensional space, and the facial motion and non-rigid deformation are estimated, smoothed and predicted through adaptive Kalman filtering. Finally, the desired appearance is generated by interpolating precomputed offset textures according to time, global position, and expression. We have evaluated our system through an optimized CPU and GPU prototype, and demonstrated successful low-latency augmentation for different performers and performances with varying facial play and motion speed. In contrast to existing methods, the presented system is the first to fully support dynamic facial projection mapping without requiring any physical tracking markers while incorporating facial expressions.
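    The latency-hiding step, predicting the facial state slightly ahead of the projection time, can be pictured with a plain constant-velocity Kalman filter for a single facial parameter; this is an illustrative stand-in for the adaptive filtering in the system, and the names and noise values are assumptions:
      import numpy as np

      class ConstantVelocityKalman:
          # Tracks one facial parameter (e.g., a blendshape weight) with state
          # [value, velocity] and extrapolates it over the projector latency.
          def __init__(self, q=1e-3, r=1e-2):
              self.x = np.zeros(2)              # state estimate
              self.P = np.eye(2)                # state covariance
              self.q, self.r = q, r             # process / measurement noise

          def step(self, z, dt, predict_ahead):
              F = np.array([[1.0, dt], [0.0, 1.0]])
              Q = self.q * np.array([[dt**3 / 3, dt**2 / 2],
                                     [dt**2 / 2, dt]])
              H = np.array([[1.0, 0.0]])
              # Predict to the measurement time.
              self.x = F @ self.x
              self.P = F @ self.P @ F.T + Q
              # Update with the new measurement z.
              S = H @ self.P @ H.T + self.r
              K = self.P @ H.T / S
              self.x = self.x + (K * (z - H @ self.x)).ravel()
              self.P = (np.eye(2) - K @ H) @ self.P
              # Extrapolate to projection time to compensate for latency.
              return self.x[0] + self.x[1] * predict_ahead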
  • Item
    Real-Time Multi-View Facial Capture with Synthetic Training
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Klaudiny, Martin; McDonagh, Steven; Bradley, Derek; Beeler, Thabo; Mitchell, Kenny; Loic Barthe and Bedrich Benes
    We present a real-time multi-view facial capture system facilitated by synthetic training imagery. Our method is able to achieve high-quality markerless facial performance capture in real-time from multi-view helmet camera data, employing an actor-specific regressor. The regressor training is tailored to the specified actor's appearance, and we further condition it on the expected illumination conditions and the physical capture rig by generating the training data synthetically. In order to leverage the information present in live imagery, which is typically provided by multiple cameras, we propose a novel multi-view regression algorithm that uses multi-dimensional random ferns. We show that higher quality can be achieved by regressing on multiple video streams than previous approaches that were designed to operate on only a single view. Furthermore, we evaluate possible camera placements and propose a novel camera configuration that makes it possible to mount cameras outside the actor's field of view, which is very beneficial as the cameras are then less of a distraction for the actor and allow for an unobstructed line of sight to the director and other actors. Our new real-time facial capture approach has immediate application in on-set virtual production, in particular with the ever-growing demand for motion-captured facial animation in visual effects and video games.
  • Item
    Gradient-based Steering for Vision-based Crowd Simulation Algorithms
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Dutra, Teofilo B.; Marques, Ricardo; Cavalcante-Neto, Joaquim Bento; Vidal, Creto A.; Pettré, Julien; Loic Barthe and Bedrich Benes
    Most recent crowd simulation algorithms equip agents with a synthetic vision component for steering. They offer promising perspectives through a more realistic simulation of the way humans navigate according to their perception of the surrounding environment. In this paper, we propose a new perception/motion loop for steering agents along collision-free trajectories that significantly improves the quality of vision-based crowd simulators. In contrast with solutions where agents avoid collisions in a purely reactive (binary) way, we suggest exploring the full range of possible adaptations and retaining the locally optimal one. To this end, we introduce a cost function, based on perceptual variables, which estimates an agent's situation considering both the risks of future collision and a desired destination. We then compute the partial derivatives of that function with respect to all possible motion adaptations. The agent then adapts its motion by following the gradient. This paper thus has two main contributions: the definition of a general purpose control scheme for steering synthetic vision-based agents; and the proposition of cost functions for evaluating the perceived danger of the current situation. We demonstrate improvements in several cases.
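    The described loop, evaluating a perceptual cost and following its gradient over possible motion adaptations, can be sketched as follows; the risk and goal terms, the weights, and the finite-difference gradient are illustrative assumptions rather than the paper's cost functions:
      import numpy as np

      def steering_cost(adapt, pos, vel, goal, obstacles, w_risk=1.0, w_goal=0.2):
          # adapt = (dtheta, dspeed): candidate change of orientation and speed.
          dtheta, dspeed = adapt
          speed = max(np.linalg.norm(vel) + dspeed, 1e-3)
          theta = np.arctan2(vel[1], vel[0]) + dtheta
          new_vel = speed * np.array([np.cos(theta), np.sin(theta)])
          # Risk term: inverse distance at the closest predicted approach.
          risk = 0.0
          for obs_pos, obs_vel in obstacles:
              rel_p, rel_v = obs_pos - pos, obs_vel - new_vel
              t_star = np.clip(-rel_p @ rel_v / max(rel_v @ rel_v, 1e-6), 0.0, 5.0)
              risk += 1.0 / max(np.linalg.norm(rel_p + t_star * rel_v), 0.1)
          # Goal term: deviation from the desired heading.
          desired = (goal - pos) / np.linalg.norm(goal - pos)
          return w_risk * risk + w_goal * (1.0 - desired @ (new_vel / speed))

      def steer(pos, vel, goal, obstacles, step=0.05, eps=1e-3):
          # One perception/motion iteration: follow the negative cost gradient.
          base = steering_cost((0.0, 0.0), pos, vel, goal, obstacles)
          grad = np.zeros(2)
          for i in range(2):
              d = np.zeros(2); d[i] = eps
              grad[i] = (steering_cost(tuple(d), pos, vel, goal, obstacles) - base) / eps
          return -step * grad          # (dtheta, dspeed) adaptation to apply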
  • Item
    Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Marcard, Timo von; Rosenhahn, Bodo; Black, Michael J.; Pons-Moll, Gerard; Loic Barthe and Bedrich Benes
    We address the problem of making human motion capture in the wild more practical by using a small set of inertial sensors attached to the body. Since the problem is heavily under-constrained, previous methods either use a large number of sensors, which is intrusive, or they require additional video input. We take a different approach and constrain the problem by: (i) making use of a realistic statistical body model that includes anthropometric constraints and (ii) using a joint optimization framework to fit the model to orientation and acceleration measurements over multiple frames. The resulting tracker Sparse Inertial Poser (SIP) enables motion capture using only 6 sensors (attached to the wrists, lower legs, back and head) and works for arbitrary human motions. Experiments on the recently released TNT15 dataset show that, using the same number of sensors, SIP achieves higher accuracy than the dataset baseline without using any video data. We further demonstrate the effectiveness of SIP on newly recorded challenging motions in outdoor scenarios such as climbing or jumping over a wall.
  • Item
    Learning Detail Transfer based on Geometric Features
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Berkiten, Sema; Halber, Maciej; Solomon, Justin; Ma, Chongyang; Li, Hao; Rusinkiewicz, Szymon; Loic Barthe and Bedrich Benes
    The visual richness of computer graphics applications is frequently limited by the difficulty of obtaining high-quality, detailed 3D models. This paper proposes a method for realistically transferring details (specifically, displacement maps) from existing high-quality 3D models to simple shapes that may be created with easy-to-learn modeling tools. Our key insight is to use metric learning to find a combination of geometric features that successfully predicts detail-map similarities on the source mesh; we use the learned feature combination to drive the detail transfer. The latter uses a variant of multi-resolution non-parametric texture synthesis, augmented by a high-frequency detail transfer step in texture space. We demonstrate that our technique can successfully transfer details among a variety of shapes including furniture and clothing.
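    The metric-learning step, choosing a combination of geometric features that predicts detail-map similarity, can be reduced to a simple non-negative least-squares fit for illustration; the formulation below is a simplification and all names are hypothetical:
      import numpy as np
      from scipy.optimize import nnls

      def learn_feature_weights(features, pairs, similarity):
          # features: (n_points, n_features) geometric features on the source mesh.
          # pairs: list of (i, j) point indices; similarity: measured detail-map
          # similarity per pair in [0, 1]. Fit non-negative weights w so that the
          # weighted feature distance predicts the dissimilarity (1 - similarity).
          diffs = np.array([np.abs(features[i] - features[j]) for i, j in pairs])
          w, _ = nnls(diffs, 1.0 - np.asarray(similarity))
          return w

      def best_source_match(features, weights, query_feature):
          # Use the learned metric to pick the source point whose detail to copy.
          return int(np.argmin(np.abs(features - query_feature) @ weights))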
  • Item
    Chamber Recognition in Cave Data Sets
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Schertler, Nico; Buchroithner, Manfred; Gumhold, Stefan; Loic Barthe and Bedrich Benes
    Quantitative analysis of cave systems represented as 3D models is becoming more and more important in the field of cave sciences. One open question is the rigorous identification of chambers in a data set, which has a deep impact on subsequent analysis steps such as size calculation. This affects the international recognition of a cave, since record-holding caves in particular bear significant tourist-attraction potential. In the past, chambers have been identified manually, without any clear definition or guidance. While experts agree on core parts of chambers in general, their opinions may differ in more controversial areas. Since this process is heavily subjective, it is not suited for objective quantitative comparison of caves. Therefore, we present a novel fully-automatic curve skeleton-based chamber recognition algorithm that has been derived from the requirements of field experts. We state the problem as a binary labeling problem on a curve skeleton and find a solution through energy minimization. A thorough evaluation of our results with the help of expert feedback showed that our algorithm matches real-world requirements very closely and is thus suited as the foundation for any quantitative cave analysis system.
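    The labeling step can be illustrated with a generic two-label energy on the skeleton graph, unary chamber-likelihood costs per node plus a smoothness penalty along skeleton edges, minimized here by simple iterated conditional modes; this is a stand-in for the paper's energy and optimizer, not a reproduction of them:
      import numpy as np

      def label_skeleton(unary, edges, smoothness=0.5, iters=20):
          # unary: (n, 2) costs of assigning label 0 (passage) / 1 (chamber) to each
          # skeleton node; edges: list of (i, j) skeleton edges. Greedy minimization
          # of  sum_i unary[i, l_i] + smoothness * sum_(i,j) [l_i != l_j].
          n = unary.shape[0]
          labels = np.argmin(unary, axis=1)
          neighbors = [[] for _ in range(n)]
          for i, j in edges:
              neighbors[i].append(j)
              neighbors[j].append(i)
          for _ in range(iters):
              changed = False
              for i in range(n):
                  costs = unary[i].astype(float).copy()
                  for l in (0, 1):
                      costs[l] += smoothness * sum(labels[j] != l for j in neighbors[i])
                  new_label = int(np.argmin(costs))
                  if new_label != labels[i]:
                      labels[i], changed = new_label, True
              if not changed:
                  break
          return labels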
  • Item
    Performance-Based Biped Control using a Consumer Depth Camera
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Lee, Yoonsang; Kwon, Taesoo; Loic Barthe and Bedrich Benes
    We present a technique for controlling physically simulated characters using user inputs from an off-the-shelf depth camera. Our controller takes a real-time stream of user poses as input, and simulates a stream of target poses of a biped based on it. The simulated biped mimics the user's actions while moving forward at a modest speed and maintaining balance. The controller is parameterized over a set of modulated reference motions that aims to cover the range of possible user actions. For real-time simulation, the best set of control parameters for the current input pose is chosen from the parameterized sets of pre-computed control parameters via a regression method. By applying the chosen parameters at each moment, the simulated biped can imitate a range of user actions while walking in various interactive scenarios.
  • Item
    Consistent Video Filtering for Camera Arrays
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Bonneel, Nicolas; Tompkin, James; Sun, Deqing; Wang, Oliver; Sunkavalli, Kalyan; Paris, Sylvain; Pfister, Hanspeter; Loic Barthe and Bedrich Benes
    Visual formats have advanced beyond single-view images and videos: 3D movies are commonplace, researchers have developed multi-view navigation systems, and VR is helping to push light field cameras to mass market. However, editing tools for these media are still nascent, and even simple filtering operations like color correction or stylization are problematic: naively applying image filters per frame or per view rarely produces satisfying results due to time and space inconsistencies. Our method preserves and stabilizes filter effects while being agnostic to the inner workings of the filter. It captures filter effects in the gradient domain, then uses input frame gradients as a reference to impose temporal and spatial consistency. Our least-squares formulation adds minimal overhead compared to naive data processing. Further, when filter cost is high, we introduce a filter transfer strategy that reduces the number of per-frame filtering computations by an order of magnitude, with only a small reduction in visual quality. We demonstrate our algorithm on several camera array formats including stereo videos, light fields, and wide baselines.
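    Restricted to a single pixel trace over time, the least-squares idea (keep the output close to the filtered frames while matching the input's temporal gradients) reduces to one small linear solve; the sketch below illustrates only that reduced formulation, not the full spatio-temporal system or the filter transfer strategy:
      import numpy as np

      def temporally_consistent(filtered, original, w=5.0):
          # filtered, original: (T, ...) stacks of per-frame values. Solve,
          # independently per pixel,
          #   min_o  sum_t |o_t - filtered_t|^2
          #        + w * sum_t |(o_t - o_{t-1}) - (original_t - original_{t-1})|^2
          T = filtered.shape[0]
          f = filtered.reshape(T, -1).astype(float)
          g = original.reshape(T, -1).astype(float)
          D = np.eye(T, k=1)[:-1] - np.eye(T)[:-1]     # forward temporal differences
          A = np.eye(T) + w * D.T @ D                  # shared normal equations
          b = f + w * D.T @ (D @ g)                    # data term + gradient reference
          return np.linalg.solve(A, b).reshape(filtered.shape)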
  • Item
    Practical Capture and Reproduction of Phosphorescent Appearance
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Nalbach, Oliver; Seidel, Hans-Peter; Ritschel, Tobias; Loic Barthe and Bedrich Benes
    This paper proposes a pipeline to accurately acquire, efficiently reproduce and intuitively manipulate phosphorescent appearance. In contrast to common appearance models, a model of phosphorescence needs to account for temporal change (decay) and previous illumination (saturation). For reproduction, we propose a rate equation that can be efficiently solved in combination with other illumination in a mixed integro-differential equation system. We describe an acquisition system to measure spectral coefficients of this rate equation for actual materials. Our model is evaluated by comparison to photographs of actual phosphorescent objects. Finally, we propose an artist-friendly interface to control the behavior of phosphorescent materials by specifying spatio-temporal appearance constraints.
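    The role of a rate equation with decay and saturation can be made concrete with a toy single-band model integrated by forward Euler; this is a stand-in for intuition only, not the paper's spectral rate equation or its coupled solution with other illumination:
      import numpy as np

      def simulate_phosphorescence(incident, dt=1.0 / 60.0, absorb=0.8,
                                   tau=2.0, s_max=1.0):
          # incident: per-frame incoming irradiance (arbitrary units). State s is the
          # stored excitation, with a toy rate equation
          #   ds/dt = absorb * incident * (1 - s / s_max) - s / tau,
          # where the emitted radiance is proportional to the decay term s / tau.
          s, emitted = 0.0, []
          for e_in in incident:
              ds = absorb * e_in * (1.0 - s / s_max) - s / tau
              s = max(s + dt * ds, 0.0)
              emitted.append(s / tau)
          return np.array(emitted)

      # Charge under light for two seconds, then observe the afterglow in darkness.
      glow = simulate_phosphorescence(np.concatenate([np.ones(120), np.zeros(600)]))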
  • Item
    STD: Student's t-Distribution of Slopes for Microfacet Based BSDFs
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Ribardière, Mickael; Bringier, Benjamin; Meneveaux, Daniel; Simonot, Lionel; Loic Barthe and Bedrich Benes
    This paper focuses on microfacet reflectance models, and more precisely on the definition of a new and more general distribution function, which includes both Beckmann's and GGX distributions widely used in the computer graphics community. To this end, our model makes use of an additional parameter g, which controls the slope and tail height of the distribution function. It actually corresponds to a bivariate Student's t-distribution in slope space, and it is presented with the associated analytical formulation of the geometric attenuation factor derived from the Smith representation. We also provide the analytical derivations for importance sampling isotropic and anisotropic materials. As shown in the results, this new representation offers a finer control of a wide range of materials, while extending the capabilities of fitting parameters with captured data.
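    To give a feel for the extra parameter, the unnormalized slope distribution below follows a bivariate Student's t-style falloff; the parameter name gamma, the scaling, and the omitted normalization are assumptions of this sketch, but the two limits match the abstract's claim of generalizing GGX and Beckmann:
      import numpy as np

      def slope_pdf_unnormalized(x_slope, y_slope, alpha=0.3, gamma=2.0):
          # Unnormalized Student's t-style slope distribution (requires gamma > 1).
          # gamma = 2        : (1 + r2)^(-2), a GGX-like heavy tail;
          # gamma -> infinity: exp(-r2), a Beckmann-like Gaussian falloff.
          r2 = (x_slope**2 + y_slope**2) / alpha**2
          return (1.0 + r2 / (gamma - 1.0)) ** (-gamma)

      # Tail comparison at a grazing slope: larger gamma decays much faster.
      for gamma in (1.5, 2.0, 5.0, 50.0):
          print(gamma, slope_pdf_unnormalized(1.0, 0.0, gamma=gamma))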
  • Item
    Hybrid Mesh-volume LoDs for All-scale Pre-filtering of Complex 3D Assets
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Loubet, Guillaume; Neyret, Fabrice; Loic Barthe and Bedrich Benes
    We address the problem of constructing appearance-preserving levels of detail (LoDs) of complex 3D models such as trees. We propose a hybrid method that combines the strengths of mesh and volume representations. Our main idea is to separate macroscopic (i.e. larger than the target spatial resolution) and microscopic (sub-resolution) surfaces at each scale and to treat them differently, because meshes are very efficient at representing macroscopic surfaces while sub-resolution geometry benefits from volumetric approximations. We introduce a new algorithm that detects the macroscopic surfaces of a mesh for a given resolution. We simplify these surfaces with edge collapses and we provide a method for pre-filtering their normal distributions and albedos. To approximate microscopic details, we use a heterogeneous microflake participating medium and we introduce a new artifact-free voxelization algorithm that preserves local occlusion. Thanks to our macroscopic surface analysis, our algorithm is fully automatic and it generates seamless LoDs at arbitrarily coarse resolutions for a wide range of 3D models.
  • Item
    Spatial Adjacency Maps for Translucency Simulation under General Illumination
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Maisch, Sebastian; Ropinski, Timo; Loic Barthe and Bedrich Benes
    Rendering translucent materials in real time is usually done by using surface diffusion and/or (translucent) shadow maps. The downsides of these approaches are that surface diffusion cannot handle translucency effects that show up when rendering thin objects, and that translucent shadow maps are only available for point light sources. Furthermore, translucent shadow maps introduce limitations to shadow mapping techniques exploiting the same maps. In this paper we present a novel approach for rendering translucent materials at interactive frame rates. Our approach allows for an efficient calculation of translucency with native support for general illumination conditions, especially area and environment lighting, at high accuracy. The proposed technique's only parameter is the diffusion profile used, and thus it works out of the box without any parameter tuning. Furthermore, it can be used in combination with any existing surface diffusion techniques to add translucency effects. Our approach introduces Spatial Adjacency Maps that rely on precomputations for fixed meshes. We show that these maps can be updated in real time to also handle deforming meshes and that our results are of superior quality compared to other well-known real-time techniques for rendering translucency.
  • Item
    Zooming on all Actors: Automatic Focus+Context Split Screen Video Generation
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Kumar, Moneish; Gandhi, Vineet; Ronfard, Rémi; Gleicher, Michael; Loic Barthe and Bedrich Benes
    Recordings of stage performances are easy to capture with a high-resolution camera, but are difficult to watch because the actors' faces are too small. We present an approach to automatically create a split screen video that transforms these recordings to show both the context of the scene as well as close-up details of the actors. Given a static recording of a stage performance and tracking information about the actors' positions, our system generates videos showing a focus+context view based on computed close-up camera motions using crop-and-zoom. The key to our approach is to compute these camera motions such that they are cinematically valid close-ups and to ensure that the set of views of the different actors are properly coordinated and presented. We pose the computation of camera motions as a convex optimization that creates detailed views and smooth movements, subject to cinematic constraints such as not cutting faces with the edge of the frame. Additional constraints link the close-up views of each actor, causing them to merge seamlessly when actors are close. Generated views are placed in a resulting layout that preserves the spatial relationships between actors. We demonstrate our results on a variety of staged theater and dance performances.
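    Stripped of the cinematic constraints and the coordination between views, the per-actor camera path is essentially a smoothing problem; the least-squares sketch below (the weights and the 1D setting are illustrative assumptions) trades off staying on the tracked actor against small velocities and accelerations of the virtual crop window:
      import numpy as np

      def diff_matrix(n):
          # Forward difference operator of size (n - 1, n).
          return np.eye(n, k=1)[:-1] - np.eye(n)[:-1]

      def smooth_virtual_camera(actor_x, w_vel=50.0, w_accel=500.0):
          # actor_x: tracked horizontal actor position per frame (pixels). Solve
          #   min_c |c - actor_x|^2 + w_vel |D1 c|^2 + w_accel |D2 c|^2
          # for the horizontal center c of a crop-and-zoom close-up window.
          T = len(actor_x)
          D1 = diff_matrix(T)                  # per-frame velocities
          D2 = diff_matrix(T - 1) @ D1         # per-frame accelerations
          A = np.eye(T) + w_vel * D1.T @ D1 + w_accel * D2.T @ D2
          return np.linalg.solve(A, np.asarray(actor_x, dtype=float))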
  • Item
    Flicker Observer Effect: Guiding Attention Through High Frequency Flicker in Images
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Waldin, Nicholas; Waldner, Manuela; Viola, Ivan; Loic Barthe and Bedrich Benes
    Drawing the user's gaze to an important item in an image or a graphical user interface is a common challenge. Usually, some form of highlighting is used, such as a clearly distinct color or a border around the item. Flicker can also be very salient, but is often perceived as annoying. In this paper, we explore high frequency flicker (60 to 72 Hz) to guide the user's attention in an image. At such high frequencies, the critical flicker frequency (CFF) threshold is reached, which makes the flicker appear to fuse into a stable signal. However, the CFF is not uniform across the visual field, but is higher in the peripheral vision under normal lighting conditions. Through experiments, we show that high frequency flicker can be easily detected by observers in the peripheral vision, but the signal is hardly visible in the foveal vision when users directly look at the flickering patch. We demonstrate that this property can be used to draw the user's attention to important image regions using a standard high refresh-rate computer monitor with minimal visible modifications to the image. In an uncalibrated visual search task, users could easily spot the specified search targets flickering at a very high frequency in a crowded image. They also reported that high frequency flicker was distracting when they had to attend to another region, while it was hardly noticeable when looking at the flickering region itself.
  • Item
    GPU Ray Tracing using Irregular Grids
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Pérard-Gayot, Arsène; Kalojanov, Javor; Slusallek, Philipp; Loic Barthe and Bedrich Benes
    We present a spatial index structure to accelerate ray tracing on GPUs. It is a flat, non-hierarchical spatial subdivision of the scene into axis aligned cells of varying size. In order to construct it, we first nest an octree into each cell of a uniform grid. We then apply two optimization passes to increase ray traversal performance: First, we reduce the expected cost for ray traversal by merging cells together. This adapts the structure to complex primitive distributions, solving the "teapot in a stadium" problem. Second, we decouple the cell boundaries used during traversal for rays entering and exiting a given cell. This allows us to extend the exiting boundaries over adjacent cells that are either empty or do not contain additional primitives. Now, exiting rays can skip empty space and avoid repeating intersection tests. Finally, we demonstrate that in addition to the fast ray traversal performance, the structure can be rebuilt efficiently in parallel, allowing for ray tracing dynamic scenes.
  • Item
    Parallel BVH Construction using Progressive Hierarchical Refinement
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Hendrich, Jakub; Meister, Daniel; Bittner, Jiří; Loic Barthe and Bedrich Benes
    We propose a novel algorithm for construction of bounding volume hierarchies (BVHs) for multi-core CPU architectures. The algorithm constructs the BVH by a divisive top-down approach using a progressively refined cut of an existing auxiliary BVH. We propose a new strategy for refining the cut that significantly reduces the workload of individual steps of BVH construction. Additionally, we propose a new method for integrating spatial splits into the BVH construction algorithm. The auxiliary BVH is constructed using a very fast method such as LBVH based on Morton codes. We show that the method provides a very good trade-off between the build time and ray tracing performance. We evaluated the method within the Embree ray tracing framework and show that it compares favorably with the Embree BVH builders regarding build time while maintaining comparable ray tracing speed.
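    The notion of a progressively refined cut can be pictured as a frontier of auxiliary-BVH nodes in which the largest node is repeatedly replaced by its children; the greedy surface-area criterion and the data layout below are simplifications of the paper's refinement strategy, not a reproduction of it:
      import heapq

      class AuxNode:
          # Minimal auxiliary BVH node: surface area, children, primitive ids.
          def __init__(self, area, prims, left=None, right=None):
              self.area, self.prims = area, prims
              self.left, self.right = left, right

      def refine_cut(root, max_cut_size):
          # Greedily open the largest-area node on the cut until the frontier is
          # fine enough to drive one top-down split of the final BVH.
          counter = 0                                    # tie-breaker for the heap
          heap = [(-root.area, counter, root)]
          leaves = []
          while heap and len(heap) + len(leaves) < max_cut_size:
              _, _, node = heapq.heappop(heap)
              if node.left is None:                      # cannot be opened further
                  leaves.append(node)
                  continue
              for child in (node.left, node.right):
                  counter += 1
                  heapq.heappush(heap, (-child.area, counter, child))
          return leaves + [n for _, _, n in heap]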
  • Item
    A GPU-Adapted Structure for Unstructured Grids
    (The Eurographics Association and John Wiley & Sons Ltd., 2017) Zayer, Rhaleb; Steinberger, Markus; Seidel, Hans-Peter; Loic Barthe and Bedrich Benes
    A key advantage of working with structured grids (e.g., images) is the ability to directly tap into the powerful machinery of linear algebra. This is much less the case for unstructured grids, where intermediate bookkeeping data structures stand in the way. On modern high performance computing hardware, the conventional wisdom behind these intermediate structures is further challenged by costly memory access, and more importantly by prohibitive memory resources on environments such as graphics hardware. In this paper, we bypass this problem by introducing a sparse matrix representation for unstructured grids which not only reduces the memory storage requirements but also cuts down on the bulk of data movement from global storage to the compute units. In order to take full advantage of the proposed representation, we augment ordinary matrix multiplication by means of action maps, local maps which encode the desired interaction between grid vertices. In this way, geometric computations and topological modifications translate into concise linear algebra operations. In our algorithmic formulation, we capitalize on the nature of sparse matrix-vector multiplication which allows avoiding explicit transpose computation and storage. Furthermore, we develop an efficient vectorization of the demanding assembly process for standard graph and finite element matrices.
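    The flavor of treating an unstructured grid as a sparse matrix can be illustrated with a plain face-vertex incidence matrix, from which quantities like vertex valences and face centroids fall out of ordinary sparse products; this generic example does not reproduce the paper's specialized representation or its action maps:
      import numpy as np
      from scipy.sparse import csr_matrix

      def incidence_matrix(faces, n_vertices):
          # Rows are faces, columns are vertices; entry 1 where the vertex is used.
          faces = np.asarray(faces)
          rows = np.repeat(np.arange(len(faces)), faces.shape[1])
          cols = faces.ravel()
          return csr_matrix((np.ones(len(cols)), (rows, cols)),
                            shape=(len(faces), n_vertices))

      # Triangle mesh of a unit square (two triangles).
      V = np.array([[0., 0., 0.], [1., 0., 0.], [1., 1., 0.], [0., 1., 0.]])
      F = [[0, 1, 2], [0, 2, 3]]
      M = incidence_matrix(F, len(V))

      valence = np.asarray(M.sum(axis=0)).ravel()       # faces incident to each vertex
      face_size = np.asarray(M.sum(axis=1)).ravel()     # vertices per face
      centroids = (M @ V) / face_size[:, None]          # per-face centroids via SpMV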