Volume 33, Issue 8
https://diglib.eg.org:443/handle/10.2312/11790
Regular Issue
2024-03-29T09:08:43Z

Interactive Diffraction from Biological Nanostructures
https://diglib.eg.org:443/handle/10.1111/v33i8pp177-188
Interactive Diffraction from Biological Nanostructures
Dhillon, D. S.; Teyssier, J.; Single, M.; Gaponenko, I.; Milinkovitch, M. C.; Zwicker, M.
Oliver Deussen and Hao (Richard) Zhang
We describe a technique for interactive rendering of diffraction effects produced by biological nanostructures, such as snake skin surface gratings. Our approach uses imagery from atomic force microscopy that accurately captures the geometry of the nanostructures responsible for structural colouration, that is, colouration due to wave interference, in a variety of animals. We develop a rendering technique that constructs bidirectional reflection distribution functions (BRDFs) directly from the measured data and leverages pre‐computation to achieve interactive performance. We demonstrate results of our approach using various shapes of the surface grating nanostructures. Finally, we evaluate the accuracy of our pre‐computation‐based technique and compare it to a reference BRDF construction technique.
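The core geometric idea — that a measured nanoscale height field determines the diffraction lobe — can be illustrated with a simplified scalar-diffraction sketch (not the paper's actual BRDF construction or pre-computation): the far-field intensity is the power spectrum of the height-induced phase function. The function name, parameters, and the toy sinusoidal grating standing in for AFM data are all assumptions for illustration.

```python
import numpy as np

def diffraction_lobe(height_map, wavelength, cos_in, cos_out):
    """Scalar-diffraction sketch: the far-field intensity of a surface
    grating is the power spectrum of its height-induced phase function.
    `height_map` and `wavelength` are in metres; this is a hypothetical
    illustration, not the paper's API."""
    # Phase delay accumulated on reflection for the given geometry.
    phase = 2.0 * np.pi / wavelength * (cos_in + cos_out) * height_map
    p = np.exp(1j * phase)
    # The Fourier transform maps surface spatial frequencies to
    # outgoing directions; its squared magnitude is the lobe intensity.
    spectrum = np.fft.fftshift(np.fft.fft2(p))
    intensity = np.abs(spectrum) ** 2
    return intensity / intensity.sum()  # normalize to unit energy

# Toy sinusoidal grating (~1 µm period, 50 nm amplitude) standing in
# for a measured AFM height map, lit at 550 nm.
x = np.arange(256) * 20e-9                    # 20 nm sample spacing
h = 50e-9 * np.sin(2 * np.pi * x / 1e-6)
lobe = diffraction_lobe(np.tile(h, (256, 1)), 550e-9, 1.0, 1.0)
```

For a periodic grating like this, the energy concentrates into discrete diffraction orders, which is what produces the wavelength-dependent structural colours.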
2014-01-01T00:00:00Z

Optimized Generation of Stereoscopic CGI Films by 3D Image Warping
https://diglib.eg.org:443/handle/10.1111/v33i8pp145-156
Optimized Generation of Stereoscopic CGI Films by 3D Image Warping
Noguera, José M.; Rueda, Antonio J.; Espada, Miguel A.; Martín, Máximo
Oliver Deussen and Hao (Richard) Zhang
The generation of a stereoscopic animation film requires doubling the rendering times and hence the cost. In this paper, we address this problem and propose an automatic system for generating a stereo pair from a given image and its depth map. Although several solutions exist in the literature, the high standards of image quality required in the context of a professional animation studio forced us to develop specially crafted algorithms that avoid artefacts caused by occlusions, anti‐aliasing filters, etc. This paper describes all the algorithms involved in our system and provides their GPU implementation. The proposed system has been tested with real‐life working scenarios. Our experiments show that the second view of the stereoscopic pair can be computed with as little as 15% of the effort of the original image while guaranteeing a similar quality.
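The basic 3D image-warping step underlying such systems can be sketched as a forward warp with a z-buffer: each pixel shifts horizontally by a disparity proportional to baseline × focal length / depth, and the nearest surface wins when pixels collide. This is a minimal grayscale CPU sketch of the general technique, not the paper's production pipeline (which additionally handles occlusion holes and anti-aliasing artefacts); all names and parameters are assumptions.

```python
import numpy as np

def warp_second_view(image, depth, baseline, focal_length):
    """Forward 3D image warp: synthesize the second eye's view from one
    image and its depth map. A simplified sketch under a rectified
    pinhole camera model; disoccluded pixels are left as holes (zero)."""
    h, w = depth.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), np.inf)
    disparity = baseline * focal_length / depth       # in pixels
    for y in range(h):
        for x in range(w):
            xs = int(round(x - disparity[y, x]))      # shift for the second eye
            if 0 <= xs < w and depth[y, x] < zbuf[y, xs]:
                zbuf[y, xs] = depth[y, x]             # nearer surface wins
                out[y, xs] = image[y, x]
    return out

# A fronto-parallel plane at constant depth warps to a pure horizontal shift.
img = np.arange(16, dtype=float).reshape(4, 4)
right = warp_second_view(img, np.full((4, 4), 10.0),
                         baseline=1.0, focal_length=10.0)
```

The per-pixel loop is embarrassingly parallel, which is why a GPU implementation of this style of warp is natural.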
2014-01-01T00:00:00Z

Object Repositioning Based on the Perspective in a Single Image
https://diglib.eg.org:443/handle/10.1111/v33i8pp157-166
Object Repositioning Based on the Perspective in a Single Image
Iizuka, S.; Endo, Y.; Hirose, M.; Kanamori, Y.; Mitani, J.; Fukui, Y.
Oliver Deussen and Hao (Richard) Zhang
We propose an image editing system for repositioning objects in a single image based on the perspective of the scene. In our system, an input image is transformed into a layer structure that is composed of object layers and a background layer, and then the scene depth is computed from the ground region that is specified by the user using a simple boundary line. The object size and order of overlapping are automatically determined during repositioning based on the scene depth. In addition, our system enables the user to move shadows along with objects naturally by extracting the shadow mattes using only a few user‐specified scribbles. Finally, we demonstrate the versatility of our system through applications to depth‐of‐field effects, fog synthesis and 3D walkthrough in an image.
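The perspective rule that drives the automatic resizing can be stated in a few lines: under a pinhole camera, on-screen size is inversely proportional to depth, so an object layer moved from depth z_old to z_new is rescaled by z_old / z_new. This is a sketch of that geometric idea only, not the paper's full layering or shadow-matting system; the function names are hypothetical.

```python
def reposition_scale(z_old, z_new):
    """Pinhole-perspective size rule: apparent size is proportional to
    1/depth, so moving an object from z_old to z_new scales its layer
    by z_old / z_new."""
    return z_old / z_new

def new_footprint(width, height, z_old, z_new):
    """Rescale an object layer's pixel footprint after repositioning."""
    s = reposition_scale(z_old, z_new)
    return width * s, height * s

# An object moved twice as far away appears half as large on screen.
w, h = new_footprint(100.0, 50.0, z_old=2.0, z_new=4.0)
```

The same depth values also give the overlap order for free: nearer layers are composited over farther ones.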
2014-01-01T00:00:00Z

Higher Order Ray Marching
https://diglib.eg.org:443/handle/10.1111/v33i8pp167-176
Higher Order Ray Marching
Muñoz, Adolfo
Oliver Deussen and Hao (Richard) Zhang
Rendering participating media is still a challenging and time-consuming task. In such media, light interacts at every differential point of its path. Several rendering algorithms are based on ray marching: dividing the path of light into segments and calculating interactions at each of them. In this work, we revisit and analyze ray marching both as a quadrature integrator and as an initial value problem solver, and apply higher order adaptive solvers that ensure several interesting properties, such as faster convergence, adaptiveness to the mathematical definition of light transport and robustness to singularities. We compare several numerical methods, including standard ray marching and Monte Carlo integration, and illustrate the benefits of different solvers for a variety of scenes. Any participating media rendering algorithm that is based on ray marching may benefit from the application of our approach by reducing the number of needed samples (and therefore, rendering time) and increasing accuracy.
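The initial-value-problem view can be made concrete on the transmittance ODE dT/ds = -σ(s)·T(s), T(0) = 1: standard ray marching is forward Euler, and substituting a higher-order solver such as classical RK4 converges far faster per segment. The extinction function below is a made-up smooth example, and this sketch uses fixed (non-adaptive) steps, unlike the adaptive solvers the paper applies.

```python
import math

def sigma(s):
    # Hypothetical smoothly varying extinction coefficient along the ray.
    return 1.0 + 0.5 * math.sin(3.0 * s)

def transmittance_euler(s_end, n):
    """Standard ray marching = forward Euler on dT/ds = -sigma(s) * T."""
    ds, T, s = s_end / n, 1.0, 0.0
    for _ in range(n):
        T += ds * (-sigma(s) * T)
        s += ds
    return T

def transmittance_rk4(s_end, n):
    """Classical 4th-order Runge-Kutta on the same ODE: O(h^4) error
    instead of O(h), so far fewer segments reach the same accuracy."""
    ds, T, s = s_end / n, 1.0, 0.0
    f = lambda s, T: -sigma(s) * T
    for _ in range(n):
        k1 = f(s, T)
        k2 = f(s + ds / 2, T + ds / 2 * k1)
        k3 = f(s + ds / 2, T + ds / 2 * k2)
        k4 = f(s + ds, T + ds * k3)
        T += ds / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        s += ds
    return T
```

For this σ the exact answer is T(s) = exp(-(s + (1 - cos 3s)/6)), so both solvers can be checked directly; with the same segment count, RK4's error is orders of magnitude below Euler's, which is the "fewer samples for the same accuracy" trade-off the abstract describes.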
2014-01-01T00:00:00Z