Search Results
Showing results 1-10 of 48
Item: Microtiles: Extracting Building Blocks from Correspondences
The Eurographics Association and Blackwell Publishing Ltd., 2012
Authors: Kalojanov, Javor; Bokeloh, Martin; Wand, Michael; Guibas, Leonidas; Seidel, Hans-Peter; Slusallek, Philipp
Editors: Eitan Grinspun and Niloy Mitra
Abstract: In this paper, we develop a theoretical framework for characterizing shapes by building blocks. We address two questions: First, how do shape correspondences induce building blocks? For this, we introduce a new representation for structuring partial symmetries (partial self-correspondences), which we call "microtiles". Starting from input correspondences that form point-wise equivalence relations, microtiles are obtained by grouping connected components of points that share the same set of symmetry transformations. The decomposition is unique, requires no parameters beyond the input correspondences, and encodes the partial symmetries of all subsets of the input. The second question is: What is the class of shapes that can be assembled from these building blocks? Here, we specifically consider r-similarity as the correspondence model, i.e., matching of local r-neighborhoods. Our main result is that the microtiles of the partial r-symmetries of an object S can build all objects that are (r+ε)-similar to S for any ε > 0. Again, the construction is unique. Furthermore, we give necessary conditions for a set of assembly rules for the pairwise connection of tiles. We describe a practical algorithm for computing microtile decompositions under rigid motions, a corresponding prototype implementation, and conduct a number of experiments to visualize the structural properties in practice.

Item: Optimizing Disparity for Motion in Depth
The Eurographics Association and Blackwell Publishing Ltd., 2013
Authors: Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter
Editors: Nicolas Holzschuch and Szymon Rusinkiewicz
Abstract: Beyond the careful design of stereo acquisition equipment and rendering algorithms, disparity post-processing has recently received much attention; one of its key tasks is to compress the originally large disparity range to avoid viewing discomfort. The perception of dynamic stereo content, however, relies on reproducing the full disparity-time volume that a scene point undergoes in motion. This volume can be strongly distorted by a manipulation that is only concerned with changing disparity at one instant in time, even if the temporal coherence of that change is maintained. We propose an optimization to preserve the stereo motion of content that was subject to an arbitrary disparity manipulation, based on a perceptual model of temporal disparity changes. Furthermore, we introduce a novel 3D warping technique to create stereo image pairs that conform to this optimized disparity map. The paper concludes with perceptual studies of motion-reproduction quality and of task performance in a simple game, showing how our optimization can achieve both viewing comfort and faithful stereo motion.

Item: Interactive Motion Mapping for Real-time Character Control
The Eurographics Association and John Wiley and Sons Ltd., 2014
Authors: Rhodin, Helge; Tompkin, James; Kim, Kwang In; Varanasi, Kiran; Seidel, Hans-Peter; Theobalt, Christian
Editors: B. Levy and J. Kautz
Abstract: It is now possible to capture the 3D motion of the human body on consumer hardware and to puppet skeleton-based virtual characters in real time. However, many characters do not have humanoid skeletons. Characters such as spiders and caterpillars have no boned skeletons at all, and their shapes and motions are very different. In general, character control under arbitrary shape and motion transformations is unsolved: how might these motions be mapped? We control characters with a method that avoids the rigging-skinning pipeline; source and target characters need neither skeletons nor rigs. We use interactively defined sparse pose correspondences to learn a mapping between arbitrary 3D point source sequences and mesh target sequences. Then, we puppet the target character in real time. We demonstrate the versatility of our method through results on diverse virtual characters with different input motion controllers. Our method provides a fast, flexible, and intuitive interface for arbitrary motion mapping, which provides new ways to control characters for real-time animation.
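As a loose illustration of the correspondence-driven idea in the motion-mapping entry above (this is not the authors' model; the linear form, the function names, and all array shapes are assumptions of this sketch), one could fit a regularized linear map from flattened source poses to flattened target mesh poses over the user-specified example pairs and apply it per frame:

```python
import numpy as np

def fit_pose_mapping(src_poses, tgt_poses, lam=1e-3):
    """Toy sketch: fit a linear map (with bias) from source pose space
    to target mesh space from K user-specified pose correspondences.
    src_poses: K x Ds array of flattened source 3D point sets
    tgt_poses: K x Dt array of flattened target mesh vertex positions"""
    K = src_poses.shape[0]
    X = np.hstack([src_poses, np.ones((K, 1))])   # append bias column
    # Ridge regularization keeps the map stable when K is small.
    A = X.T @ X + lam * np.eye(X.shape[1])
    W = np.linalg.solve(A, X.T @ tgt_poses)       # (Ds+1) x Dt map
    return W

def map_pose(W, src_pose):
    """Drive the target character from one live source frame."""
    return np.append(src_pose, 1.0) @ W           # flattened Dt vector
```

Under these assumptions, real-time puppeteering reduces to one matrix product per captured frame.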
Item: Deep Shading: Convolutional Neural Networks for Screen Space Shading
The Eurographics Association and John Wiley & Sons Ltd., 2017
Authors: Nalbach, Oliver; Arabadzhiyska, Elena; Mehta, Dushyant; Seidel, Hans-Peter; Ritschel, Tobias
Editors: Zwicker, Matthias and Sander, Pedro
Abstract: In computer vision, convolutional neural networks (CNNs) achieve unprecedented performance for inverse problems in which RGB pixel appearance is mapped to attributes such as positions, normals, or reflectance. In computer graphics, screen space shading has boosted the quality of real-time rendering, converting the same kind of attributes of a virtual scene back to appearance and enabling effects such as ambient occlusion, indirect light, scattering, and many more. In this paper we consider the diagonal problem: synthesizing appearance from given per-pixel attributes using a CNN. The resulting Deep Shading renders screen space effects at competitive quality and speed while not being programmed by human experts but learned from example images.

Item: Efficient Multi-image Correspondences for On-line Light Field Video Processing
The Eurographics Association and John Wiley & Sons Ltd., 2016
Authors: Dąbała, Łukasz; Ziegler, Matthias; Didyk, Piotr; Zilly, Frederik; Keinert, Joachim; Myszkowski, Karol; Seidel, Hans-Peter; Rokita, Przemysław; Ritschel, Tobias
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
Abstract: Light field videos express the entire visual information of an animated scene, but their sheer size typically makes capture, processing, and display an off-line process, i.e., the time between initial capture and final display is far from real-time. In this paper we propose a solution for one of the key bottlenecks in such a processing pipeline: reliable depth reconstruction, possibly for many views. This is enabled by a novel correspondence algorithm that converts the video streams from a sparse array of off-the-shelf cameras into an array of animated depth maps. The algorithm is based on a generalization of the classic multi-resolution Lucas-Kanade correspondence algorithm from a pair of images to an entire array. A special inter-image confidence consolidation allows recovery from unreliable matching in some locations and some views. The algorithm can be implemented efficiently in massively parallel hardware, allowing for interactive computation. The resulting depth quality as well as the computational performance compares favorably to other state-of-the-art light-field-to-depth approaches, as well as to stereo matching techniques. Another outcome of this work is a data set of light field videos captured with multiple variants of sparse camera arrays.
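For context on the building block this light-field paper generalizes, here is a minimal single-window step of the classic two-image Lucas-Kanade method (the paper's multi-resolution, multi-view version with confidence consolidation is far more involved; the function name and window size below are illustrative):

```python
import numpy as np

def lucas_kanade_point(I0, I1, x, y, r=7):
    """Estimate the 2D displacement of pixel (x, y) between grayscale
    images I0 and I1 from one (2r+1)^2 window (classic LK step)."""
    # Spatial gradients of I0 and the temporal difference to I1.
    Iy, Ix = np.gradient(I0)        # np.gradient returns (d/dy, d/dx)
    It = I1 - I0
    win = np.s_[y - r:y + r + 1, x - r:x + r + 1]
    A = np.stack([Ix[win].ravel(), Iy[win].ravel()], axis=1)  # N x 2
    b = -It[win].ravel()
    # Least-squares solve of the brightness-constancy constraints
    # A @ d = b for the flow vector d = (dx, dy).
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d
```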
Item: Perceptually-motivated Stereoscopic Film Grain
The Eurographics Association and John Wiley and Sons Ltd., 2014
Authors: Templin, Krzysztof; Didyk, Piotr; Myszkowski, Karol; Seidel, Hans-Peter
Editors: J. Keyser, Y. J. Kim, and P. Wonka
Abstract: Independent management of film grain in each view of a stereoscopic video can lead to visual discomfort. The existing alternative is to project the grain onto the scene geometry. Such grain, however, looks unnatural, changes object perception, and emphasizes inaccuracies in depth arising during 2D-to-3D conversion. We propose an advanced method of grain positioning that scatters the grain in scene space. In a series of perceptual experiments, we estimate the optimal parameter values for the proposed method, analyze the user preference distribution among the proposed and the two existing methods, and show the influence of the method on object perception.

Item: Interactive Modeling of Cellular Structures on Surfaces with Application to Additive Manufacturing
The Eurographics Association and John Wiley & Sons Ltd., 2020
Authors: Stadlbauer, Pascal; Mlakar, Daniel; Seidel, Hans-Peter; Steinberger, Markus; Zayer, Rhaleb
Editors: Panozzo, Daniele and Assarsson, Ulf
Abstract: The rich and evocative patterns of natural tessellations endow them with an unmistakable artistic appeal and structural properties that are echoed across design, production, and manufacturing. Unfortunately, interactive control of such patterns, as modeled by Voronoi diagrams, is limited to the simple two-dimensional case and does not extend well to freeform surfaces. We present an approach for direct modeling and editing of such cellular structures on surface meshes. The overall modeling experience is driven by a set of editing primitives that are efficiently implemented on graphics hardware. We feature a novel application for 3D printing on modern support-free additive manufacturing platforms. Our method decomposes the input surface into a cellular skeletal structure that hosts a set of overlay shells. In this way, material saving can be channeled to the shells while structural stability is channeled to the skeleton. To accommodate the available printer build volume, the cellular structure can be further split into moderately sized parts. Together with the shells, these parts can be conveniently packed to save on production time. Assembly of the printed parts is streamlined by a part-numbering scheme that respects the geometric layout of the input model.

Item: SnakeBinning: Efficient Temporally Coherent Triangle Packing for Shading Streaming
The Eurographics Association and John Wiley & Sons Ltd., 2021
Authors: Hladky, Jozef; Seidel, Hans-Peter; Steinberger, Markus
Editors: Mitra, Niloy and Viola, Ivan
Abstract: Streaming rendering, e.g., rendering in the cloud and streaming via a mobile connection, suffers from increased latency and unreliable connections. High-quality framerate upsampling can hide these issues, especially when shading is captured into an atlas and transmitted alongside geometric information. The captured shading information must account for triangle footprints and temporal stability to ensure efficient video encoding. Previous approaches consider either temporal stability or sample distributions, but none focuses on both. With SnakeBinning, we present an efficient triangle packing approach that adjusts sample distributions and caters for temporal coherence. Using a multi-dimensional binning approach, we enforce tight packing among triangles while creating optimal sample distributions. Our binning is built on top of hardware-supported real-time rendering, where bins are mapped to individual pixels in a virtual framebuffer. Fragment shader interlock and atomic operations enforce a global ordering of triangles within each bin, so temporal coherence according to the primitive order is achieved. Resampling the bin distribution guarantees high occupancy among all bins and a dense atlas packing. Shading samples are captured directly into the atlas in a rasterization pass, adjusting samples for perspective effects and creating a tight packing. Comparison to previous atlas packing approaches shows that our approach is faster than previous work and achieves the best sample distributions while maintaining temporal coherence. In this way, SnakeBinning achieves the highest rendering quality under equal atlas memory requirements. At the same time, its temporal coherence ensures that we require equal or less bandwidth than the previous state of the art. As SnakeBinning outperforms previous approaches in all relevant aspects, it is the preferred choice for texture-based streaming rendering.
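A toy CPU stand-in for the order-preserving binning that SnakeBinning performs on the GPU, where fragment shader interlock and atomics enforce the ordering (the `bin_for` callback and the data layout are assumptions of this sketch, not the paper's implementation):

```python
from collections import defaultdict

def bin_triangles(triangles, bin_for):
    """Group triangles into bins (one bin per virtual-framebuffer pixel)
    while preserving primitive submission order within each bin; this
    stable per-bin order is the invariant that yields temporal coherence."""
    bins = defaultdict(list)
    for prim_id, tri in enumerate(triangles):
        bins[bin_for(tri)].append(prim_id)   # append keeps submission order
    return bins
```

Because each per-bin list depends only on primitive order, the same scene submitted in the same order lands in the same atlas layout, which is what lets consecutive frames encode as small video deltas.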
Item: Manipulating Refractive and Reflective Binocular Disparity
The Eurographics Association and John Wiley and Sons Ltd., 2014
Authors: Dabala, Lukasz; Kellnhofer, Petr; Ritschel, Tobias; Didyk, Piotr; Templin, Krzysztof; Myszkowski, Karol; Rokita, P.; Seidel, Hans-Peter
Editors: B. Levy and J. Kautz
Abstract: Presenting stereoscopic content on 3D displays is a challenging task, usually requiring manual adjustments. A number of techniques have been developed to aid this process, but they account only for the binocular disparity of surfaces that are diffuse and opaque. However, combinations of transparent and specular materials are common in the real and virtual worlds, and they pose a significant problem. For example, excessive disparities can be created that cannot be fused by the observer. Also, multiple stereo interpretations become possible, e.g., for glass that both reflects and refracts, which may confuse the observer and result in a poor 3D experience. In this work, we propose an efficient method for analyzing and controlling disparities in computer-generated images of such scenes where surface positions and a layer decomposition are available. Instead of assuming a single per-pixel disparity value, we estimate all possibly perceived disparities at each image location. Based on this representation, we define an optimization that finds the best per-pixel camera parameters, ensuring that all disparities can be easily fused by a human. A preliminary perceptual study indicates that our approach combines comfortable viewing with realistic depiction of typical specular scenes.

Item: NoRM: No-Reference Image Quality Metric for Realistic Image Synthesis
The Eurographics Association and John Wiley and Sons Ltd., 2012
Authors: Herzog, Robert; Cadík, Martin; Aydin, Tunç O.; Kim, Kwang In; Myszkowski, Karol; Seidel, Hans-Peter
Editors: P. Cignoni and T. Ertl
Abstract: Synthetically generating images and video frames of complex 3D scenes with photo-realistic rendering software is often prone to artifacts and requires expert knowledge to tune the parameters. The manual work required for detecting and preventing artifacts can be automated through objective quality evaluation of synthetic images. Most practical objective quality assessment methods for natural images rely on a ground-truth reference, which is often not available in rendering applications. While general-purpose no-reference image quality assessment is a difficult problem, we show in a subjective study that the performance of the dedicated no-reference metric presented in this paper can match state-of-the-art metrics that do require a reference. This level of predictive power is achieved by exploiting information about the underlying synthetic scene (e.g., 3D surfaces, textures) instead of merely considering color, and by training our learning framework with typical rendering artifacts. We show that our method successfully detects various non-trivial types of artifacts, such as noise, the clamping bias due to insufficient virtual point light sources, and shadow-map discretization artifacts. We also briefly discuss an inpainting method for automatic correction of detected artifacts.
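In the same spirit as NoRM's use of scene-side features, though not the authors' actual learning framework (the feature layout, the ridge regressor, and the function name are assumptions), a minimal no-reference artifact predictor can regress per-pixel artifact scores from auxiliary render buffers instead of comparing against a reference image:

```python
import numpy as np

def fit_artifact_predictor(features, artifact_maps, lam=1e-3):
    """Toy per-pixel regressor sketch for a no-reference metric.
    features: list of H x W x C feature images from the renderer
              (e.g., stacked normals, depth, texture statistics)
    artifact_maps: list of H x W artifact annotations, available at
              training time only (e.g., from converged renders)."""
    X = np.concatenate([f.reshape(-1, f.shape[-1]) for f in features])
    y = np.concatenate([a.ravel() for a in artifact_maps])
    # Ridge regression via regularized normal equations.
    C = X.shape[1]
    w = np.linalg.solve(X.T @ X + lam * np.eye(C), X.T @ y)
    return w

def predict_artifacts(w, feature_image):
    """Score a new frame per pixel from its feature buffers alone."""
    H, W, C = feature_image.shape
    return (feature_image.reshape(-1, C) @ w).reshape(H, W)
```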