Search Results (showing 1-10 of 95)
Item: Seamless and Aligned Texture Optimization for 3D Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Wang, Lei; Ge, Linlin; Zhang, Qitong; Feng, Jieqing; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
Restoring the appearance of the model is a crucial step for achieving realistic 3D reconstruction. High-fidelity textures can also conceal some geometric defects. Since the estimated camera parameters and reconstructed geometry usually contain errors, subsequent texture mapping often suffers from undesirable visual artifacts such as blurring, ghosting, and visible seams. In particular, significant misalignment between the reconstructed model and the registered images will lead to texturing the mesh with inconsistent image regions. Eliminating these artifacts to generate high-quality textures remains a challenge. In this paper, we address this issue by designing a texture optimization method that generates seamless and aligned textures for 3D reconstruction. The main idea is to detect misalignment regions between images and geometry and exclude them from texture mapping. To handle the texture holes caused by these excluded regions, a cross-patch texture hole-filling method is proposed, which can also synthesize plausible textures for invisible faces. Moreover, for better stitching of the textures from different views, an improved camera pose optimization is presented that introduces color adjustment and boundary point sampling. Experimental results show that the proposed method robustly eliminates the artifacts caused by inaccurate input data and produces high-quality texture results compared with state-of-the-art methods.

Item: Real-time Seamless Object Space Shading (The Eurographics Association, 2024)
Li, Tianyu; Guo, Xiaoxin; Hu, Ruizhen; Charalambous, Panayiotis
Object space shading remains a challenging problem in real-time rendering due to runtime overhead and object parameterization limitations. While the recently developed algorithm by Baker et al. [BJ22] enables high-performance real-time object space shading, it still suffers from seam artifacts. In this paper, we introduce an innovative object space shading system leveraging a virtualized per-halfedge texturing scheme to obviate excessive shading and preclude texture seam artifacts. Moreover, we implement ReSTIR GI on our system (see Figure 1), removing the necessity of temporally reprojecting shading samples and improving convergence in areas of disocclusion. Our system yields superior results in terms of both efficiency and visual fidelity.

Item: Entropy-driven Progressive Compression of 3D Point Clouds (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zampieri, Armand; Delarue, Guillaume; Bakr, Nachwa Abou; Alliez, Pierre; Hu, Ruizhen; Lefebvre, Sylvain
3D point clouds stand as one of the prevalent representations for 3D data, offering the advantage of closely aligning with sensing technologies and providing an unbiased representation of a measured physical scene. Progressive compression is required for real-world applications operating on networked infrastructures with restricted or variable bandwidth. We contribute a novel approach that leverages a recursive binary space partition, where the partitioning planes are not necessarily axis-aligned and are optimized via an entropy criterion. The planes are encoded via a novel adaptive quantization method combined with prediction. The input 3D point cloud is encoded as an interlaced stream of partitioning planes and numbers of points in the cells of the partition. Compared to previous work, the added value is improved rate-distortion performance, especially at very low bitrates, which are critical for interactive navigation of large 3D point clouds on heterogeneous networked infrastructures.
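As a loose illustration of the entropy-driven idea in the point-cloud compression abstract above, the toy sketch below scores candidate, not necessarily axis-aligned, partitioning planes by the Shannon entropy of the point-count split they induce. Every name, the candidate generation, and the scoring direction are hypothetical stand-ins; the paper's actual criterion additionally involves adaptive quantization and prediction, which are not modeled here.

```python
import numpy as np

def split_entropy(points, normal, offset):
    """Shannon entropy (bits) of the two-cell occupancy induced by a plane.

    Toy criterion only: a very uneven split has entropy near 0, a balanced
    split has entropy near 1 bit. The paper's real objective may differ.
    """
    side = points @ normal > offset
    p = np.count_nonzero(side) / len(points)
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def best_plane(points, candidates):
    """Pick the (normal, offset) candidate with the highest split entropy."""
    return max(candidates, key=lambda c: split_entropy(points, *c))

# Usage: random points and a handful of random, non-axis-aligned candidates.
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
cands = [(rng.normal(size=3), 0.0) for _ in range(8)]
cands = [(n / np.linalg.norm(n), d) for n, d in cands]
print(best_plane(pts, cands))
```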
Item: Computational Smocking through Fabric-Thread Interaction (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhou, Ningfeng; Ren, Jing; Sorkine-Hornung, Olga; Bermano, Amit H.; Kalogerakis, Evangelos
We formalize Italian smocking, an intricate embroidery technique that gathers flat fabric into pleats along meandering lines of stitches, resulting in pleats that fold and gather where the stitching veers. In contrast to English smocking, characterized by colorful stitches decorating uniformly shaped pleats, and Canadian smocking, which uses localized knots to form voluminous pleats, Italian smocking permits the fabric to move freely along the stitched threads following curved paths, resulting in complex and unpredictable pleats with highly diverse, irregular structures, achieved simply by pulling on the threads. We introduce a novel method for digitally previewing Italian smocking results, given the thread stitching path as input. Our method uses a coarse-grained mass-spring system to simulate the interaction between the threads and the fabric. This configuration guides the fine-level fabric deformation through an adaptation of the state-of-the-art simulator C-IPC [LKJ21]. Our method models the general problem of fabric-thread interaction and can be readily adapted to preview Canadian smocking as well. We compare our results to baseline approaches and physical fabrications to demonstrate the accuracy of our method.
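For context on the coarse-grained mass-spring system mentioned in the smocking abstract above, here is a minimal, generic mass-spring time step. It is a sketch under generic assumptions, not the paper's formulation (which further drives the C-IPC cloth simulator); all functions, parameters, and the toy chain setup are hypothetical.

```python
import numpy as np

def mass_spring_step(x, v, springs, rest, k=50.0, mass=1.0, damping=0.98, dt=1e-3):
    """Advance node positions x (n,3) and velocities v (n,3) by one explicit Euler step."""
    f = np.zeros_like(x)
    for (i, j), L0 in zip(springs, rest):
        d = x[j] - x[i]
        L = np.linalg.norm(d)
        if L < 1e-12:
            continue
        fs = k * (L - L0) * d / L      # Hooke spring force along the edge
        f[i] += fs
        f[j] -= fs
    f[:, 2] -= mass * 9.81             # gravity along the z axis
    v = damping * (v + dt * f / mass)  # damped velocity update
    return x + dt * v, v

# Usage: a short chain of 4 nodes connected by 3 unit-length springs,
# with the first node pinned in place.
x = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
v = np.zeros_like(x)
springs = [(0, 1), (1, 2), (2, 3)]
rest = [1.0, 1.0, 1.0]
for _ in range(100):
    x, v = mass_spring_step(x, v, springs, rest)
    x[0] = [0.0, 0, 0]
    v[0] = 0.0
print(x)
```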
Item: Cinematographic Camera Diffusion Model (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Jiang, Hongda; Wang, Xi; Christie, Marc; Liu, Libin; Chen, Baoquan; Bermano, Amit H.; Kalogerakis, Evangelos
Designing effective camera trajectories in virtual 3D environments is a challenging task, even for experienced animators. Despite an elaborate film grammar, forged through years of experience, that enables the specification of camera motions through cinematographic properties (framing, shot sizes, angles, motions), there are endless possibilities in deciding how to place and move cameras with characters. Dealing with these possibilities is part of the complexity of the problem. While numerous techniques have been proposed in the literature (optimization-based solving, encoding of empirical rules, learning from real examples, etc.), the results either lack variety or ease of control. In this paper, we propose a cinematographic camera diffusion model that uses a transformer-based architecture to handle temporality and exploits the stochasticity of diffusion models to generate diverse, high-quality trajectories conditioned on high-level textual descriptions. We extend the work by integrating keyframing constraints and the ability to blend naturally between motions using latent interpolation, augmenting the degree of control available to designers. We demonstrate the strengths of this text-to-camera-motion approach through qualitative and quantitative experiments and gather feedback from professional artists. The code and data are available at https://github.com/jianghd1996/Camera-control.

Item: Neural Denoising for Deep-Z Monte Carlo Renderings (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Zhang, Xianyao; Röthlin, Gerhard; Zhu, Shilin; Aydin, Tunç Ozan; Salehi, Farnood; Gross, Markus; Papas, Marios; Bermano, Amit H.; Kalogerakis, Evangelos
We present a kernel-predicting neural denoising method for path-traced deep-Z images that facilitates their usage in animation and visual effects production. Deep-Z images provide enhanced flexibility during compositing, as they contain color, opacity, and other rendered data at multiple depth-resolved bins within each pixel. However, they are subject to noise, and rendering until convergence is prohibitively expensive. The current state of the art in deep-Z denoising yields objectionable artifacts, and current neural denoising methods are incapable of handling the variable number of depth bins in deep-Z images. Our method extends kernel-predicting convolutional neural networks to address the challenges stemming from denoising deep-Z images. We propose a hybrid reconstruction architecture that combines the depth-resolved reconstruction at each bin with the flattened reconstruction at the pixel level. Moreover, we propose depth-aware neighbor indexing of the depth-resolved inputs to the convolution and denoising kernel application operators, which reduces artifacts caused by depth misalignment present in deep-Z images. We evaluate our method on a production-quality deep-Z dataset, demonstrating significant improvements in denoising quality and performance compared to the current state-of-the-art deep-Z denoiser. By addressing the significant cost of rendering path-traced deep-Z images, we believe that our approach will pave the way for broader adoption of deep-Z workflows in future productions.

Item: Computing Manifold Next-Event Estimation without Derivatives using the Nelder-Mead Method (The Eurographics Association, 2024)
Granizo-Hidalgo, Ana; Holzschuch, Nicolas; Haines, Eric; Garces, Elena
Specular surfaces, by focusing the light that is being reflected or refracted, cause bright spots in the scene, called caustics. These caustics are challenging to compute for global illumination algorithms. Manifold-based methods (Manifold Exploration, Manifold Next-Event Estimation, Specular Next Event Estimation) compute these caustics as the zeros of an objective function, using the Newton-Raphson method. They are efficient, but require the derivatives of the objective function, which in turn require local surface derivatives around the reflection point; these can be challenging to implement. In this paper, we leverage the Nelder-Mead method to compute caustics using Manifold Next-Event Estimation without having to compute local derivatives. Our method only requires local evaluations of the objective function, making it an easy addition to any path-tracing algorithm.
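The core idea in the Nelder-Mead abstract above, finding a zero of an objective function without derivatives, can be illustrated with a tiny derivative-free search. The sketch below minimizes the squared norm of a made-up two-dimensional constraint with SciPy's Nelder-Mead implementation; the objective is a hypothetical stand-in, not the actual half-vector constraint used in manifold next-event estimation.

```python
import numpy as np
from scipy.optimize import minimize

def objective(uv):
    """Hypothetical 2D constraint whose zeros play the role of the MNEE objective."""
    u, v = uv
    return np.array([np.sin(u) - 0.3 * v, v**2 - 0.5 * u])

# Nelder-Mead only evaluates the objective; no derivatives are ever formed.
res = minimize(lambda uv: float(np.dot(objective(uv), objective(uv))),
               x0=np.array([0.2, 0.2]),
               method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-12})
print(res.x, objective(res.x))  # should converge to a nearby zero of the objective
```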
Item: Path Sampling Methods for Differentiable Rendering (The Eurographics Association, 2024)
Su, Tanli; Gkioulekas, Ioannis; Haines, Eric; Garces, Elena
We introduce a suite of path sampling methods for differentiable rendering of scene parameters that do not induce visibility-driven discontinuities, such as BRDF parameters. We begin by deriving a path integral formulation for differentiable rendering of such parameters, which we then use to derive methods that importance sample paths according to this formulation. Our methods are analogous to path tracing and path tracing with next-event estimation for primal rendering, have linear complexity, and can be implemented efficiently using path replay backpropagation. Our methods readily benefit from differential BRDF sampling routines, and can be further enhanced using multiple importance sampling and a loss-aware, pixel-space adaptive sampling procedure tailored to our path integral formulation. We show experimentally that our methods reduce variance in rendered gradients, potentially by orders of magnitude, and thus help accelerate inverse rendering optimization of BRDF parameters.

Item: Does Higher Refractive Index Mean Higher Gloss? (The Eurographics Association, 2024)
Gigilashvili, Davit; Diaz Estrada, David Norman; Haines, Eric; Garces, Elena
According to the Fresnel equations, the amount of specular reflection at a dielectric surface depends on two factors: the incident angle and the difference between the refractive indices of the inner and outer media. Therefore, it is often assumed that the higher the refractive index of the material, the glossier it looks. However, gloss perception is a complex process that, in addition to specular reflectance, depends on many other factors, such as the object's translucency and shape. In this study, we conducted two psychophysical experiments to quantify the impact of refractive index on perceived gloss for objects with varying degrees of translucency and surface roughness. For some objects a monotonic positive relationship between refractive index and perceived gloss was observed, while for others the relationship was found to be non-monotonic. Afterward, we evaluated how the refractive index affects image cues to gloss and tried to explain the psychophysical results through image statistics.

Item: Real-time Neural Rendering of Dynamic Light Fields (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Coomans, Arno; Dominici, Edoardo Alberto; Döring, Christian; Mueller, Joerg H.; Hladky, Jozef; Steinberger, Markus; Bermano, Amit H.; Kalogerakis, Evangelos
Synthesising high-quality views of dynamic scenes via path tracing is prohibitively expensive. Although caching offline-quality global illumination in neural networks alleviates this issue, existing neural view synthesis methods are limited to mainly static scenes, have low inference performance, or do not integrate well with existing rendering paradigms. We propose a novel neural method that captures a dynamic light field, renders at real-time frame rates at 1920x1080 resolution, and integrates seamlessly with Monte Carlo ray tracing frameworks. We demonstrate how spatial, temporal, and a novel surface-space encoding are each effective at capturing different kinds of spatio-temporal signals. Together with a compact fully-fused neural network and architectural improvements, we achieve a twenty-fold increase in network inference speed compared to related methods at equal or better quality. Our approach is suitable for providing offline-quality real-time rendering in a variety of scenarios, such as free-viewpoint video, interactive multi-view rendering, or streaming rendering. Finally, our work can be integrated into other rendering paradigms, e.g., providing a dynamic background for interactive scenarios where the foreground is rendered with traditional methods.
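As a concrete reference point for the gloss-perception abstract above (Does Higher Refractive Index Mean Higher Gloss?), the short sketch below evaluates the standard normal-incidence Fresnel reflectance of a dielectric interface together with Schlick's approximation. It only illustrates the dependence on refractive-index contrast and incident angle that the abstract cites; the refractive indices used are generic examples, not the paper's stimuli.

```python
# Standard dielectric Fresnel reflectance at normal incidence plus Schlick's
# angular approximation; unrelated to any specific experimental setup.

def fresnel_schlick(cos_theta, n_outside=1.0, n_inside=1.5):
    """Approximate unpolarized Fresnel reflectance for a dielectric interface."""
    f0 = ((n_outside - n_inside) / (n_outside + n_inside)) ** 2
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

# Higher refractive index means more reflectance at normal incidence ...
print(fresnel_schlick(1.0, 1.0, 1.33))   # water-like index, roughly 0.02
print(fresnel_schlick(1.0, 1.0, 2.42))   # diamond-like index, roughly 0.17
# ... and every dielectric approaches full reflectance at grazing angles.
print(fresnel_schlick(0.05, 1.0, 1.33))
```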