Search Results
Now showing 1 - 6 of 6
Item A Generative Adversarial Network for Upsampling of Direct Volume Rendering Images (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Jin, Ge; Jung, Younhyun; Fulham, Michael; Feng, Dagan; Kim, Jinman
Direct volume rendering (DVR) is an important tool for scientific and medical imaging visualization. Modern GPU acceleration has made DVR more accessible; however, producing high‐quality rendered images at high frame rates is computationally expensive. We propose a deep learning method with a reduced computational demand. We leveraged a conditional generative adversarial network (cGAN) to upsample DVR images (a rendered scene) produced at a reduced sampling rate, obtaining visual quality similar to that of a fully sampled method. Our dvrGAN is combined with a colour‐based loss function optimized for DVR images, in which different structures such as skin and bone are distinguished by assigning them distinct colours. The loss function highlights the structural differences between images by examining pixel‐level colour, and thus helps identify, for instance, small bones in the limbs that may not be evident at reduced sampling rates. We evaluated our method on DVR of human computed tomography (CT) and CT angiography (CTA) volumes. Our method retained image quality and reduced computation time compared to fully sampled methods, and it outperformed existing state‐of‐the‐art upsampling methods.

Item MoNeRF: Deformable Neural Rendering for Talking Heads via Latent Motion Navigation (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Li, X.; Ding, Y.; Li, R.; Tang, Z.; Li, K.
Novel view synthesis for talking heads presents significant challenges due to the complex and diverse motion transformations involved. Conventional methods often rely on structure priors, such as facial templates, to warp observed images into a canonical space conducive to rendering.
However, incorporating such priors introduces a trade‐off: while they aid synthesis, they also amplify model complexity, limiting generalizability to other deformable scenes. Departing from this paradigm, we introduce MoNeRF, a motion‐conditioned neural radiance field designed to model talking heads through latent motion navigation. At the core of MoNeRF lies a compact set of latent codes that represent orthogonal motion directions. This strategy enables MoNeRF to capture and depict intricate scene motion efficiently by linearly combining these latent codes. MoNeRF also supports motion control through latent code adjustments and view transfer based on reference videos, and it extends to modelling human bodies without structural modifications. Quantitative and qualitative experiments demonstrate MoNeRF's superior performance compared to state‐of‐the‐art methods in talking head synthesis. We will release the source code upon publication.

Item Efficient Environment Map Rendering Based on Decomposition (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Wu, Yu‐Ting
This paper presents an efficient environment map sampling algorithm designed to render high‐quality, low‐noise images with only a few light samples, making it ideal for real‐time applications. We observe that bright pixels in the environment map produce high‐frequency shading effects, such as sharp shadows and highlights, while the remaining pixels influence the overall tone of the scene. Building on this insight, our approach differs from existing techniques by categorizing the pixels of an environment map into emissive and non‐emissive regions and developing specialized algorithms tailored to the distinct properties of each region.
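The emissive/non‐emissive decomposition described in this abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the quantile threshold, the function names, and the toy data below are all assumptions for illustration. The idea is to split an HDR environment map by a luminance cutoff into a bright component, importance‐sampled per pixel, and a low‐frequency residual.

```python
import numpy as np

def decompose_env_map(env, threshold=0.9):
    """Split an HDR environment map (H, W, 3) into emissive and
    non-emissive components by a luminance quantile threshold."""
    lum = env @ np.array([0.2126, 0.7152, 0.0722])  # per-pixel luminance
    cutoff = np.quantile(lum, threshold)
    emissive_mask = lum > cutoff
    emissive = np.where(emissive_mask[..., None], env, 0.0)
    residual = np.where(emissive_mask[..., None], 0.0, env)
    return emissive, residual, emissive_mask

def sample_emissive(emissive, n_samples, rng):
    """Importance-sample pixel indices of the emissive component,
    proportionally to their luminance."""
    lum = emissive @ np.array([0.2126, 0.7152, 0.0722])
    pdf = lum.ravel() / lum.sum()
    idx = rng.choice(lum.size, size=n_samples, p=pdf)
    return np.unravel_index(idx, lum.shape)

rng = np.random.default_rng(0)
env = rng.random((64, 128, 3)) ** 4  # toy map with sparse bright pixels
emissive, residual, mask = decompose_env_map(env)
ys, xs = sample_emissive(emissive, 16, rng)
```

Sampling proportionally to luminance concentrates the few available light samples on the pixels that cause sharp shadows, while the residual component can be handled by a cheaper low‐frequency approximation.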
By decomposing the environment lighting, we ensure that light sources are deposited on bright pixels, leading to more accurate shadows and specular highlights. Additionally, this strategy allows us to exploit the smoothness of the low‐frequency component by rendering a smaller image with more lights, thereby enhancing shading accuracy. Extensive experiments demonstrate that our method significantly reduces shadow artefacts and image noise compared to previous techniques, while also achieving lower numerical errors across a range of illumination types, particularly under limited sample conditions.

Item Automatic Inbetweening for Stroke‐Based Painterly Animation (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Barroso, Nicolas; Fondevilla, Amélie; Vanderhaeghe, David
Painterly 2D animation, like the paint‐on‐glass technique, is a tedious task performed by skilled artists, primarily using traditional manual methods. Although CG tools can simplify the creation process, previous works often focus on temporal coherence, which typically results in the loss of the handmade look and feel. In contrast to cartoon animation, where regions are typically filled with smooth gradients, stroke‐based stylized 2D animation requires careful consideration of how shapes are filled, as each stroke may be perceived individually. We propose a method to generate intermediate frames from example keyframes and a motion description. It allows artists to create only one image for every five to ten output images in the animation, while the automatically generated frames provide plausible inbetweens.

Item Dynamic Voxel‐Based Global Illumination (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Cosin Ayerbe, Alejandro; Poulin, Pierre; Patow, Gustavo
Global illumination computation in real time has been an objective for Computer Graphics since its inception.
Unfortunately, its implementation has, up to now, challenged the most advanced hardware and software solutions. We propose a real‐time voxel‐based global illumination solution for a single light bounce that handles static and dynamic objects with diffuse materials under a dynamic light source. The combination of ray tracing and voxelization on the GPU offers scalability and performance. Our divide‐and‐conquer approach, which ray traces static and dynamic objects separately, reduces the re‐computation load when any number of dynamic objects is updated. Our results demonstrate the effectiveness of our approach, allowing the real‐time display of global illumination effects, including colour bleeding and indirect shadows, for complex scenes containing millions of polygons.

Item Generalized Lipschitz Tracing of Implicit Surfaces (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024) Bán, Róbert; Valasek, Gábor
We present a versatile and robust framework to render implicit surfaces defined by black‐box functions that provide only function value queries. We assume that the input function is locally Lipschitz continuous; however, we presume no prior knowledge of its Lipschitz constants. Our pre‐processing step generates a discrete acceleration structure, a Lipschitz field, that provides data to infer local and directional Lipschitz upper bounds. These bounds are used to compute safe step sizes along rays during rendering. The Lipschitz field is constructed by generating local polynomial approximations to the input function and then bounding the derivatives of the approximating polynomials. The accuracy of the approximation is controlled by the polynomial degree and the granularity of the spatial resolution used during fitting, which is independent of the resolution of the Lipschitz field.
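The construction sketched in this abstract, bounding the derivatives of local polynomial approximations, can be illustrated in one dimension. This is a toy stand‐in, not the authors' method: the function `local_lipschitz_bound`, the slack factor, and the sampling densities are all assumptions for illustration.

```python
import numpy as np

def local_lipschitz_bound(f, a, b, degree=4, fit_pts=32, slack=1.1):
    """Estimate a local Lipschitz upper bound for f on [a, b] by fitting
    a least-squares polynomial to value samples of the black-box function
    and bounding the fitted polynomial's derivative on a dense grid.
    The multiplicative slack hedges against approximation error."""
    xs = np.linspace(a, b, fit_pts)
    coeffs = np.polyfit(xs, f(xs), degree)  # least-squares polynomial fit
    dcoeffs = np.polyder(coeffs)            # derivative polynomial
    grid = np.linspace(a, b, 256)
    return slack * np.max(np.abs(np.polyval(dcoeffs, grid)))

# Toy black-box function: f(x) = x^3 - x, so |f'| <= 11 on [0, 2].
f = lambda x: x**3 - x
L = local_lipschitz_bound(f, 0.0, 2.0)
```

Taking the maximum of the derivative on a dense grid is a heuristic; a conservative implementation would bound the derivative analytically over the cell (for example via Bernstein coefficients), which is closer in spirit to the guarantees the abstract describes.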
We demonstrate that our process can be implemented in a massively parallel way, enabling straightforward integration into interactive and real‐time modelling workflows. Since the construction requires only function value evaluations, the input surface may be represented either procedurally or as an arbitrarily filtered grid of function samples. We query the original implicit representation during ray tracing; as such, we preserve the geometric and topological details of the input as long as the Lipschitz field supplies conservative estimates. We demonstrate our method on both procedural and discrete implicit surfaces and compare its exact and approximate variants.
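Once a conservative Lipschitz upper bound L is available, the "safe step sizes along rays" idea in the abstract above reduces to classic Lipschitz ray marching (sphere tracing). The following is a sketch under that assumption, using the exact unit-sphere distance function (Lipschitz constant 1) as a stand‐in for a black‐box implicit function; the function names and parameters are illustrative.

```python
import numpy as np

def lipschitz_trace(f, origin, direction, L, t_max=10.0, eps=1e-4, max_steps=256):
    """March along a ray until the implicit surface f(p) = 0 is reached,
    using the safe step size |f(p)| / L. Since L bounds the rate of change
    of f, f cannot cross zero within a single step.
    Returns the hit distance t, or None if no hit within t_max."""
    direction = direction / np.linalg.norm(direction)
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        fp = f(p)
        if abs(fp) < eps:
            return t
        t += abs(fp) / L  # safe step: no surface crossing before t + |fp|/L
        if t > t_max:
            return None
    return None

# Unit sphere as a black-box implicit function; its exact signed distance
# |p| - 1 has Lipschitz constant 1.
sphere = lambda p: np.linalg.norm(p) - 1.0
t_hit = lipschitz_trace(sphere, np.array([0.0, 0.0, -3.0]),
                        np.array([0.0, 0.0, 1.0]), L=1.0)
```

If L overestimates the true bound, the steps shrink and marching slows down but stays correct; an underestimate can step over the surface, which is why the paper's conservative per-cell estimates matter.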