Search Results

Showing 1–10 of 21 results
  • Item
    Real-time Seamless Object Space Shading
    (The Eurographics Association, 2024) Li, Tianyu; Guo, Xiaoxin; Hu, Ruizhen; Charalambous, Panayiotis
Object space shading remains a challenging problem in real-time rendering due to runtime overhead and object parameterization limitations. While the recently developed algorithm by Baker et al. [BJ22] enables high-performance real-time object space shading, it still suffers from seam artifacts. In this paper, we introduce an object space shading system that leverages a virtualized per-halfedge texturing scheme to avoid excessive shading and preclude texture seam artifacts. Moreover, we implement ReSTIR GI on our system (see Figure 1), removing the need to temporally reproject shading samples and improving convergence in disoccluded regions. Our system yields superior results in terms of both efficiency and visual fidelity.
  • Item
    DeepIron: Predicting Unwarped Garment Texture from a Single Image
    (The Eurographics Association, 2024) Kwon, Hyun-Song; Lee, Sung-Hee; Hu, Ruizhen; Charalambous, Panayiotis
Realistic reconstruction of 3D clothing from an image has wide applications, such as avatar creation and virtual try-on. This paper presents a novel framework that reconstructs the texture map for 3D garments from a single image of a posed garment. Since 3D garments are effectively modeled by stitching 2D garment sewing patterns, our specific goal is to generate a texture image for the sewing patterns. A key component of our framework, the Texture Unwarper, infers the original texture image from the input garment image, which exhibits warping and occlusion due to the user's body shape and pose. This is achieved by translating between the input and output images via a mapping between their latent spaces. By inferring the unwarped original texture of the input garment, our method helps reconstruct 3D garment models whose high-quality textures deform realistically for new poses. We validate the effectiveness of our approach through comparisons with other methods and ablation studies.
  • Item
    Fast Dynamic Facial Wrinkles
    (The Eurographics Association, 2024) Weiss, Sebastian; Chandran, Prashanth; Zoss, Gaspard; Bradley, Derek; Hu, Ruizhen; Charalambous, Panayiotis
We present a new method to animate the dynamic motion of skin micro-wrinkles under facial expression deformation. Since wrinkles form as a reservoir of skin for stretching, our model deforms only wrinkles that are perpendicular to the stress axis. Specifically, those wrinkles become wider and shallower when stretched, and deeper and narrower when compressed. In contrast to previous methods that modify the neutral wrinkle displacement map, our approach modifies the way wrinkles are constructed in the displacement map. To this end, we build upon a previous synthetic wrinkle generator that lets us control the width and depth of individual wrinkles generated on a per-frame basis. Furthermore, since constructing a displacement map for every frame of animation is costly, we present a fast approximation that uses pre-computed displacement maps of wrinkles binned by stretch direction, which can be blended interactively in a shader. We compare both our high-quality and fast methods with previous techniques for wrinkle animation and demonstrate that our work retains more realistic details.
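The stated stretch behavior (wrinkles perpendicular to the stress axis widen and shallow when stretched, and narrow and deepen when compressed) can be illustrated with a minimal sketch. This is a hypothetical model written for this listing, not the paper's actual fit; the function name, the linear interpolation, and the reciprocal depth rule are all assumptions.

```python
import math

def modulate_wrinkle(width, depth, wrinkle_dir, stretch_axis, stretch):
    """Scale a wrinkle's width and depth by the stretch felt perpendicular
    to it (illustrative model, not the paper's exact formulation).

    wrinkle_dir, stretch_axis: unit 2D vectors in the skin's tangent plane.
    stretch: >1 means stretched, <1 means compressed along stretch_axis.
    """
    # Only the stretch component perpendicular to the wrinkle affects it;
    # a wrinkle parallel to the stress axis is left untouched.
    dx = wrinkle_dir[0] * stretch_axis[0] + wrinkle_dir[1] * stretch_axis[1]
    perp = math.sqrt(max(0.0, 1.0 - dx * dx))  # |sin| of the angle between them
    s = 1.0 + (stretch - 1.0) * perp           # effective stretch on the wrinkle
    # Stretched: wider and shallower. Compressed: narrower and deeper.
    return width * s, depth / s
```

A wrinkle at right angles to a 1.5x stretch becomes 1.5x wider and correspondingly shallower, while a parallel wrinkle is unchanged.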
  • Item
    Utilizing Motion Matching with Deep Reinforcement Learning for Target Location Tasks
    (The Eurographics Association, 2024) Lee, Jeongmin; Kwon, Taesoo; Shin, Hyunju; Lee, Yoonsang; Hu, Ruizhen; Charalambous, Panayiotis
We present an approach using deep reinforcement learning (DRL) to directly generate motion matching queries for long-term tasks, particularly reaching specific target locations. By integrating motion matching and DRL, our method demonstrates rapid learning of policies for target location tasks within minutes on a standard desktop, employing a simple reward design. Additionally, we propose a hit reward and an obstacle curriculum scheme to enhance policy learning in environments with moving obstacles.
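The "simple reward design" plus a sparse hit reward can be sketched as a dense progress term with a bonus near the target. This is a generic reach-task reward written in the spirit of the abstract, not the paper's actual function; all names, weights, and radii here are assumptions.

```python
import math

def target_reward(prev_pos, pos, target, hit_radius=0.3,
                  hit_bonus=10.0, progress_scale=1.0):
    """Hypothetical per-step reward for a reach-the-target task:
    dense reward for closing distance to the target, plus a sparse
    'hit' bonus once the character is within hit_radius of it."""
    d_prev = math.dist(prev_pos, target)
    d_now = math.dist(pos, target)
    reward = progress_scale * (d_prev - d_now)  # positive when moving closer
    if d_now < hit_radius:
        reward += hit_bonus                     # sparse bonus on arrival
    return reward
```

A dense progress term keeps the gradient signal available from the first step, while the sparse bonus sharpens behavior near the goal; curricula (e.g. for moving obstacles) would typically be layered on top of such a base reward.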
  • Item
    FACTS: Facial Animation Creation using the Transfer of Styles
    (The Eurographics Association, 2024) Saunders, Jack R.; Namboodiri, Vinay P.; Hu, Ruizhen; Charalambous, Panayiotis
The ability to accurately capture and express emotions is a critical aspect of creating believable characters in video games and other forms of entertainment. Traditionally, such animation has been achieved through manual artistic effort or performance capture, both of which cost time and labor. More recently, audio-driven models have seen success; however, they often lack expressiveness in areas not correlated with the audio signal. In this paper, we present a novel approach to facial animation that takes existing animations and allows the modification of their style characteristics. Our method maintains the lip-sync of the animations thanks to a novel viseme-preserving loss. We perform quantitative and qualitative experiments to demonstrate the effectiveness of our work.
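One way a viseme-preserving term can keep lip-sync intact during style transfer is to penalize deviation of the stylized lip region from the source animation. The following is a speculative sketch of that idea only; the plain per-vertex MSE form, the function name, and the data layout are assumptions, not the paper's formulation.

```python
def viseme_preserving_loss(pred_lips, ref_lips, lam=1.0):
    """Hypothetical viseme-preservation term: mean squared error between
    the stylised animation's lip vertices and the source animation's,
    so that mouth shapes (visemes) survive the style transfer.

    pred_lips, ref_lips: per-frame lists of (x, y, z) lip-vertex positions.
    lam: weight of this term relative to the rest of the training loss.
    """
    total, n = 0.0, 0
    for p_frame, r_frame in zip(pred_lips, ref_lips):
        for (px, py, pz), (rx, ry, rz) in zip(p_frame, r_frame):
            total += (px - rx) ** 2 + (py - ry) ** 2 + (pz - rz) ** 2
            n += 1
    return lam * total / max(n, 1)  # mean squared lip-vertex error
```

Restricting the penalty to lip vertices is what lets the rest of the face change style freely while the mouth stays synchronized to the audio.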
  • Item
    StarDEM: Efficient Discrete Element Method for Star-shaped Particles
    (The Eurographics Association, 2024) Schreck, Camille; Lefebvre, Sylvain; Jourdan, David; Martínez, Jonàs; Hu, Ruizhen; Charalambous, Panayiotis
    Granular materials composed of particles with complex shapes are challenging to simulate due to the high number of collisions between the particles. In this context, star shapes are promising: they cover a wide range of geometries from convex to concave and have interesting geometric properties. We propose an efficient method to simulate a large number of identical star-shaped particles. Our method relies on an effective approximation of the contacts between particles that can handle complex shapes, including highly non-convex ones. We demonstrate our method by implementing it in a 2D simulation using the Discrete Element Method, both on the CPU and GPU.
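A star-shaped particle is one whose boundary is reached exactly once by every ray from its center, so it can be described by a radius function r(θ). The sketch below illustrates that property and a deliberately crude contact test along the line of centers; the shape, the test, and all parameters are illustrative assumptions, much simpler than the paper's contact approximation.

```python
import math

def star_radius(theta, base=1.0, amp=0.3, lobes=5):
    """Radius function r(theta) of a star-shaped particle: each ray from
    the centre meets the boundary once, so r(theta) fully describes the
    (possibly highly non-convex) shape. Illustrative parameterisation."""
    return base + amp * math.cos(lobes * theta)

def approx_contact(c1, c2, phi1=0.0, phi2=0.0):
    """Crude contact test along the line of centres: compare the centre
    distance with the sum of the two boundary radii sampled toward each
    other. phi1, phi2 are the particles' orientations. A simplification,
    not the paper's method."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    r1 = star_radius(theta - phi1)            # radius of particle 1 toward 2
    r2 = star_radius(theta + math.pi - phi2)  # radius of particle 2 toward 1
    overlap = r1 + r2 - d
    return overlap > 0.0, overlap
```

In a Discrete Element Method loop, a positive overlap would feed a penalty force between the two particles; the appeal of star shapes is that such per-direction radius queries stay cheap even for concave geometry.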
  • Item
    Emotional Responses to Exclusionary Behaviors in Intelligent Embodied Augmented Reality Agents
    (The Eurographics Association, 2024) Apostolou, Kalliopi; Milata, Vaclav; Škola, Filip; Liarokapis, Fotis; Hu, Ruizhen; Charalambous, Panayiotis
This study investigated how interactions with intelligent agents, embodied as augmented reality (AR) avatars displaying exclusionary behaviors, affect users' emotions. Six participants engaged via voice interaction in a knowledge acquisition scenario in an AR environment with two ChatGPT-driven agents. The gaze-aware avatars, simulating realistic body language, progressively demonstrated social exclusion behaviors. Although not statistically significant, our data suggest a post-interaction emotional shift, manifested as decreased positive and negative affect, aligning with previous studies on social exclusion. Qualitative feedback revealed that some users attributed the avatars' exclusionary behavior to system glitches, leading to their disengagement. Our findings highlight challenges and opportunities for embodied intelligent agents, underscoring their potential to shape user experiences within AR and the broader extended reality (XR) landscape.
  • Item
    Modern Dance Retargeting using Ribbons as Lines of Action
    (The Eurographics Association, 2024) Vialle, Manon; Ronfard, Rémi; Skouras, Melina; Hu, Ruizhen; Charalambous, Panayiotis
We present a method for retargeting dancing characters represented as articulated skeletons with possibly different morphologies and topologies. Our approach relies on flexible ribbons that can bend and twist as an intermediate representation, which can be seen as animated lines of action. These ribbons allow us to abstract away the specific morphology of the bodies and to faithfully convey the fluidity of modern dance movement from one character to another.
  • Item
    EUROGRAPHICS 2024: Short Papers Frontmatter
(Eurographics Association, 2024) Hu, Ruizhen; Charalambous, Panayiotis
  • Item
    Neural Moment Transparency
    (The Eurographics Association, 2024) Tsopouridis, Grigoris; Vasilakis, Andreas Alexandros; Fudos, Ioannis; Hu, Ruizhen; Charalambous, Panayiotis
We have developed a machine learning approach that efficiently computes per-fragment transmittance in a fragment shader, using transmittance composed and accumulated with moment statistics. Our approach achieves superior visual accuracy when computing order-independent transparency (OIT) in scenes with high depth complexity compared to prior art.
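The moment statistics referenced above can be illustrated with a minimal CPU-side sketch: each fragment contributes its optical depth, weighted by powers of its depth, and these sums are order-independent, which is what makes them usable for OIT. This is a generic moment-based-OIT accumulation written for illustration, not the paper's neural model or its exact statistics.

```python
import math

def accumulate_moments(fragments, num_moments=4):
    """Accumulate power moments of fragment depth, weighted by the
    fragment's optical depth -ln(1 - alpha), in the style of moment-based
    OIT (simplified sketch; a learned model would consume such statistics
    to estimate per-fragment transmittance).

    fragments: iterable of (depth, alpha) pairs, in any order.
    Returns b0 (total optical depth) and b[k] = sum_i a_i * z_i^(k+1).
    """
    b0 = 0.0
    b = [0.0] * num_moments
    for z, alpha in fragments:
        a = -math.log(max(1e-6, 1.0 - alpha))  # optical depth of this fragment
        b0 += a
        for k in range(num_moments):
            b[k] += a * z ** (k + 1)
    # Total transmittance through all fragments is exp(-b0), regardless of
    # the order in which fragments arrived -- the key OIT-friendly property.
    return b0, b
```

Because the sums commute, fragments can be accumulated in whatever order rasterization produces them, avoiding any per-pixel sort.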