PG: Pacific Graphics Conference Papers (Short Papers, Posters, Demos etc.)
Browsing by Subject "Applied computing → Fine arts"
Now showing 1 - 2 of 2
Item: Neural Shadow Art (The Eurographics Association, 2025)
Wang, Caoliwen; Deng, Bailin; Zhang, Juyong; Christie, Marc; Han, Ping-Hsuan; Lin, Shih-Syun; Pietroni, Nico; Schneider, Teseo; Tsai, Hsin-Ruey; Wang, Yu-Shuen; Zhang, Eugene

Shadow art is a captivating form of sculptural expression in which the projection of a sculpture along a specific direction reveals a desired shape with high precision. In this work, we introduce Neural Shadow Art, which leverages an implicit occupancy function representation to significantly expand the possibilities of shadow art. This representation enables the design of high-quality, 3D-printable geometric models with arbitrary topology at any resolution, surpassing previous voxel- and mesh-based methods. Our method provides a more flexible framework, enabling projections to match input binary images under various light directions and screen orientations, without requiring light sources to be perpendicular to the screens. Furthermore, we allow rigid transformations of the projected geometries relative to the input binary images and simultaneously optimize light directions and screen orientations so that the projections closely resemble the target images, especially for inputs with complex topology. In addition, our model promotes surface smoothness and reduces material usage, which is particularly advantageous for efficient industrial production and for enhanced artistic effect, generating compelling shadow art that avoids trivial, intersecting cylindrical structures.
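The projection-matching idea in the abstract can be illustrated with a minimal sketch. The paper's learned neural occupancy field, free light directions, and screen orientations are replaced here by a toy analytic occupancy function (a sphere) and an axis-aligned projection; all names and the mean-squared objective are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def occupancy(points):
    """1 inside the shape, 0 outside. Toy stand-in for a learned implicit field."""
    return (np.linalg.norm(points, axis=-1) < 0.5).astype(np.float32)

def project_silhouette(occ_fn, axis=2, res=64):
    """Binary shadow of the shape cast along one coordinate axis.

    Samples the occupancy on a res^3 grid in [-1, 1]^3 and takes the max
    along `axis`: a screen pixel is dark iff any sample along its ray is
    occupied. An implicit field permits any resolution `res`.
    """
    t = np.linspace(-1.0, 1.0, res)
    grid = np.stack(np.meshgrid(t, t, t, indexing="ij"), axis=-1)
    occ = occ_fn(grid.reshape(-1, 3)).reshape(res, res, res)
    return occ.max(axis=axis)

shadow = project_silhouette(occupancy)   # (64, 64) binary silhouette image
target = np.ones_like(shadow)            # placeholder for a user-supplied binary image
loss = np.mean((shadow - target) ** 2)   # a projection-matching objective
```

In the actual method the projection would be differentiated through the network and the light/screen parameters, rather than computed on a fixed grid as here.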
In summary, we propose a more flexible representation for shadow art that significantly improves projection accuracy while meeting industrial requirements and delivering awe-inspiring artistic effects.

Item: Trajectory-guided Anime Video Synthesis via Effective Motion Learning (The Eurographics Association, 2025)
Lin, Jian; Li, Chengze; Qin, Haoyun; Liu, Hanyuan; Liu, Xueting; Ma, Xin; Chen, Cunjian; Wong, Tien-Tsin; Christie, Marc; Han, Ping-Hsuan; Lin, Shih-Syun; Pietroni, Nico; Schneider, Teseo; Tsai, Hsin-Ruey; Wang, Yu-Shuen; Zhang, Eugene

Cartoon and anime motion production is traditionally labor-intensive, requiring detailed animatics and extensive inbetweening from keyframes. To streamline this process, we propose a novel framework that synthesizes motion directly from a single colored keyframe, guided by user-provided trajectories. Prior methods struggle with anime because they rely on optical flow estimators and models trained on natural videos; to address this, we introduce an efficient motion representation specifically adapted for anime, leveraging CoTracker to capture sparse frame-to-frame tracking effectively. We design a two-stage learning mechanism: the first stage predicts sparse motion from the input frame and trajectories, generating a motion-preview sequence via explicit warping; the second stage refines these previews into high-quality anime frames by fine-tuning ToonCrafter, an anime-specific video diffusion model. We train our framework on a novel animation video dataset comprising more than 500,000 clips. Experimental results demonstrate significant improvements in animating still frames, achieving better alignment with user-provided trajectories and more natural motion patterns while preserving anime stylization and visual quality. Our method also supports versatile applications, including motion manga generation and 2D vector graphic animation.
The data and code will be released upon acceptance. For models, datasets, additional visual comparisons, and ablation studies, visit our project page: https://animemotiontraj.github.io/.
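The first stage of the pipeline above (sparse trajectories → explicit warping → motion preview) can be sketched as follows. This is a hedged toy version: the inverse-distance interpolation, function names, and nearest-neighbour backward warp are illustrative assumptions standing in for the paper's learned sparse-motion predictor and CoTracker tracks.

```python
import numpy as np

def dense_flow_from_tracks(tracks0, tracks1, shape):
    """Interpolate sparse track displacements into a per-pixel flow field.

    tracks0, tracks1: (n, 2) arrays of (x, y) point positions in two frames.
    Uses simple inverse-distance weighting as a stand-in for a learned model.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys], axis=-1).astype(np.float32)       # (h, w, 2)
    disp = (tracks1 - tracks0).astype(np.float32)              # (n, 2)
    d = np.linalg.norm(pix[:, :, None] - tracks0[None, None], axis=-1)  # (h, w, n)
    wgt = 1.0 / (d + 1e-3)
    wgt /= wgt.sum(axis=-1, keepdims=True)
    return (wgt[..., None] * disp[None, None]).sum(axis=2)     # (h, w, 2)

def warp_preview(frame, flow):
    """Backward-warp the keyframe by the flow (nearest-neighbour sampling)."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    sy = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    return frame[sy, sx]

# Toy usage: one track moving 2 pixels to the right drags the whole frame.
tracks0 = np.array([[2.0, 2.0]])
tracks1 = np.array([[4.0, 2.0]])
frame = np.zeros((8, 8), dtype=np.float32)
frame[2, 2] = 1.0
flow = dense_flow_from_tracks(tracks0, tracks1, frame.shape)
preview = warp_preview(frame, flow)
```

In the full method such previews are only an intermediate: the diffusion stage then repaints them into clean anime frames, which is what makes the crude warp acceptable.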