PG2017 Short Papers
ISBN 978-3-03868-051-2
https://diglib.eg.org:443/handle/10.2312/2631806

Robust Gas Condensation Simulation with SPH based on Heat Transfer
https://diglib.eg.org:443/handle/10.2312/pg20171321
Zhang, Taiyou; Shi, Jiajun; Wang, Changbo; Qin, Hong; Li, Chen
Jernej Barbic and Wen-Chieh Lin and Olga Sorkine-Hornung
Most simulations of natural phenomena in graphics are physically based, often involving heat transfer, phase transition, environmental constraints, or a combination of these. At the numerical level, particle-based schemes such as smoothed particle hydrodynamics (SPH) have proved to preserve subtle details while accommodating large numbers of particles and enabling complex interaction during heat transfer. In this paper, we propose a novel hybrid complementary framework that faithfully models intricate details of vapor condensation while circumventing the disadvantages of existing methods. The phase transition is governed by robust heat transfer and the dynamic characteristics of condensation, so that condensed drops are precisely simulated with the SPH model. We introduce the dew point to ensure faithful visual simulation, since atmospheric pressure and relative humidity were previously isolated from condensation models. Moreover, we design an equivalent substitution for ambient effects that corrects the heat transfer across the boundary layer and reduces the number of air particles required. To generate plausible high-resolution visual effects, we extend the standard height map with additional physical control and construct arbitrary surface shapes via reproduction on the normal map. We demonstrate the advantages of our framework in several fluid scenes, including vapor condensation on a mirror, together with comparisons against alternative approaches.
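The dew-point criterion the abstract relies on can be illustrated with the standard Magnus approximation; the coefficients and the condensation test below are common textbook values, not the paper's exact model, so treat this as an illustrative sketch:

```python
import math

def dew_point_celsius(temp_c, rel_humidity):
    """Magnus approximation of the dew point in degrees Celsius.

    temp_c: air temperature in Celsius; rel_humidity in (0, 1].
    b and c are the widely used Magnus constants; the paper's exact
    formulation may differ.
    """
    b, c = 17.62, 243.12
    gamma = math.log(rel_humidity) + b * temp_c / (c + temp_c)
    return c * gamma / (b - gamma)

def condenses(surface_temp_c, air_temp_c, rel_humidity):
    """Vapor condenses on a surface colder than the dew point."""
    return surface_temp_c < dew_point_celsius(air_temp_c, rel_humidity)

# A cool mirror (10 C) in warm humid air (25 C, 80% RH) collects dew.
print(condenses(10.0, 25.0, 0.80))  # True
```

In an SPH setting, this test would be evaluated per boundary particle each step, spawning or growing a drop wherever it holds.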
Published: 2017-01-01

Computing Restricted Voronoi Diagram on Graphics Hardware
https://diglib.eg.org:443/handle/10.2312/pg20171320
Han, Jiawei; Yan, Dongming; Wang, Lili; Zhao, Qinping
Jernej Barbic and Wen-Chieh Lin and Olga Sorkine-Hornung
The 3D restricted Voronoi diagram (RVD), defined as the intersection of the 3D Voronoi diagram of a point set with a mesh surface, has many applications in geometry processing. Several CPU algorithms exist for computing RVDs, but none of them runs in real time. In this short paper, we propose an efficient algorithm for computing RVDs on graphics hardware. We demonstrate the robustness and efficiency of the proposed GPU algorithm by applying it to surface remeshing based on centroidal Voronoi tessellation.
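A brute-force CPU baseline conveys the idea: assign surface sample points to their nearest site (a point-sampled approximation of the restricted Voronoi cells, not the exact polygon clipping the paper computes on the GPU), then move each site to its cell centroid for one Lloyd step of centroidal Voronoi tessellation:

```python
def restricted_voronoi(samples, sites):
    """Assign each surface sample to its nearest Voronoi site.

    samples, sites: lists of (x, y, z) tuples. Returns a dict mapping
    site index -> list of samples (a point-sampled restricted cell).
    """
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    cells = {i: [] for i in range(len(sites))}
    for p in samples:
        nearest = min(range(len(sites)), key=lambda i: d2(p, sites[i]))
        cells[nearest].append(p)
    return cells

def lloyd_step(samples, sites):
    """One Lloyd iteration: move each site to its restricted cell's centroid."""
    cells = restricted_voronoi(samples, sites)
    new_sites = []
    for i, site in enumerate(sites):
        pts = cells[i]
        if pts:
            n = len(pts)
            new_sites.append(tuple(sum(c) / n for c in zip(*pts)))
        else:
            new_sites.append(site)  # empty cell: keep the old site
    return new_sites
```

Iterating `lloyd_step` until the sites stop moving yields an approximate centroidal Voronoi tessellation of the sampled surface, which is the remeshing application the abstract mentions.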
Published: 2017-01-01

Robust Edge-Preserved Surface Mesh Polycube Deformation
https://diglib.eg.org:443/handle/10.2312/pg20171319
Zhao, Hui; Lei, Na; Li, Xuan; Zeng, Peng; Xu, Ke; Gu, Xianfeng
Jernej Barbic and Wen-Chieh Lin and Olga Sorkine-Hornung
Polycube construction and deformation is an essential problem in computer graphics. In this paper, we present a robust, simple, efficient, and automatic algorithm that deforms meshes of arbitrary shape into their polycube counterparts. We derive a clear relationship between a mesh and its corresponding polycube shape. Our algorithm preserves edges, works on surface meshes with or without boundaries, outperforms previous methods in speed, robustness, and efficiency, and is simple to implement. To demonstrate its robustness and effectiveness, we apply it to hundreds of models of varying complexity and topology and show that it compares favorably to other state-of-the-art polycube deformation methods.
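A building block of most polycube pipelines, though not necessarily this paper's exact formulation, is labeling each face with the signed axis closest to its normal, so the face can later be snapped flat against that axis; a minimal sketch:

```python
# The six signed axis directions a polycube face can be aligned with.
AXES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def polycube_label(normal):
    """Return the index of the signed axis closest to a unit face normal.

    Maximizing the dot product over the six axes picks the label used
    to orient the face in the deformed polycube shape.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    return max(range(6), key=lambda i: dot(normal, AXES[i]))
```

Running this over all faces partitions the surface into axis-aligned charts; the deformation then flattens each chart while keeping the chart boundaries (the preserved edges) straight.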
Published: 2017-01-01

Transferring Pose and Augmenting Background Variation for Deep Human Image Parsing
https://diglib.eg.org:443/handle/10.2312/pg20171317
Kikuchi, Takazumi; Endo, Yuki; Kanamori, Yoshihiro; Hashimoto, Taisuke; Mitani, Jun
Jernej Barbic and Wen-Chieh Lin and Olga Sorkine-Hornung
Human parsing is the fundamental task of estimating semantic parts, such as the face, arms, legs, hat, or dress, in a human image. Recent deep-learning-based methods have achieved significant improvements, but collecting training datasets with pixel-wise annotations is labor-intensive. In this paper, we propose two solutions for coping with limited datasets. First, to handle various poses, we incorporate a pose estimation network into an end-to-end human parsing network so that common features are transferred across the two domains. The pose estimation network can be trained on rich datasets and feeds valuable features to the human parsing network. Second, to handle complicated backgrounds, we automatically increase the variation of background images by replacing the original backgrounds of human images with images drawn from large-scale scenery datasets. While each solution is versatile and beneficial to human parsing on its own, their combination yields a further improvement.
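The background-augmentation step amounts to alpha compositing the person over a new scenery image using the existing person mask; a minimal sketch (the flat pixel-list representation is a simplification for illustration):

```python
def replace_background(person_px, mask, background_px):
    """Composite a person over a new background with a soft mask.

    person_px, background_px: lists of (r, g, b) pixel tuples of equal
    length; mask: per-pixel alpha in [0, 1], 1 where the person is.
    Per channel: out = alpha * person + (1 - alpha) * background.
    """
    out = []
    for p, a, b in zip(person_px, mask, background_px):
        out.append(tuple(a * pc + (1 - a) * bc for pc, bc in zip(p, b)))
    return out
```

Repeating this with many scenery images multiplies the effective training set without any new pixel-wise annotation, since the parsing labels of the person are unchanged by the swap.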
Published: 2017-01-01