Deep Deformation Detail Synthesis for Thin Shell Models

dc.contributor.author: Chen, Lan
dc.contributor.author: Gao, Lin
dc.contributor.author: Yang, Jie
dc.contributor.author: Xu, Shibiao
dc.contributor.author: Ye, Juntao
dc.contributor.author: Zhang, Xiaopeng
dc.contributor.author: Lai, Yu-Kun
dc.contributor.editor: Memari, Pooran
dc.contributor.editor: Solomon, Justin
dc.date.accessioned: 2023-06-30T06:18:42Z
dc.date.available: 2023-06-30T06:18:42Z
dc.date.issued: 2023
dc.description.abstract: In physics-based cloth animation, rich folds and detailed wrinkles are achieved at the cost of expensive computational resources and extensive manual tuning. Data-driven techniques significantly reduce this computation by exploiting a preprocessed database. One class of methods relies on human poses to synthesize fitted garments, but these methods cannot be applied to general cloth animations. Another class adds details to coarse meshes obtained through simulation and has no such restriction. However, existing works in this class usually rely on coordinate-based representations, which cannot cope with large-scale deformations and require dense vertex correspondences between coarse and fine meshes. Moreover, as such methods only add details, they require the coarse meshes to be sufficiently close to the fine meshes, which can be either impossible or achievable only by imposing unrealistic constraints when generating the fine meshes. To address these challenges, we develop a temporally and spatially as-consistent-as-possible deformation representation (named TS-ACAP) and design a DeformTransformer network to learn the mapping from low-resolution meshes to meshes with fine details. The TS-ACAP representation is designed to ensure both spatial and temporal consistency for sequential large-scale deformations in cloth animations. Given this representation, our DeformTransformer network first uses two mesh-based encoders with shared convolutional kernels to extract coarse and fine features, respectively. To transduce the coarse features to fine ones, we leverage a spatial and temporal Transformer network consisting of vertex-level and frame-level attention mechanisms, which ensure detail enhancement and temporal coherence of the prediction. Experimental results show that our method produces reliable and realistic animations on various datasets at high frame rates, with superior detail synthesis compared to existing methods.
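The abstract describes a two-stage attention scheme: vertex-level (spatial) attention to transduce coarse features to fine ones, then frame-level (temporal) attention for coherence across a sequence. The paper's actual architecture uses mesh-based convolutional encoders over TS-ACAP features; the sketch below is only a minimal PyTorch illustration of that spatial-then-temporal attention pattern, not the authors' implementation. All names, tensor shapes, and dimensions (DeformTransformerSketch, feat_dim, the use of nn.MultiheadAttention in place of mesh convolutions) are assumptions for illustration.

```python
import torch
import torch.nn as nn

class DeformTransformerSketch(nn.Module):
    """Hypothetical sketch of the spatial/temporal attention pattern the
    abstract describes. Feature extraction and dimensions are assumptions."""

    def __init__(self, feat_dim=64, num_heads=4):
        super().__init__()
        # Stand-in for the shared mesh-based encoder (the paper uses
        # convolutional kernels on mesh features; a linear layer here).
        self.encode = nn.Linear(feat_dim, feat_dim)
        # Vertex-level attention: fine-mesh queries attend to coarse features.
        self.spatial_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        # Frame-level attention: each vertex attends across time steps.
        self.temporal_attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.decode = nn.Linear(feat_dim, feat_dim)

    def forward(self, coarse, fine_query):
        # coarse:     (frames, coarse_verts, feat_dim)  TS-ACAP-like features
        # fine_query: (frames, fine_verts,   feat_dim)  fine-mesh queries
        c = self.encode(coarse)
        q = self.encode(fine_query)          # shared encoder weights
        # Spatial step: per frame, fine queries attend to coarse features.
        s, _ = self.spatial_attn(q, c, c)
        # Temporal step: per vertex, attend across frames for coherence.
        t = s.transpose(0, 1)                # (fine_verts, frames, feat_dim)
        t, _ = self.temporal_attn(t, t, t)
        return self.decode(t.transpose(0, 1))  # (frames, fine_verts, feat_dim)

# Toy usage: 10 frames, 200 coarse vertices, 800 fine vertices.
model = DeformTransformerSketch()
coarse = torch.randn(10, 200, 64)
fine_q = torch.randn(10, 800, 64)
print(model(coarse, fine_q).shape)  # torch.Size([10, 800, 64])
```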
dc.description.number: 5
dc.description.sectionheaders: Details on Surfaces
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 42
dc.identifier.doi: 10.1111/cgf.14903
dc.identifier.issn: 1467-8659
dc.identifier.pages: 13 pages
dc.identifier.uri: https://doi.org/10.1111/cgf.14903
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf14903
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: CCS Concepts: Computing methodologies -> Physical simulation; Artificial intelligence
dc.title: Deep Deformation Detail Synthesis for Thin Shell Models
Files
Original bundle (2 files):
- v42i5_07_14903.pdf (16.78 MB, Adobe Portable Document Format)
- supplementary.pdf (5.97 MB, Adobe Portable Document Format)