Search Results

Now showing 1 - 5 of 5
  • Item
    Efficient and Stable Simulation of Inextensible Cosserat Rods by a Compact Representation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Zhao, Chongyao; Lin, Jinkeng; Wang, Tianyu; Bao, Hujun; Huang, Jin; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Piecewise linear inextensible Cosserat rods are usually represented by the Cartesian coordinates of vertices and quaternions on the segments. Such representations use excessive degrees of freedom (DOFs) and need many additional constraints, which cause unnecessary numerical difficulties and computational burden in simulation. We propose a simple yet compact representation that exactly matches the intrinsic DOFs and naturally satisfies all such constraints. Specifically, viewing a rod as a chain of rigid segments, we encode its shape as the Cartesian coordinates of its root vertex, and use the axis-angle representation for the material frame on each segment. Under our representation, the Hessian of the implicit time-stepping has special non-zero patterns. Exploiting these specialties, we can solve the associated linear equations in nearly linear time. Furthermore, we carefully design a preconditioner, which is proved to be always symmetric positive-definite and accelerates the PCG solver by one to two orders of magnitude compared with the widely used block-diagonal one. Compared with other technical choices, including Super-Helices, a specially designed compact representation for inextensible Cosserat rods, our method achieves better performance and stability, and can simulate an inextensible Cosserat rod with hundreds of vertices and tens of collisions in real time under relatively large time steps.
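The compact representation described in this abstract can be sketched in a few lines: a rod is just a root position plus one axis-angle rotation per rigid segment, and the vertex positions follow by chaining segment directions. The sketch below (plain Python with Rodrigues' rotation formula; the fixed reference direction and the function names are illustrative assumptions, not the paper's code) shows how inextensibility holds by construction.

```python
import math

def rotate(axis_angle, v):
    """Rodrigues' formula: rotate vector v by the rotation encoded
    as an axis-angle vector (unit axis scaled by angle in radians)."""
    theta = math.sqrt(sum(c * c for c in axis_angle))
    if theta < 1e-12:
        return list(v)                            # identity rotation
    k = [c / theta for c in axis_angle]           # unit rotation axis
    kxv = [k[1] * v[2] - k[2] * v[1],
           k[2] * v[0] - k[0] * v[2],
           k[0] * v[1] - k[1] * v[0]]
    kdotv = sum(a * b for a, b in zip(k, v))
    c, s = math.cos(theta), math.sin(theta)
    return [v[i] * c + kxv[i] * s + k[i] * kdotv * (1 - c) for i in range(3)]

def rod_vertices(root, axis_angles, seg_len):
    """Chain of rigid segments: vertex i+1 = vertex i + R_i * e3 * L.
    The DOFs are exactly root (3) + one axis-angle per segment (3 each);
    segment lengths are fixed, so inextensibility needs no constraint."""
    verts = [list(root)]
    for aa in axis_angles:
        d = rotate(aa, [0.0, 0.0, 1.0])           # assumed reference direction
        verts.append([verts[-1][i] + seg_len * d[i] for i in range(3)])
    return verts
```

Every configuration reachable through `rod_vertices` has unit-speed segments, which is the point of matching the intrinsic DOFs: no stretching constraint ever needs to be enforced or stabilized.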
  • Item
    MINERVAS: Massive INterior EnviRonments VirtuAl Synthesis
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Ren, Haocheng; Zhang, Hao; Zheng, Jia; Zheng, Jiaxiang; Tang, Rui; Huo, Yuchi; Bao, Hujun; Wang, Rui; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    With the rapid development of data-driven techniques, data has played an essential role in various computer vision tasks. Many realistic and synthetic datasets have been proposed to address different problems. However, several challenges remain unresolved: (1) creating a dataset is usually a tedious process involving manual annotation, (2) most datasets are designed for only a single specific task, (3) modifying or randomizing the 3D scene is difficult, and (4) releasing commercial 3D data may raise copyright issues. This paper presents MINERVAS, a Massive INterior EnviRonments VirtuAl Synthesis system, to facilitate 3D scene modification and 2D image synthesis for various vision tasks. In particular, we design a programmable pipeline with a Domain-Specific Language, allowing users to select scenes from a commercial indoor scene database, synthesize scenes for different tasks with customized rules, and render various types of imagery data, such as color images, geometric structures, and semantic labels. Our system eases the difficulty of customizing massive scenes for different tasks and relieves users from manipulating fine-grained scene configurations by providing user-controllable randomness through multilevel samplers. Most importantly, it gives users access to commercial scene databases with millions of indoor scenes while protecting the copyright of core data assets, e.g., 3D CAD models. We demonstrate the validity and flexibility of our system by using our synthesized data to improve performance on several kinds of computer vision tasks. The project page is at https://coohom.github.io/MINERVAS.
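The idea of user-controllable randomness via multilevel samplers can be illustrated generically: seed one sampler per level and derive child seeds from the parent, so re-seeding one level reshuffles only the choices at and below it. The class and method names below are hypothetical illustrations, not the MINERVAS DSL.

```python
import random

class LevelSampler:
    """Hypothetical sketch of multilevel, user-controllable randomness:
    each level owns a seeded RNG, and child levels derive their seeds
    from the parent, so runs are reproducible level by level."""
    def __init__(self, seed):
        self.rng = random.Random(seed)

    def choose(self, options):
        return self.rng.choice(options)

    def child(self, key):
        # Derive a deterministic child seed from this level's state.
        return LevelSampler(f"{self.rng.random()}-{key}")

# Scene-level choices, then entity-level choices under a derived seed.
scene = LevelSampler(seed=42)
layout = scene.choose(["bedroom", "kitchen", "office"])
materials = scene.child("materials")
wall = materials.choose(["plaster", "brick", "wood"])
```

With this structure, fixing the scene seed while varying only the `"materials"` child reproduces the same layout under different surface randomizations, which is the kind of fine-grained control the abstract describes.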
  • Item
    Economic Upper Bound Estimation in Hausdorff Distance Computation for Triangle Meshes
    (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd, 2022) Zheng, Yicun; Sun, Haoran; Liu, Xinguo; Bao, Hujun; Huang, Jin; Hauser, Helwig and Alliez, Pierre
    The Hausdorff distance is one of the most fundamental metrics for comparing 3D shapes. To compute the Hausdorff distance efficiently from one triangular mesh to another, one needs to quickly cull the unnecessary triangles on the source mesh, i.e., those whose local upper bound is smaller than the global lower bound and which therefore have no chance of improving the Hausdorff distance estimate. For efficiency, the local upper bound estimation should be tight, use fast distance computation, and involve only a small number of triangles of the target mesh during the reduction phase. In this paper, we propose to use the point-triangle distance and to involve at most four triangles of the target mesh in the reduction phase. Compared with the state-of-the-art method proposed by Tang et al. in 2009, which uses the more costly triangle-triangle distance and may involve a large number of triangles in the reduction phase, our local upper bound estimation is faster, with only a small impact on the tightness of the error bound. This more economical strategy boosts the overall performance significantly. Experiments on the Thingi10K dataset show that our method achieves severalfold (even over 20x) speedup on average. On a few models with different placements and resolutions, we show that close placement and large differences in resolution pose big challenges for Hausdorff distance computation, and explain why our method achieves more significant speedup on such challenging cases.
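The culling idea, a local upper bound falling below the global lower bound, is easiest to see on point sets. The sketch below computes a directed Hausdorff distance with that early-break rule; it operates on points rather than triangles, so it is an analogy for the bound-based culling, not the paper's mesh algorithm.

```python
import math

def directed_hausdorff(A, B):
    """Directed Hausdorff distance max_{a in A} min_{b in B} |a - b|
    over point sets, with bound-based culling: once a point's current
    nearest-distance estimate (its local upper bound) drops to or
    below the global lower bound found so far, that point can no
    longer raise the maximum and its scan is abandoned early."""
    lower = 0.0                        # global lower bound (best max so far)
    for a in A:
        best = math.inf                # local upper bound for this point
        for b in B:
            d = math.dist(a, b)
            if d < best:
                best = d
            if best <= lower:          # cull: cannot improve the estimate
                break
        if best > lower:
            lower = best
    return lower
```

The same logic drives the mesh version: a tight, cheap local upper bound lets most source primitives be discarded before any expensive exact distance is computed.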
  • Item
    Efficient Texture Parameterization Driven by Perceptual-Loss-on-Screen
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Sun, Haoran; Wang, Shiyi; Wu, Wenhai; Jin, Yao; Bao, Hujun; Huang, Jin; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Texture mapping is a ubiquitous technique for enriching the visual effect of a mesh: it maps the desired signal (e.g. diffuse color) on the mesh to a texture image discretized by pixels through a bijective parameterization. To achieve high visual quality, a large number of pixels is generally required, which imposes a heavy burden on storage, memory, and transmission. We propose to use a perceptual model and a rendering procedure to measure the loss arising from the discretization, and then optimize a parameterization to improve efficiency, i.e. to use fewer pixels under a comparable perceptual loss. The general perceptual model and rendering procedure can be very complicated, and the non-isotropic nature rooted in the square shape of pixels makes the problem more difficult to solve. We adopt a two-stage strategy and use Bayesian optimization in the triangle-wise stage. With our carefully designed weighting scheme, the mesh-wise optimization can take the triangle-wise perceptual loss into consideration under a global conforming requirement. Compared with many parameterizations that are manually designed, driven by interpolation error, or driven by isotropic energy, ours can use significantly fewer pixels with comparable perceptual loss, or vice versa.
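The goal, spending the pixel budget where the loss is highest, can be caricatured with a proportional allocation: given a per-triangle loss estimate, assign each triangle a texel share proportional to it under a fixed total. This is a toy heuristic standing in for the paper's two-stage optimization; the function and its inputs are assumptions, not the paper's method.

```python
def allocate_texels(tri_losses, total_texels):
    """Toy loss-driven budgeting: each triangle receives a texel count
    proportional to its estimated loss, so perceptually busy regions
    get more resolution under a fixed total.  Every triangle keeps at
    least one texel so the parameterization stays bijective."""
    total = sum(tri_losses)
    return [max(1, round(total_texels * loss / total)) for loss in tri_losses]
```

A parameterization that realizes such an allocation stretches low-loss triangles to occupy fewer texels, which is how "fewer pixels at comparable loss" becomes possible.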
  • Item
    Multirate Shading with Piecewise Interpolatory Approximation
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Hu, Yiwei; Yuan, Yazhen; Wang, Rui; Yang, Zhuo; Bao, Hujun; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
    Evaluating shading functions on geometry surfaces dominates the rendering computation. A high-quality but time-consuming estimate is usually achieved with a dense sampling rate at pixels or sub-pixels. In this paper, we leverage sparsely sampled points on the vertices of dynamically generated subdivision surfaces to approximate the ground-truth shading signal by piecewise linear reconstruction. To control the introduced interpolation error at runtime, we analytically derive an L∞ error bound and compute the optimal subdivision surfaces based on a user-specified error threshold. We apply our analysis to multiple shading functions, including Lambertian, Blinn-Phong, and the Microfacet BRDF, and extend it to handle textures, yielding easy-to-compute formulas. To validate our derivation, we design a forward multirate shading algorithm powered by the hardware tessellator that moves shading computation from pixels to the vertices of subdivision triangles on the fly. We show that our approach significantly reduces the sampling rates on various test cases, reaching a speedup ratio of 134% ∼ 283% compared to dense per-pixel shading on current graphics hardware.
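The derive-a-bound-then-subdivide pattern can be reproduced in 1D: linear interpolation over a segment of length h has L∞ error at most h²·M/8 when |f''| ≤ M, so inverting the bound gives the coarsest subdivision meeting a user threshold. The sketch below (plain Python, with a cosine as the shading stand-in) checks this empirically; it is a 1D analogy of the paper's derivation, not its formulas.

```python
import math

def segments_for(threshold, M):
    """Coarsest uniform segment count n on [0, 1] such that the
    linear-interpolation L_inf bound (1/n)^2 * M / 8 <= threshold,
    where M bounds |f''|."""
    return max(1, math.ceil(math.sqrt(M / (8.0 * threshold))))

def max_interp_error(f, n, probes=2000):
    """Measured max |f - piecewise-linear interpolant of f| on [0, 1]
    with n uniform segments, probed densely."""
    ys = [f(i / n) for i in range(n + 1)]
    def lerp(x):
        j = min(int(x * n), n - 1)      # segment index containing x
        t = x * n - j                   # local coordinate in [0, 1]
        return ys[j] * (1 - t) + ys[j + 1] * t
    return max(abs(f(i / probes) - lerp(i / probes)) for i in range(probes + 1))
```

The paper's runtime control works the same way at a higher level: an analytic bound on the shading function picks the subdivision density, and denser tessellation is spent only where the bound demands it.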