Search Results

Now showing 1 - 3 of 3
  • Convolution Shadow Maps
    (The Eurographics Association, 2007) Annen, Thomas; Mertens, Tom; Bekaert, Philippe; Seidel, Hans-Peter; Kautz, Jan. Eds.: Jan Kautz and Sumanta Pattanaik
    We present Convolution Shadow Maps, a novel shadow representation that affords efficient arbitrary linear filtering of shadows. Traditional shadow mapping is inherently non-linear w.r.t. the stored depth values, due to the binary shadow test. We linearize the problem by approximating the shadow test as a weighted summation of basis terms. We demonstrate the usefulness of this representation, and show that hardware-accelerated anti-aliasing techniques, such as tri-linear filtering, can be applied naturally to Convolution Shadow Maps. Our approach can be implemented very efficiently on current-generation graphics hardware, and offers real-time frame rates.
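
    The linearization lends itself to a short sketch. Below is a minimal NumPy illustration (not the authors' GPU shader code), assuming a toy 1-D shadow map, a box filter, and truncation order M = 16: a truncated Fourier expansion of the binary test splits into depth-only basis images that can be prefiltered with any linear kernel before the test is reconstructed.

      import numpy as np

      # Shadow test: a receiver at depth d is lit when d <= z, the stored
      # blocker depth. Truncated Fourier series of the Heaviside step:
      #   H(z - d) ~= 1/2 + (2/pi) * sum_{k odd} sin(k*pi*(z - d)) / k
      # The angle-difference identity splits each term into z-only basis
      # images (storable, linearly filterable) and d-only coefficients.

      M = 16                                    # truncation order (assumption)
      ks = np.arange(1, 2 * M, 2, dtype=float)  # odd harmonics 1, 3, 5, ...

      def basis(z):
          """z-dependent basis images, one sin/cos pair per harmonic."""
          a = np.pi * ks[:, None] * z[None, :]
          return np.sin(a), np.cos(a)           # shapes (M, n)

      def reconstruct(d, sin_z, cos_z):
          """Approximate H(z - d) from (possibly prefiltered) basis images."""
          c, s = np.cos(np.pi * ks * d), np.sin(np.pi * ks * d)
          terms = c[:, None] * sin_z - s[:, None] * cos_z
          return 0.5 + (2.0 / np.pi) * np.sum(terms / ks[:, None], axis=0)

      rng = np.random.default_rng(0)
      z = rng.uniform(size=64)                  # toy 1-D "shadow map" depths
      d = 0.6                                   # receiver depth to test

      sin_z, cos_z = basis(z)
      kernel = np.ones(4) / 4.0                 # any linear filter works here
      blur = lambda b: np.apply_along_axis(np.convolve, 1, b, kernel, mode="same")

      csm = reconstruct(d, blur(sin_z), blur(cos_z))                  # filter, then test
      pcf = np.convolve((d <= z).astype(float), kernel, mode="same")  # test, then filter
      print(np.max(np.abs(csm - pcf)))          # small, up to Gibbs ringing

    Because the filtering happens on the basis images before reconstruction, the same trick extends to mip-mapping and tri-linear filtering in hardware, which is the property the paper exploits.
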
  • Texture Transfer Using Geometry Correlation
    (The Eurographics Association, 2006) Mertens, Tom; Kautz, Jan; Chen, Jiawen; Bekaert, Philippe; Durand, Frédo. Eds.: Tomas Akenine-Moeller and Wolfgang Heidrich
    Texture variation on real-world objects often correlates with underlying geometric characteristics and creates a visually rich appearance. We present a technique to transfer such geometry-dependent texture variation from an example textured model to new geometry in a visually consistent way. It captures the correlation between a set of geometric features, such as curvature, and the observed diffuse texture. We perform dimensionality reduction on the overcomplete feature set which yields a compact guidance field that is used to drive a spatially varying texture synthesis model. In addition, we introduce a method to enrich the guidance field when the target geometry strongly differs from the example. Our method transfers elaborate texture variation that follows geometric features, which gives 3D models a compelling photorealistic appearance.
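
    As a rough illustration of the dimensionality-reduction step, the sketch below treats it as plain PCA over a hypothetical per-vertex feature matrix; the feature set, the standardization, and the component count are assumptions, and the spatially varying synthesis stage itself is omitted.

      import numpy as np

      def guidance_field(features, n_components=3):
          """Compress an overcomplete per-vertex feature set (e.g. curvature
          at several scales) into a compact guidance field via PCA."""
          # Standardize so features measured in different units are comparable.
          x = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)
          # Rows of vt are the principal directions of the feature cloud.
          _, _, vt = np.linalg.svd(x, full_matrices=False)
          return x @ vt[:n_components].T        # (n_vertices, n_components)

      # Toy usage: 1000 vertices with 12 correlated geometric features that
      # in fact vary along only two latent degrees of freedom.
      rng = np.random.default_rng(1)
      latent = rng.normal(size=(1000, 2))
      features = latent @ rng.normal(size=(2, 12)) + 0.05 * rng.normal(size=(1000, 12))
      print(guidance_field(features).shape)     # (1000, 3)
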
  • Real-time Neural Rendering of LiDAR Point Clouds
    (The Eurographics Association, 2025) Vanherck, Joni; Zoomers, Brent; Mertens, Tom; Jorissen, Lode; Michiels, Nick. Eds.: Duygu Ceylan and Tzu-Mao Li
    Static LiDAR scanners produce accurate, dense, colored point clouds, but these often contain obtrusive artifacts that make them ill-suited for direct display. We propose an efficient method to render more perceptually realistic images of such scans without any expensive preprocessing or training of a scene-specific model. A naive projection of the point cloud to the output view using 1×1 pixels is fast and retains the available detail, but also results in unintelligible renderings, as background points leak between the foreground pixels. The key insight is that these projections can be transformed into a more realistic result using a deep convolutional model in the form of a U-Net, and a depth-based heuristic that prefilters the data. The U-Net also handles LiDAR-specific problems such as missing parts due to occlusion, color inconsistencies, and varying point densities. We also describe a method to generate synthetic training data to deal with imperfectly aligned ground-truth images. Our method achieves real-time rendering rates using an off-the-shelf GPU and outperforms the state of the art in both speed and quality.
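
    The naive projection and a depth-based prefilter in its spirit can be sketched as follows; the pinhole camera model, the 3×3 neighborhood, and the slack factor are illustrative assumptions (not the authors' parameters), and the U-Net that repairs the masked pixels is omitted.

      import numpy as np

      def project_points(points, colors, K, R, t, h, w, slack=0.1):
          """Naive 1x1-pixel splatting of a colored point cloud, followed by
          a depth-based heuristic that masks leaking background points."""
          cam = (R @ points.T + t[:, None]).T        # world -> camera coords
          front = cam[:, 2] > 0                      # keep points ahead of camera
          cam, colors = cam[front], colors[front]
          uv = (K @ cam.T).T
          px = (uv[:, :2] / uv[:, 2:3]).astype(int)  # perspective divide
          ok = (px[:, 0] >= 0) & (px[:, 0] < w) & (px[:, 1] >= 0) & (px[:, 1] < h)
          px, z, colors = px[ok], cam[ok, 2], colors[ok]

          image = np.zeros((h, w, 3))
          depth = np.full((h, w), np.inf)
          order = np.argsort(-z)                     # draw far-to-near: near wins
          image[px[order, 1], px[order, 0]] = colors[order]
          depth[px[order, 1], px[order, 0]] = z[order]

          # A pixel lying far behind the nearest depth in its 3x3 neighborhood
          # is likely background leaking between foreground points; mask it so
          # a later inpainting stage can fill it instead.
          pad = np.pad(depth, 1, constant_values=np.inf)
          neigh = np.stack([pad[dy:dy + h, dx:dx + w]
                            for dy in range(3) for dx in range(3)])
          leak = depth > np.min(neigh, axis=0) * (1.0 + slack)
          image[leak] = 0.0
          return image, np.where(leak, np.inf, depth)
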