Search Results

Now showing 1 - 7 of 7
  • Item
    EUROGRAPHICS 2023: CGF 42-2 Frontmatter
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Myszkowski, Karol; Niessner, Matthias
  • Item
    Neural Acceleration of Scattering-Aware Color 3D Printing
    (The Eurographics Association and John Wiley & Sons Ltd., 2021) Rittig, Tobias; Sumin, Denis; Babaei, Vahid; Didyk, Piotr; Voloboy, Alexey; Wilkie, Alexander; Bickel, Bernd; Myszkowski, Karol; Weyrich, Tim; Křivánek, Jaroslav; Mitra, Niloy and Viola, Ivan
    With the wider availability of full-color 3D printers, color-accurate 3D-print preparation has received increased attention. A key challenge lies in the inherent translucency of commonly used print materials, which blurs out details of the color texture. Previous work compensates for these scattering effects through strategic assignment of colored primary materials to printer voxels. To date, the highest-quality approach uses iterative optimization that relies on computationally expensive Monte Carlo light transport simulation to predict the surface appearance arising from subsurface scattering within a given print material distribution; that optimization, however, takes on the order of days on a single machine. In our work, we dramatically speed up the process by replacing the light transport simulation with a data-driven approach. Leveraging a deep neural network to predict the scattering within a highly heterogeneous medium, our method performs around two orders of magnitude faster than Monte Carlo rendering while yielding optimization results of similar quality. The network is based on an established method from atmospheric cloud rendering, adapted to our domain and extended by a physically motivated weight-sharing scheme that substantially reduces the network size. We analyze its performance in an end-to-end print preparation pipeline, compare quality and runtime to alternative approaches, and demonstrate its generalization to unseen geometry and material values. This enables, for the first time, full heterogeneous material optimization for 3D-print preparation within time frames on the order of the actual printing time. (A heavily simplified sketch of such a learned appearance predictor appears after this results list.)
  • Item
    Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Wang, Chao; Wolski, Krzysztof; Kerbl, Bernhard; Serrano, Ana; Bemana, Mojtaba; Seidel, Hans-Peter; Myszkowski, Karol; Leimkühler, Thomas; Chen, Renjie; Ritschel, Tobias; Whiting, Emily
    Radiance field methods represent the state of the art in reconstructing complex scenes from multi-view photos. However, these reconstructions often suffer from one or both of the following limitations: first, they typically represent scenes in low dynamic range (LDR), which restricts their use to evenly lit environments and hinders immersive viewing experiences; second, their reliance on a pinhole camera model, which assumes all scene elements are in focus in the input images, presents practical challenges and complicates refocusing during novel-view synthesis. Addressing these limitations, we present a lightweight method based on 3D Gaussian Splatting that takes as input multi-view LDR images of a scene captured with varying exposure times, apertures, and focus distances, and reconstructs a high-dynamic-range (HDR) radiance field. By incorporating analytical convolutions of Gaussians based on a thin-lens camera model, together with a tonemapping module, our reconstructions enable the rendering of HDR content with flexible refocusing capabilities. We demonstrate that our combined treatment of HDR and depth of field facilitates real-time cinematic rendering, outperforming the state of the art. (A worked sketch of the thin-lens defocus blur underlying such analytical Gaussian convolutions appears after this results list.)
  • Item
    Video Frame Interpolation for High Dynamic Range Sequences Captured with Dual-exposure Sensors
    (The Eurographics Association and John Wiley & Sons Ltd., 2023) Cogalan, Ugur; Bemana, Mojtaba; Seidel, Hans-Peter; Myszkowski, Karol; Myszkowski, Karol; Niessner, Matthias
    Video frame interpolation (VFI) enables many important applications such as slow-motion playback and frame rate conversion. However, one major challenge in using VFI is accurately handling high dynamic range (HDR) scenes with complex motion. To this end, we explore the possible advantages of dual-exposure sensors, which readily provide a sharp short and a blurry long exposure that are spatially registered and whose end points are temporally aligned. This way, the motion blur records temporally continuous information on the scene motion that, combined with the sharp reference, enables more precise motion sampling within a single camera shot. We demonstrate that this facilitates the reconstruction of more complex motion in the VFI task, as well as HDR frame reconstruction, which so far has been considered only for the originally captured frames, not for the in-between interpolated frames. We design a neural network trained for these tasks that clearly outperforms existing solutions. We also propose a metric for scene motion complexity that provides important insights into the performance of VFI methods at test time.
  • Item
    Enhancing Image Quality Prediction with Self-supervised Visual Masking
    (The Eurographics Association and John Wiley & Sons Ltd., 2024) Çogalan, Ugur; Bemana, Mojtaba; Seidel, Hans-Peter; Myszkowski, Karol; Bermano, Amit H.; Kalogerakis, Evangelos
    Full-reference image quality metrics (FR-IQMs) aim to measure the visual differences between a pair of reference and distorted images, with the goal of accurately predicting human judgments. However, existing FR-IQMs, including traditional ones like PSNR and SSIM and even perceptual ones such as HDR-VDP, LPIPS, and DISTS, still fall short in capturing the complexities and nuances of human perception. In this work, rather than devising a novel IQM model, we seek to improve the perceptual accuracy of existing FR-IQM methods. We achieve this by considering visual masking, an important characteristic of the human visual system that changes its sensitivity to distortions as a function of local image content. Specifically, for a given FR-IQM metric, we propose to learn a visual masking model that modulates the reference and distorted images so that visual errors are penalized according to their visibility. Since ground-truth visual masks are difficult to obtain, we demonstrate how they can be derived in a self-supervised manner solely from mean opinion scores (MOS) collected for an FR-IQM dataset. Our approach results in enhanced FR-IQM metrics that are more in line with human judgments, both visually and quantitatively. (A hedged code sketch of such a masking-modulated metric appears after this results list.)
  • Item
    Learning a Self-supervised Tone Mapping Operator via Feature Contrast Masking Loss
    (The Eurographics Association and John Wiley & Sons Ltd., 2022) Wang, Chao; Chen, Bin; Seidel, Hans-Peter; Myszkowski, Karol; Serrano, Ana; Chaine, Raphaëlle; Kim, Min H.
    High Dynamic Range (HDR) content is becoming ubiquitous due to the rapid development of capture technologies. Nevertheless, the dynamic range of common display devices is still limited; therefore, tone mapping (TM) remains a key challenge for image visualization. Recent work has demonstrated that neural networks can achieve remarkable performance in this task compared to traditional methods; however, the quality of the results of these learning-based methods is limited by the training data. Most existing works use as their training set a curated selection of best-performing results from traditional tone mapping operators (often guided by a quality metric); therefore, the quality of newly generated results is fundamentally limited by the performance of such operators. This quality may be further limited by the pool of HDR content used for training. In this work we propose a learning-based, self-supervised tone mapping operator that is trained at test time specifically for each HDR image and does not need any data labeling. The key novelty of our approach is a carefully designed loss function built upon fundamental knowledge of contrast perception that allows for directly comparing the content in the HDR and tone-mapped images. We achieve this goal by reformulating classic VGG feature maps into feature contrast maps that normalize local feature differences by their average magnitude in a local neighborhood, allowing our loss to account for contrast masking effects. We perform extensive ablation studies and parameter exploration, and demonstrate that our solution outperforms existing approaches with a single set of fixed parameters, as confirmed by both objective and subjective metrics. (A hedged code sketch of such feature contrast maps appears after this results list.)
  • Item
    Bracket Diffusion: HDR Image Generation by Consistent LDR Denoising
    (The Eurographics Association and John Wiley & Sons Ltd., 2025) Bemana, Mojtaba; Leimkühler, Thomas; Myszkowski, Karol; Seidel, Hans-Peter; Ritschel, Tobias; Bousseau, Adrien; Day, Angela
    We demonstrate generating HDR images using the concerted action of multiple black-box, pre-trained LDR image diffusion models. Common diffusion models are not HDR because, first, no sufficiently large HDR image dataset is available to re-train them and, second, even if there were, re-training such models would be infeasible for most compute budgets. Instead, we seek inspiration from the HDR image capture literature, which traditionally fuses sets of LDR images, called "exposure brackets", to produce a single HDR image. We operate multiple denoising processes to generate multiple LDR brackets that together form a valid HDR result. To this end, we introduce a bracket consistency term into the diffusion process to couple the brackets such that they agree across the exposure range they share. We demonstrate HDR versions of state-of-the-art unconditional, conditional, and restoration-type (LDR2HDR) generative modeling. (A sketch of the classic exposure-bracket fusion that this work builds on appears after this results list.)
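
For "Neural Acceleration of Scattering-Aware Color 3D Printing" above, a heavily simplified sketch of what a learned appearance predictor can look like: a network that maps a local neighborhood of printer-material assignments to a surface color, standing in for the Monte Carlo subsurface-scattering simulation. The class name, layer sizes, one-hot voxel encoding, and neighborhood size are illustrative assumptions; the paper's actual network (adapted from deep cloud-scattering prediction, with physically motivated weight sharing) is more structured than this.

    import torch
    import torch.nn as nn

    class AppearancePredictor(nn.Module):
        """Toy stand-in for a learned scattering predictor: material voxels -> RGB."""
        def __init__(self, neighborhood=9, materials=4, hidden=256):
            super().__init__()
            in_dim = neighborhood ** 3 * materials        # one-hot material label per voxel
            self.mlp = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),                     # predicted RGB at the surface point
            )

        def forward(self, voxels):                        # voxels: (B, materials, N, N, N)
            return self.mlp(voxels.flatten(1))

    # Toy usage: random material assignments in a 9x9x9 neighborhood around a surface point.
    net = AppearancePredictor()
    labels = torch.randint(0, 4, (8, 9, 9, 9))
    onehot = torch.nn.functional.one_hot(labels, 4).permute(0, 4, 1, 2, 3).float()
    print(net(onehot).shape)                              # torch.Size([8, 3])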
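
For "Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field" above, a worked sketch of the two standard ingredients behind analytical defocus of Gaussian splats: the thin-lens circle of confusion (CoC) and the fact that convolving two Gaussians adds their covariances. Treating the CoC disc as an isotropic Gaussian whose standard deviation equals the CoC radius, as well as the parameter names and toy numbers, are our assumptions, not values from the paper.

    import numpy as np

    def coc_diameter(depth, focus_dist, focal_length, aperture):
        # Thin-lens circle-of-confusion diameter on the sensor (same units as focal_length).
        return aperture * abs(depth - focus_dist) / depth * focal_length / (focus_dist - focal_length)

    def defocused_covariance(cov2d_px, depth, focus_dist, focal_length, aperture, px_per_unit):
        # Convolving Gaussians adds covariances: N(mu, S1) convolved with N(0, S2) = N(mu, S1 + S2).
        # Assumption: the CoC disc is approximated by an isotropic Gaussian with std = CoC radius.
        sigma_px = 0.5 * coc_diameter(depth, focus_dist, focal_length, aperture) * px_per_unit
        return cov2d_px + (sigma_px ** 2) * np.eye(2)

    # Toy usage: splat 3 m away, lens focused at 1.5 m, 50 mm f/2 lens, 20,000 px per meter of sensor.
    cov = np.array([[2.0, 0.3], [0.3, 1.0]])              # screen-space covariance in px^2
    print(defocused_covariance(cov, depth=3.0, focus_dist=1.5,
                               focal_length=0.05, aperture=0.025, px_per_unit=20_000))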
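
For "Enhancing Image Quality Prediction with Self-supervised Visual Masking" above, a hedged sketch of the general idea of a masking-modulated metric: a small network predicts a per-pixel visibility mask that multiplies both the reference and the distorted image before a base FR-IQM is evaluated. The tiny convolutional network, predicting the mask from the reference only, and the PSNR-style base metric are illustrative assumptions; the paper's masking model and its self-supervised training on MOS data are more involved.

    import torch
    import torch.nn as nn

    class MaskingNet(nn.Module):
        """Toy per-pixel visibility mask in (0, 1), predicted from the reference image."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.net(x)

    def masked_metric(ref, dist, mask_net):
        m = mask_net(ref)                                 # low mask values = strong masking
        mse = ((ref * m - dist * m) ** 2).mean()          # base metric on the modulated images
        return -10.0 * torch.log10(mse + 1e-12)           # PSNR-style score

    ref = torch.rand(1, 3, 64, 64)
    dist = (ref + 0.05 * torch.randn_like(ref)).clamp(0, 1)
    print(masked_metric(ref, dist, MaskingNet()).item())

In the paper, the masking predictor is fitted so that the resulting scores correlate with mean opinion scores; the untrained network here only illustrates the data flow.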
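
For "Learning a Self-supervised Tone Mapping Operator via Feature Contrast Masking Loss" above, a hedged sketch of one plausible reading of the feature contrast maps described in the abstract: VGG features are re-expressed as local differences normalized by the average feature magnitude in a neighborhood, and the loss compares these maps between two images. The layer cut-off (relu3_3), window size, and epsilon are our assumptions; the paper's exact formulation may differ.

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg16

    def feature_contrast(feat, win=5, eps=1e-6):
        # Local mean and local average magnitude over a win x win window.
        local_mean = F.avg_pool2d(feat, win, stride=1, padding=win // 2)
        local_mag = F.avg_pool2d(feat.abs(), win, stride=1, padding=win // 2)
        # Local feature difference normalized by the local average magnitude.
        return (feat - local_mean) / (local_mag + eps)

    def contrast_masking_loss(img_a, img_b, features):
        fa, fb = features(img_a), features(img_b)
        return (feature_contrast(fa) - feature_contrast(fb)).abs().mean()

    # VGG16 truncated after relu3_3 (downloads ImageNet weights on first use).
    features = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
    a, b = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
    with torch.no_grad():
        print(contrast_masking_loss(a, b, features).item())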
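
For "Bracket Diffusion: HDR Image Generation by Consistent LDR Denoising" above, a sketch of the classic exposure-bracket fusion the abstract refers to: a weighted merge of linearized LDR exposures in the spirit of Debevec and Malik. A linear camera response and a simple hat weighting function are simplifying assumptions here; the paper's bracket consistency term inside the diffusion process is not shown.

    import numpy as np

    def merge_brackets(ldr_images, exposure_times):
        """ldr_images: list of HxWx3 arrays in [0, 1]; exposure_times: one exposure per image."""
        num = np.zeros_like(ldr_images[0], dtype=np.float64)
        den = np.zeros_like(num)
        for img, t in zip(ldr_images, exposure_times):
            w = 1.0 - np.abs(2.0 * img - 1.0)             # hat weight: trust mid-tones, ignore clipped pixels
            num += w * (img / t)                          # linear response assumed: radiance ~ pixel / exposure
            den += w
        return num / np.maximum(den, 1e-8)                # fused HDR radiance map

    # Toy usage: synthetic brackets of a known radiance map are fused back almost exactly.
    radiance = np.random.rand(4, 4, 3) * 3.0
    times = [0.25, 1.0, 4.0]
    brackets = [np.clip(radiance * t, 0.0, 1.0) for t in times]
    print(np.abs(merge_brackets(brackets, times) - radiance).max())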