Search Results

Now showing 1 - 3 of 3
  • Item
    Digital Reunification of the Parthenon and its Sculptures
    (The Eurographics Association, 2003) Stumpfel, Jessi; Tchou, Christopher; Yun, Nathan; Martinez, Philippe; Hawkins, Timothy; Jones, Andrew; Emerson, Brian; Debevec, Paul; David Arnold and Alan Chalmers and Franco Niccolucci
    The location, condition, and number of the Parthenon sculptures present a considerable challenge to archeologists and researchers studying this monument. Although the Parthenon proudly stands on the Athenian Acropolis after nearly 2,500 years, many of its sculptures have been damaged or lost. Since the end of the 18th century, its surviving sculptural decorations have been scattered to museums around the world. We propose a strategy for digitally capturing a large number of sculptures while minimizing impact on site and working under time and resource constraints. Our system employs a custom structured light scanner and adapted techniques for organizing, aligning and merging the data. In particular this paper details our effort to digitally record the Parthenon sculpture collection in the Basel Skulpturhalle museum, which exhibits plaster casts of most of the known existing pediments, metopes, and frieze. We demonstrate our results by virtually placing the scanned sculptures on the Parthenon.
  • Item
    Real-Time High-Dynamic Range Texture Mapping
    (The Eurographics Association, 2001) Cohen, Jonathan; Tchou, Chris; Hawkins, Tim; Debevec, Paul; S. J. Gortler and K. Myszkowski
    This paper presents a technique for representing and displaying high-dynamic range texture maps (HDRTMs) using current graphics hardware. Dynamic range in real-world environments often far exceeds the range representable in 8-bit per-channel texture maps. The increased realism afforded by a high-dynamic range representation provides improved fidelity and expressiveness for interactive visualization of image-based models. Our technique allows for real-time rendering of scenes with arbitrary dynamic range, limited only by available texture memory. In our technique, high-dynamic range textures are decomposed into sets of 8-bit textures. These 8-bit textures are dynamically reassembled by the graphics hardware's programmable multitexturing system or using multipass techniques and framebuffer image processing. These operations allow the exposure level of the texture to be adjusted continuously and arbitrarily at the time of rendering, correctly accounting for the gamma curve and dynamic range restrictions of the display device. Further, for any given exposure only two 8-bit textures must be resident in texture memory simultaneously. We present implementation details of this technique on various 3D graphics hardware architectures. We demonstrate several applications, including high-dynamic range panoramic viewing with simulated auto-exposure, real-time radiance environment mapping, and simulated Fresnel reflection.
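    The core idea of the abstract above — splitting a high-dynamic range texture into 8-bit textures and reassembling them at display time with an adjustable exposure and gamma correction — can be illustrated with a minimal CPU-side sketch. This is not the paper's implementation (which runs on graphics-hardware multitexturing); it assumes a simple 16-bit fixed-point split into a high byte and a low byte, with the scale factor chosen arbitrarily for illustration.

    ```python
    import numpy as np

    def decompose_hdr(hdr, scale=256.0):
        # Quantize HDR radiance to 16-bit fixed point, then split it into
        # two 8-bit "textures": a high byte and a low byte.
        fixed = np.clip(hdr * scale, 0, 65535).astype(np.uint16)
        hi = (fixed >> 8).astype(np.uint8)
        lo = (fixed & 0xFF).astype(np.uint8)
        return hi, lo

    def recompose(hi, lo, exposure=1.0, gamma=2.2, scale=256.0):
        # Reassemble the 8-bit planes into radiance, apply an exposure
        # multiplier chosen at display time, and gamma-encode the result
        # for an 8-bit display device.
        radiance = (hi.astype(np.float32) * 256.0 + lo) / scale
        exposed = np.clip(radiance * exposure, 0.0, 1.0)
        return (255.0 * exposed ** (1.0 / gamma)).astype(np.uint8)
    ```

    Because the split and reassembly are linear, the exposure can be varied continuously without re-decomposing the texture — the property the paper exploits to do this per-frame on graphics hardware.
    
    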
  • Item
    Animatable Facial Reflectance Fields
    (The Eurographics Association, 2004) Hawkins, Tim; Wenger, Andreas; Tchou, Chris; Gardner, Andrew; Göransson, Fredrik; Debevec, Paul; Alexander Keller and Henrik Wann Jensen
    We present a technique for creating an animatable image-based appearance model of a human face, able to capture appearance variation over changing facial expression, head pose, view direction, and lighting condition. Our capture process makes use of a specialized lighting apparatus designed to rapidly illuminate the subject sequentially from many different directions in just a few seconds. For each pose, the subject remains still while six video cameras capture their appearance under each of the directions of lighting. We repeat this process for approximately 60 different poses, capturing different expressions, visemes, head poses, and eye positions. The images for each of the poses and camera views are registered to each other semi-automatically with the help of fiducial markers. The result is a model which can be rendered realistically under any linear blend of the captured poses and under any desired lighting condition by warping, scaling, and blending data from the original images. Finally, we show how to drive the model with performance capture data, where the pose is not necessarily a linear combination of the original captured poses.
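    The rendering model described in the abstract above — an image rendered under any linear blend of the captured poses and any desired lighting condition by blending data from the original images — reduces, at its simplest, to weighted sums over the captured image stacks. The sketch below is a heavily simplified illustration of that linearity, assuming pre-registered single-channel images; the paper's warping, scaling, and semi-automatic registration steps are omitted, and the array layout is an assumption for this example.

    ```python
    import numpy as np

    def relight(basis_images, light_weights):
        # basis_images: (n_lights, H, W), one image per captured lighting
        # direction. light_weights: (n_lights,) intensities of the novel
        # illumination projected onto the captured directions. Relighting
        # is a weighted sum over the lighting basis.
        return np.tensordot(light_weights, basis_images, axes=1)

    def blend_poses(pose_stacks, pose_weights, light_weights):
        # pose_stacks: (n_poses, n_lights, H, W). First form a linear
        # blend of the per-pose reflectance data, then relight the
        # blended stack. (Registration/warping is omitted here.)
        blended = np.tensordot(pose_weights, pose_stacks, axes=1)
        return relight(blended, light_weights)
    ```

    The design point this illustrates is that both operations commute: because pose blending and relighting are each linear, they can be applied in either order, which is what makes driving the model from performance-capture pose weights tractable.
    
    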