Volume 32 (2013)
Browsing Volume 32 (2013) by Issue Date
Now showing 1 - 20 of 255
Item Towards High-dimensional Data Analysis in Air Quality Research (The Eurographics Association and Blackwell Publishing Ltd., 2013) Engel, Daniel; Hummel, Mathias; Hoepel, Florian; Bein, Keith; Wexler, Anthony; Garth, Christoph; Hamann, Bernd; Hagen, Hans; B. Preim, P. Rheingans, and H. Theisel
Analysis of chemical constituents from mass spectrometry of aerosols involves non-negative matrix factorization, an approximation of high-dimensional data in a lower-dimensional space. The associated optimization problem is non-convex, resulting in crude approximation errors that are not accessible to scientists. To address this shortcoming, we introduce a new methodology for user-guided, error-aware data factorization that entails an assessment of the amount of information contributed by each dimension of the approximation, an effective combination of visualization techniques to highlight, filter, and analyze error features, as well as a novel means of interactively refining factorizations. A case study and the domain-expert feedback provided by the collaborating atmospheric scientists illustrate that our method effectively communicates the errors of such numerical optimization results and facilitates the computation of high-quality data factorizations in a simple and intuitive manner.

Item Analytic Visibility on the GPU (The Eurographics Association and Blackwell Publishing Ltd., 2013) Auzinger, Thomas; Wimmer, Michael; Jeschke, Stefan; I. Navazo, P. Poulin
This paper presents a parallel, implementation-friendly analytic visibility method for triangular meshes. Together with an analytic filter convolution, it allows for a fully analytic solution to anti-aliased 3D mesh rendering on parallel hardware. Building on recent work in computational geometry, we present a new edge-triangle intersection algorithm and a novel method to complete the boundaries of all visible triangle regions after a hidden line elimination step.
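The non-negative matrix factorization underlying the air-quality item above can be sketched in a few lines. The multiplicative-update rule below is a generic textbook scheme (Lee-Seung style), not the authors' solver, and the per-column error report only mirrors the idea of assessing how much error each original dimension carries:

```python
import numpy as np

def nmf(X, k, iters=200, seed=0):
    """Approximate a non-negative matrix X (m x n) as W @ H with
    W, H >= 0 using multiplicative updates. The objective is
    non-convex, so different seeds can reach different local minima,
    which is exactly the kind of approximation error the paper
    surfaces to the user."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    eps = 1e-9  # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

def column_errors(X, W, H):
    # Residual norm per original dimension (column) of X.
    return np.linalg.norm(X - W @ H, axis=0)
```

The updates keep both factors non-negative by construction, so no projection step is needed.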
All stages of the method are embarrassingly parallel and easily implementable on parallel hardware. A GPU implementation is discussed, and the performance characteristics of the method are shown and compared to those of traditional sampling-based rendering methods.

Item Spherical Visibility Sampling (The Eurographics Association and Blackwell Publishing Ltd., 2013) Eikel, Benjamin; Jähn, Claudius; Fischer, Matthias; Heide, Friedhelm Meyer auf der; Nicolas Holzschuch and Szymon Rusinkiewicz
Many 3D scenes (e.g. generated from CAD data) are composed of a multitude of objects that are nested in each other. A showroom, for instance, may contain multiple cars, and every car has a gearbox with many gearwheels located inside. Because the objects occlude each other, only a few are visible from outside. We present a new technique, Spherical Visibility Sampling (SVS), for real-time 3D rendering of such (possibly highly complex) scenes. SVS exploits this occlusion and annotates hierarchically structured objects with directional visibility information in a preprocessing step. For different directions, the directional visibility encodes which objects of a scene's region are visible from outside the region's enclosing bounding sphere. Since there is no need to store a separate view-space subdivision, as in most techniques based on preprocessed visibility, a small memory footprint is achieved. Using the directional visibility information in an interactive walkthrough, the potentially visible objects can be retrieved very efficiently without the need for further visibility tests. Our evaluation shows that SVS allows complex 3D scenes to be preprocessed quickly and visualized in real time (e.g. a Power Plant model and five animated Boeing 777 models with billions of triangles).
Because SVS does not require hardware support for occlusion culling during rendering, it is even applicable to rendering large scenes on mobile devices.

Item Removing the Noise in Monte Carlo Rendering with General Image Denoising Algorithms (The Eurographics Association and Blackwell Publishing Ltd., 2013) Kalantari, Nima Khademi; Sen, Pradeep; I. Navazo, P. Poulin
Monte Carlo rendering systems can produce important visual effects such as depth of field, motion blur, and area lighting, but the rendered images suffer from objectionable noise at low sampling rates. Although years of research in image processing have produced powerful denoising algorithms, most of them assume that the noise is spatially invariant over the entire image and cannot be directly applied to denoise Monte Carlo renderings. In this paper, we propose a new approach that enables the use of any spatially-invariant image denoising technique to remove the noise in Monte Carlo renderings. Our key insight is to use a noise estimation metric to locally identify the amount of noise in different parts of the image, coupled with a multilevel algorithm that denoises the image in a spatially-varying manner using a standard denoising technique. We also propose a new way to perform adaptive sampling that uses the noise estimation metric to identify the noisy regions in which to place more samples. We show that our framework runs in a few seconds with modern denoising algorithms and produces results that outperform state-of-the-art techniques in Monte Carlo rendering.

Item Analysis and Visualization of Maps Between Shapes (The Eurographics Association and Blackwell Publishing Ltd., 2013) Ovsjanikov, M.; Ben-Chen, M.; Chazal, F.; Guibas, L.; Holly Rushmeier and Oliver Deussen
In this paper we propose a method for analysing and visualizing individual maps between shapes, or collections of such maps.
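The spatially-varying denoising framework described in the Monte Carlo item above can be caricatured in a few lines: estimate the noise level locally, then pick among several strengths of a spatially-invariant denoiser per pixel. Gaussian filtering stands in here for "any" off-the-shelf denoiser, and the local-standard-deviation noise metric is an assumption for illustration, not the paper's metric:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_spatially_varying(img, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Multilevel sketch: run a spatially-invariant denoiser at several
    strengths and select a stronger level where the image is noisier."""
    # Crude local noise estimate: local standard deviation.
    mean = gaussian_filter(img, 2.0)
    var = np.maximum(gaussian_filter(img * img, 2.0) - mean * mean, 0.0)
    noise = np.sqrt(var)
    # Map normalised noise in [0, 1] to a denoising level per pixel.
    t = noise / (noise.max() + 1e-12)
    levels = np.stack([gaussian_filter(img, s) for s in sigmas])
    idx = np.clip((t * (len(sigmas) - 1)).round().astype(int),
                  0, len(sigmas) - 1)
    rows, cols = np.indices(img.shape)
    # Per-pixel pick from the stack of uniformly denoised images.
    return levels[idx, rows, cols]
```

The same noise map could drive adaptive sampling, i.e. placing more samples where the estimate is high, mirroring the second contribution of the paper.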
Our method is based on isolating and highlighting areas where the maps induce significant distortion of a given measure in a multi‐scale way. Unlike the majority of prior work, which focuses on discovering maps in the context of shape matching, our main focus is on evaluating, analysing and visualizing a given map, and the distortion(s) it introduces, in an efficient and intuitive way. We are motivated primarily by the fact that most existing metrics for map evaluation are quadratic and expensive to compute in practice, and that current map visualization techniques are suitable primarily for global map understanding and typically do not highlight areas where the map fails to meet certain quality criteria in a multi‐scale way. We propose to address these challenges in a unified way by considering the functional representation of a map and performing spectral analysis on this representation. In particular, we propose a simple multi‐scale method for map evaluation and visualization, which provides detailed multi‐scale information about the distortion induced by a map and can be used alongside existing global visualization techniques.

Item Bilateral Hermite Radial Basis Functions for Contour-based Volume Segmentation (The Eurographics Association and Blackwell Publishing Ltd., 2013) Ijiri, Takashi; Yoshizawa, Shin; Sato, Yu; Ito, Masaaki; Yokota, Hideo; I. Navazo, P. Poulin
In this paper, we propose a novel contour-based volume image segmentation technique. Our technique is based on an implicit surface reconstruction strategy, whereby a signed scalar field is generated from user-specified contours. The key idea is to compute the scalar field in a joint spatial-range domain (i.e., bilateral domain) and resample its values on an image manifold. We introduce a new formulation of Hermite radial basis function (HRBF) interpolation to obtain the scalar field in the bilateral domain. In contrast to previous implicit methods, bilateral HRBF (B-HRBF) generates a segmentation boundary that passes through all contours, fits high-contrast image edges if they exist, and has a smooth shape in blurred areas of images. We also propose an acceleration scheme for computing B-HRBF to support a real-time and intuitive segmentation interface.
In our experiments, we achieved high-quality segmentation results for regions of interest with high-contrast edges and blurred boundaries.

Item Enhancing Bayesian Estimators for Removing Camera Shake (The Eurographics Association and Blackwell Publishing Ltd., 2013) Wang, C.; Yue, Y.; Dong, F.; Tao, Y.; Ma, X.; Clapworthy, G.; Ye, X.; Holly Rushmeier and Oliver Deussen
The aim of removing camera shake is to estimate a sharp version x of a shaken image y when the blur kernel k is unknown. Recent research on this topic has evolved through two paradigms, called MAP(k) and MAP(x,k). MAP(k) solves only for k by marginalizing over the image prior, while MAP(x,k) recovers both x and k by selecting the mode of the posterior distribution. This paper first systematically analyses the latent limitations of these two estimators through Bayesian analysis. We explain why it is so difficult for image statistics to resolve the previously reported MAP(x,k) failure. We then show that the leading MAP(x,k) methods, which depend on efficient prediction of large step edges, are not robust to natural images due to the diversity of edges. MAP(k), although much more robust to diverse edges, is constrained by two factors: the variation of the prior over different images, and the ratio between image size and kernel size. To overcome these limitations, we introduce an inter‐scale prior prediction scheme and a principled mechanism for integrating the sharpening filter into MAP(k). Both qualitative results and extensive quantitative comparisons demonstrate that our algorithm outperforms state‐of‐the‐art methods.

Item A Collaborative Digital Pathology System for Multi‐Touch Mobile and Desktop Computing Platforms (The Eurographics Association and Blackwell Publishing Ltd., 2013) Jeong, W.; Schneider, J.; Hansen, A.; Lee, M.; Turney, S. G.; Faulkner‐Jones, B. E.; Hecht, J. L.; Najarian, R.; Yee, E.; Lichtman, J. W.; Pfister, H.; Holly Rushmeier and Oliver Deussen
Collaborative slide image viewing systems are becoming increasingly important in pathology applications such as telepathology and E‐learning. Despite rapid advances in computing and imaging technology, current digital pathology systems have limited performance with respect to remote viewing of whole slide images on desktop or mobile computing devices. In this paper we present a novel digital pathology client–server system that supports collaborative viewing of multi‐plane whole slide images over standard networks using multi‐touch‐enabled clients. Our system is built upon a standard HTTP web server and a MySQL database to allow multiple clients to exchange images and metadata concurrently. We introduce a domain‐specific image‐stack compression method that leverages real‐time hardware decoding on mobile devices. It adaptively encodes image stacks in a decorrelated colour space to achieve extremely low bitrates (0.8 bpp) with very low loss of image quality.
We evaluate the image quality of our compression method and the performance of our system for diagnosis with an in‐depth user study.

Item Level-of-Detail Streaming and Rendering using Bidirectional Sparse Virtual Texture Functions (The Eurographics Association and Blackwell Publishing Ltd., 2013) Schwartz, Christopher; Ruiters, Roland; Klein, Reinhard; B. Levy, X. Tong, and K. Yin
Bidirectional Texture Functions (BTFs) are among the highest-quality material representations available today and are thus well suited whenever an exact reproduction of the appearance of a material or complete object is required. In recent years, BTFs have started to find application in various industrial settings, and there is also growing interest in the cultural heritage domain. BTFs are usually measured from real-world samples and easily comprise tens or hundreds of gigabytes. By using data-driven compression schemes, such as matrix or tensor factorization, a more compact but still faithful representation can be derived. This way, BTFs can be employed for real-time rendering of photo-realistic materials on the GPU.
However, scenes containing multiple BTFs, or even single objects with high-resolution BTFs, easily exceed the available GPU memory on today's consumer graphics cards unless quality is drastically reduced by the compression. In this paper, we propose the Bidirectional Sparse Virtual Texture Function, a hierarchical level-of-detail approach for the real-time rendering of large BTFs that requires only a small amount of GPU memory. More importantly, for larger numbers of BTFs or higher resolutions, the GPU and CPU memory demand grows only marginally and the GPU workload remains constant. For this, we extend the concept of sparse virtual textures by choosing an appropriate prioritization, finding a trade-off between factorization components and spatial resolution. Besides GPU memory, the high demand on bandwidth poses a serious limitation for the deployment of conventional BTFs. We show that our proposed representation can be combined with an additional transmission compression and then be employed for streaming the BTF data to the GPU from local storage media or over the Internet. In combination with the introduced prioritization, this allows for fast visualization of relevant content in the user's field of view and consecutive progressive refinement.

Item Animation-Aware Quadrangulation (The Eurographics Association and Blackwell Publishing Ltd., 2013) Marcias, Giorgio; Pietroni, Nico; Panozzo, Daniele; Puppo, Enrico; Sorkine-Hornung, Olga; Yaron Lipman and Hao Zhang
Geometric meshes that model animated characters must be designed while taking into account the deformations that the shape will undergo during animation. We analyze an input sequence of meshes with point-to-point correspondence, and we automatically produce a quadrangular mesh that fits the input animation well.
We first analyze the local deformation that the surface undergoes at each point, and we initialize a cross field that remains as aligned as possible to the principal directions of deformation throughout the sequence. We then smooth this cross field based on an energy that uses a weighted combination of the initial field and the local amount of stretch. Finally, we compute a field-aligned quadrangulation with an off-the-shelf method. Our technique is fast and very simple to implement, and it significantly improves the quality of the output quad mesh and its suitability for character animation, compared to creating the quad mesh based on a single pose. We present experimental results and comparisons with a state-of-the-art quadrangulation method, on both sequences from 3D scanning and synthetic sequences obtained by a rough animation of a triangulated model.

Item Modelling Bending Behaviour in Cloth Simulation Using Hysteresis (The Eurographics Association and Blackwell Publishing Ltd., 2013) Wong, T. H.; Leach, G.; Zambetta, F.; Holly Rushmeier and Oliver Deussen
Real cloth exhibits bending effects, such as residual curvatures and permanent wrinkles. These are typically explained by bending plastic deformation due to internal friction in the fibre and yarn structure. Internal friction also gives rise to energy dissipation, which significantly affects cloth dynamic behaviour. In textile research, hysteresis is used to analyse these effects, and can be modelled using complex friction terms at the level of the fabric's geometric structure. The hysteresis loop is central to the modelling and understanding of elastic and inelastic (plastic) behaviour, and is often measured as a physical characteristic to analyse and predict fabric behaviour. However, in cloth simulation in computer graphics the use of hysteresis to capture these effects has not been reported so far. Existing approaches have typically used plasticity models for simulating plastic deformation.
In this paper, we report on our investigation into experiments using a simple mathematical approximation to an ideal hysteresis loop at a high level to capture the previously mentioned effects. Fatigue weakening effects during repeated flexural deformation are also considered based on the hysteresis model. Comparisons with previous bending models and plasticity methods are provided to point out differences and advantages. The method requires only a small amount of extra computation time.

Item Lighting Simulation of Augmented Outdoor Scene Based on a Legacy Photograph (The Eurographics Association and Blackwell Publishing Ltd., 2013) Xing, Guanyu; Zhou, Xuehong; Peng, Qunsheng; Liu, Yanli; Qin, Xueying; B. Levy, X. Tong, and K. Yin
We propose a novel approach to simulating the illumination of an augmented outdoor scene based on a legacy photograph. Unlike previous work, which takes only surface radiosity or lighting-related prior information as the basis of illumination estimation, our method integrates both. By adopting spherical harmonics, we derive a linear model with only six illumination parameters. The illumination of an outdoor scene is finally calculated by solving a linear least-squares problem with a colour constraint on the sunlight and the skylight. A high-quality environment map is then set up, leading to realistic rendering results. We also explore the problem of shadow casting between real and virtual objects without knowing the geometry of the objects that cast the shadows. An efficient method is proposed to project complex shadows (such as tree shadows) on the ground of the real scene onto the surface of the virtual object with texture mapping. Finally, we present a unified scheme for image composition of a real outdoor scene with virtual objects, ensuring their illumination consistency and shadow consistency. Experiments demonstrate the effectiveness and flexibility of our method.

Item Scalable Symmetry Detection for Urban Scenes (The Eurographics Association and Blackwell Publishing Ltd., 2013) Kerber, J.; Bokeloh, M.; Wand, M.; Seidel, H.-P.; Holly Rushmeier and Oliver Deussen
In this paper, we present a novel method for detecting partial symmetries in very large point clouds of 3D city scans.
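The six-parameter linear lighting model in the outdoor-lighting item above reduces to an ordinary linear least-squares solve once a basis is fixed. The basis below (a constant term plus normal-dependent terms) is an illustrative choice, not the paper's exact sun-and-sky parameterisation, and the colour constraints are omitted:

```python
import numpy as np

def sh_basis(normals):
    # Six low-order, spherical-harmonics-like basis terms evaluated at
    # each surface normal (illustrative parameterisation).
    nx, ny, nz = normals.T
    return np.stack([np.ones_like(nx), nx, ny, nz, nx * nz, ny * nz],
                    axis=1)

def estimate_lighting(normals, observed):
    """Solve the linear least-squares problem A c = b for the six
    illumination parameters c, given per-point normals and observed
    shading values."""
    A = sh_basis(normals)
    c, *_ = np.linalg.lstsq(A, observed, rcond=None)
    return c

def shade(normals, c):
    # Re-render shading from the fitted parameters.
    return sh_basis(normals) @ c
```

With a handful of reliable samples the system is heavily overdetermined, which is what makes such low-parameter lighting models robust to estimate from a single photograph.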
Unlike previous work, which has only been demonstrated on data sets of a few hundred megabytes at most, our method scales to very large scenes: we map the detection problem to a nearest-neighbour problem in a low-dimensional feature space, and follow this with a cascade of tests for geometric clustering of potential matches. Our algorithm robustly handles noisy real-world scanner data, obtaining a recognition performance comparable to that of state-of-the-art methods. In practice, it scales linearly with scene size and achieves a high absolute throughput, processing half a terabyte of scanner data overnight on a dual-socket commodity PC.

Item Exponential Soft Shadow Mapping (The Eurographics Association and Blackwell Publishing Ltd., 2013) Shen, Li; Feng, Jieqing; Yang, Baoguang; Nicolas Holzschuch and Szymon Rusinkiewicz
In this paper we present an image-based algorithm to render visually plausible anti-aliased soft shadows in real time. Our technique employs a new shadow pre-filtering method based on an extended exponential shadow mapping theory. The algorithm achieves faithful contact shadows by adopting an optimal approximation to the exponential shadow reconstruction function.
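The exponential reconstruction at the core of this technique is simple to state: store exp(c·z) for occluder depths, pre-filter it linearly, and multiply by exp(-c·z) for the receiver depth at shading time. The sketch below uses a plain (non-tiled) summed area table for the box pre-filter, so, unlike the paper's overflow-free structure, it relies on float64 headroom; the constant c and window radius are illustrative:

```python
import numpy as np

def box_filter_sat(a, r):
    # Average over a (2r+1) x (2r+1) window using a summed area table,
    # with edge-replicated borders.
    p = np.pad(a, ((r + 1, r), (r + 1, r)), mode="edge")
    sat = p.cumsum(axis=0).cumsum(axis=1)
    n = 2 * r + 1
    return (sat[n:, n:] - sat[:-n, n:]
            - sat[n:, :-n] + sat[:-n, :-n]) / (n * n)

def esm_visibility(shadow_depth, receiver_depth, c=30.0, r=1):
    # Pre-filter exp(c * z_occluder) once; a point at depth z_r then
    # receives visibility ~ E[exp(c * (z_occ - z_r))], clamped to [0, 1].
    filtered = box_filter_sat(np.exp(c * shadow_depth), r)
    return np.clip(filtered * np.exp(-c * receiver_depth), 0.0, 1.0)
```

Because the filtering happens entirely on exp(c·z), any separable or SAT-based filter can be applied before the per-pixel test, which is what makes the exponential formulation pre-filterable in the first place.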
Benefiting from a novel overflow-free summed-area-table tile grid data structure, numerical stability is guaranteed and erroneous filtering responses are avoided. By integrating an adaptive anisotropic filtering method, the proposed algorithm can produce high-quality smooth shadows both in large penumbra areas and in high-frequency sharp transitions, while guaranteeing low memory consumption and high performance.

Item Geosemantic Snapping for Sketch-Based Modeling (The Eurographics Association and Blackwell Publishing Ltd., 2013) Shtof, Alex; Agathos, Alexander; Gingold, Yotam; Shamir, Ariel; Cohen-Or, Daniel; I. Navazo, P. Poulin
Modeling 3D objects from sketches is a process that requires solving several challenging problems, including segmentation, recognition and reconstruction. Some of these tasks are harder for humans and some are harder for the machine. At the core of the problem lies the need for a semantic understanding of the shape's geometry from the sketch. In this paper we propose a method to model 3D objects from sketches by utilizing humans specifically for semantic tasks that are very simple for humans and extremely difficult for the machine, while utilizing the machine for tasks that are harder for humans. The user assists recognition and segmentation by choosing and placing specific geometric primitives on the relevant parts of the sketch. The machine first snaps the primitive to the sketch by fitting its projection to the sketch lines, and then improves the model globally by inferring geosemantic constraints that link the different parts. The fitting occurs in real time, allowing the user to be only as precise as needed to have a good starting configuration for this non-convex optimization problem. We evaluate the accessibility of our approach with a user study.

Item Efficient Interpolation of Articulated Shapes Using Mixed Shape Spaces (The Eurographics Association and Blackwell Publishing Ltd., 2013) Marras, S.; Cashman, T. J.; Hormann, K.; Holly Rushmeier and Oliver Deussen
Interpolation between compatible triangle meshes that represent different poses of some object is a fundamental operation in geometry processing. A common approach is to consider the static input shapes as points in a suitable shape space and then use simple linear interpolation in this space to find an interpolated shape. In this paper, we present a new interpolation technique that is particularly tailored to meshes that represent articulated shapes. It is up to an order of magnitude faster than state‐of‐the‐art methods and gives very similar results. To achieve this, our approach introduces a novel shape space that takes advantage of the underlying structure of articulated shapes and distinguishes between rigid parts and non‐rigid joints. This allows us to use fast vertex interpolation on the rigid parts and resort to comparatively slow edge‐based interpolation only for the joints.

Item Fabrication-aware Design with Intersecting Planar Pieces (The Eurographics Association and Blackwell Publishing Ltd., 2013) Schwartzburg, Yuliy; Pauly, Mark; I. Navazo, P. Poulin
We propose a computational design approach to generate 3D models composed of interlocking planar pieces.
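The reason the mixed-shape-space item above restricts plain vertex interpolation to rigid parts is easy to demonstrate: linearly blending vertex positions shortens anything that rotates. A minimal illustration of the artifact (not the paper's method):

```python
import numpy as np

def lerp_vertices(V0, V1, t):
    # Linear vertex interpolation: the cheap path that behaves well on
    # parts undergoing pure translation, but shortens edges when a part
    # rotates, which is why joints need a different treatment.
    return (1.0 - t) * V0 + t * V1

# A unit segment rotated by 90 degrees about the origin.
V0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
V1 = np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
mid = lerp_vertices(V0, V1, 0.5)
# The blended segment is shorter than the unit-length endpoints.
shrunk_length = np.linalg.norm(mid[1] - mid[0])
```

At t = 0.5 the segment's length drops to 1/sqrt(2), which is exactly the kind of distortion an edge-based scheme at the joints avoids.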
We show how intricate 3D forms can be created by sliding the pieces into each other along straight slits, leading to a simple construction that does not require glue, screws, or other means of support. To facilitate the design process, we present an abstraction model that formalizes the main geometric constraints imposed by fabrication and assembly, and incorporates conditions on the rigidity of the resulting structure. We show that the tight coupling of constraints makes manual design highly nontrivial, and we introduce an optimization method to automate constraint satisfaction based on an analysis of the constraint relation graph. This algorithm ensures that the planar parts can be fabricated and assembled. We demonstrate the versatility of our approach by creating 3D toy models, an architectural design study, and several examples of functional furniture.

Item Shape Matching via Quotient Spaces (The Eurographics Association and Blackwell Publishing Ltd., 2013) Ovsjanikov, Maks; Mérigot, Quentin; Patraucean, Viorica; Guibas, Leonidas; Yaron Lipman and Hao Zhang
We introduce a novel method for non-rigid shape matching, designed to address the symmetric ambiguity problem present when matching shapes with intrinsic symmetries. Unlike the majority of existing methods, which try to overcome this ambiguity by sampling a set of landmark correspondences, we address this problem directly by performing shape matching in an appropriate quotient space, where the symmetry has been identified and factored out. This allows us both to simplify the shape matching problem by matching between subspaces, and to return multiple solutions with equally good dense correspondences. Remarkably, both symmetry detection and shape matching are done without establishing any landmark correspondences between either points or parts of the shapes. This allows us to avoid an expensive combinatorial search present in most intrinsic symmetry detection and shape matching methods.
We compare our technique with state-of-the-art methods and show that superior performance can be achieved both when the symmetry on each shape is known and when it needs to be estimated.

Item AmniVis - A System for Qualitative Exploration of Near-Wall Hemodynamics in Cerebral Aneurysms (The Eurographics Association and Blackwell Publishing Ltd., 2013) Neugebauer, Mathias; Lawonn, Kai; Beuing, Oliver; Berg, Philipp; Janiga, Gabor; Preim, Bernhard; B. Preim, P. Rheingans, and H. Theisel
The qualitative exploration of near-wall hemodynamics in cerebral aneurysms provides important insights for risk assessment. For instance, a direct relation between complex flow patterns and aneurysm formation has been observed. Due to the high complexity of the underlying time-dependent flow data, the exploration is challenging, in particular for medical researchers not familiar with such data. We present the AmniVis-Explorer, a system that is designed for the preparation of a qualitative medical study. The provided features were developed in close collaboration with the medical researchers involved in the study. This comprises methods for a purposeful selection of surface regions of interest and a novel approach to providing a 2D overview of the flow patterns that are represented by streamlines at these regions. Furthermore, we present a specialized interface that supports binary classification of patterns and temporal exploration, as well as methods for selection, highlighting and automatic 3D navigation to particular patterns. Based on eight representative datasets, we conducted informal interviews with two board-certified radiologists and a flow expert to evaluate the system.
It was confirmed that the AmniVis-Explorer allows for easy selection, qualitative exploration and classification of near-wall flow patterns that are represented by streamlines.

Item Consistent Shape Maps via Semidefinite Programming (The Eurographics Association and Blackwell Publishing Ltd., 2013) Huang, Qi-Xing; Guibas, Leonidas; Yaron Lipman and Hao Zhang
Recent advances in shape matching have shown that jointly optimizing the maps among the shapes in a collection can lead to significant improvements when compared to estimating maps between pairs of shapes in isolation. These methods typically invoke a cycle-consistency criterion: compositions of maps along a cycle of shapes should approximate the identity map. This condition regularizes the network and allows for the correction of errors and imperfections in individual maps. In particular, it encourages the estimation of maps between dissimilar shapes by compositions of maps along a path of more similar shapes. In this paper, we introduce a novel approach for obtaining consistent shape maps in a collection that formulates the cycle-consistency constraint as the solution to a semidefinite program (SDP). The proposed approach is based on the observation that, if the ground-truth maps between the shapes are cycle-consistent, then the matrix that stores all pairwise maps in blocks is low-rank and positive semidefinite. Motivated by recent advances in techniques for low-rank matrix recovery via semidefinite programming, we formulate the problem of estimating cycle-consistent maps as finding the closest positive semidefinite matrix to an input matrix that stores all the initial maps. By analyzing the Karush-Kuhn-Tucker (KKT) optimality conditions of this program, we derive theoretical guarantees for the proposed algorithm, ensuring the correctness of the recovery when the errors in the input maps do not exceed certain thresholds.
Besides this theoretical guarantee, experimental results on benchmark datasets show that the proposed approach outperforms state-of-the-art multiple shape matching methods.
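The "closest positive semidefinite matrix" at the heart of this formulation has a classical spectral form when semidefiniteness is the only constraint: symmetrise and clip negative eigenvalues. The paper's actual SDP carries additional structure (block map layout, low rank), so the snippet below shows only the unconstrained projection step, for intuition:

```python
import numpy as np

def nearest_psd(M):
    """Frobenius-nearest positive semidefinite matrix to M: symmetrise,
    eigendecompose, zero out negative eigenvalues (Higham's projection).
    eigh is appropriate because the symmetrised matrix S is symmetric."""
    S = 0.5 * (M + M.T)
    w, V = np.linalg.eigh(S)
    # V * clip(w) scales each eigenvector column by its clipped
    # eigenvalue, so this is V @ diag(max(w, 0)) @ V.T.
    return (V * np.clip(w, 0.0, None)) @ V.T
```

A PSD input passes through unchanged, while a negative definite input collapses to the zero matrix; a matrix of noisy pairwise maps lands somewhere in between, which is the denoising effect the consistency constraint provides.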