Search Results
Now showing 1 - 7 of 7
Item Schnelle Kurven- und Flächendarstellung auf grafischen Sichtgeräten (1974-09-05) Straßer, Wolfgang; For application in the interactive design of curves and surfaces on graphical display devices, the first part of this work presents known mathematical methods in a compact, uniform matrix notation. As a new method, B-spline approximation is examined with respect to its properties and its possibilities for computer-aided design, and is illustrated with examples. B-spline approximation proves to be not only the most universal and most easily handled mathematical method, but also the one best suited for hardware generation of curves and surfaces. A new method for shading surfaces in real time is presented and supported with images. The second part, taking new hardware components into account, presents digital components for a display processor: a vector generator, circle generator, matrix multiplier, divider, and curve and surface generator.

Item Feature Centric Volume Visualization (Malik, 11.12.2009) Malik, Muhammad Muddassir; This thesis presents techniques and algorithms for the effective exploration of volumetric datasets. The visualization techniques are designed to focus on user-specified features of interest. The proposed techniques are grouped into four chapters, namely feature peeling, computation and visualization of fabrication artifacts, locally adaptive marching cubes, and comparative visualization for parameter studies of dataset series. The presented methods enable the user to efficiently explore a volumetric dataset for features of interest.

Feature peeling is a novel rendering algorithm that analyzes ray profiles along lines of sight. The profiles are subdivided according to encountered peaks and valleys at so-called transition points. The sensitivity of these transition points is calibrated via two thresholds.
The slope threshold is based on the magnitude of a peak following a valley, while the peeling threshold measures the depth of a transition point relative to the neighboring rays. This technique separates the dataset into a number of feature layers.

Fabrication artifacts are of prime importance for quality-control engineers during first-part inspection of industrial components. Techniques are presented in this thesis to measure fabrication artifacts through direct comparison of a reference CAD model with the corresponding industrial 3D X-ray computed tomography volume. Information from the CAD model is used to locate corresponding points in the volume data. Various comparison metrics are then computed to measure differences (fabrication artifacts) between the CAD model and the volumetric dataset. The comparison metrics are classified as either geometry-driven or visually driven comparison techniques.

The locally adaptive marching cubes algorithm is a modification of the marching cubes algorithm in which, instead of a global iso-value, each grid point has its own iso-value. This defines an iso-value field, which modifies the case-identification process in the algorithm. An iso-value field enables the algorithm to correct biases within the dataset such as low-frequency noise, contrast drifts, local density variations, and other artifacts introduced by the measurement process. It can also be used for blending between different iso-surfaces (e.g., skin and bone in a medical dataset).

Comparative visualization techniques are proposed to carry out parameter studies for the special application area of dimensional measurement using industrial 3D X-ray computed tomography. A dataset series is generated by scanning a specimen multiple times while varying parameters of the scanning device. A high-resolution series is explored using a planar-reformatting-based visualization system.
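The case-identification step of the locally adaptive marching cubes variant described above can be sketched as follows; the corner values, the per-corner iso-value field, and the corner ordering are illustrative assumptions, not the thesis's implementation:

```python
# Sketch: marching-cubes case identification with a per-corner iso-value
# field instead of a single global iso-value.

def cell_case_index(densities, iso_values):
    """Compute the 8-bit marching-cubes case index for one cell.

    densities  -- scalar value at each of the 8 cell corners
    iso_values -- locally adaptive iso-value at the same corners
    """
    case = 0
    for corner, (d, iso) in enumerate(zip(densities, iso_values)):
        if d >= iso:            # corner lies inside the local iso-surface
            case |= 1 << corner
    return case

# With a constant iso-value field this reduces to classic marching cubes:
classic = cell_case_index([0.2, 0.8, 0.1, 0.9, 0.3, 0.7, 0.4, 0.6], [0.5] * 8)
# A locally varying field can compensate for, e.g., a contrast drift,
# pulling an extra corner inside the surface:
adaptive = cell_case_index([0.2, 0.8, 0.1, 0.9, 0.3, 0.7, 0.4, 0.6],
                           [0.15, 0.45, 0.5, 0.55, 0.5, 0.5, 0.5, 0.5])
```

The case index then selects the triangulation exactly as in the classic algorithm; only the inside/outside test per corner changes.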
A multi-image view and an edge explorer are proposed for comparing and visualizing gray values and edges of several datasets simultaneously. For fast data retrieval and convenient usability, the datasets are bricked and efficient data structures are used.

Item Selected Quality Metrics for Digital Passport Photographs (Gonzalez Castillo, 12.12.2007) Gonzalez Castillo, Oriana Yuridia; Facial images play a significant role as biometric identifiers. The accurate identification of individuals is nowadays becoming more and more important and can have a large impact on security. Good quality of the facial images in passport photographs is essential for accurate identification of individuals. The quality-acceptance procedure presently in use is based on human visual perception and is thus subjective and not standardized. Existing algorithms for measuring image quality are applied to all types of images and are not focused on determining the quality of passport photographs. Moreover, only a few documents exist that define conformance requirements for determining the quality of digital passport photographs. A major one is "Biometrics Deployment of Machine Readable Travel Documents", published by the International Civil Aviation Organization (ICAO). This thesis deals with the development of metrics for the automated determination of the quality and the grade of acceptance of digital passport photographs without any reference image being available. Based on the above-mentioned ICAO document, quality-conformance sentences and related attributes are abstracted with self-developed methods. About fifty passport photographs have been taken under strictly controlled conditions to fulfill all requirements given by that document. Different kinds of algorithms were implemented to determine values for image attributes and to detect the face features.
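A minimal sketch of how such per-attribute measurements might be combined into a single, survey-weighted quality index; the attribute names, scores, and weights here are illustrative assumptions, not the thesis's data or formula:

```python
# Sketch: combine per-attribute quality scores (in [0, 1], higher is
# better) into one index via survey-derived weights. All names and
# numbers below are hypothetical.

def quality_index(scores, weights):
    """Weighted mean of per-attribute quality scores."""
    total_weight = sum(weights[a] for a in scores)
    return sum(scores[a] * weights[a] for a in scores) / total_weight

# Hypothetical survey-derived weights and measured attribute scores:
weights = {"sharpness": 0.35, "illumination": 0.25,
           "head_pose": 0.25, "background": 0.15}
scores = {"sharpness": 0.9, "illumination": 0.8,
          "head_pose": 1.0, "background": 0.6}

paqi = quality_index(scores, weights)
```

Normalizing by the total weight keeps the index in [0, 1] even when only a subset of attributes could be measured for a given photograph.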
This ground-truth database was the source used to "translate" natural language into numeric values, describing how "good quality" is represented by numbers. No priority for the evaluation of attributes was given in the ICAO document. For that reason, an international online and on-site survey was developed to explore the opinion of expert users whose work is related to passport photographs. They were asked to evaluate the relevance of different types of attributes of a passport photograph. Based on that survey, weights for the different types of attributes were calculated. These weights express the different importance of the attributes for the evaluation process. Three different metrics, expressed by the Photograph-/Image-/Biometric-Attributes-Quality Indexes (PAQI, IAQI, BAQI), have been developed to obtain reference values for the quality determination of a passport photograph. Experiments are described to show that the quality of a selected digital passport photograph can be measured and that different attributes which have an impact on the quality and on the recognition of face features can be identified. Critical issues are discussed, and the thesis closes with recommendations for further research.

Item Processing Semantically Enriched Content for Interactive 3D Visualizations (Settgast, 13-05-28) Settgast, Volker; Interactive 3D graphics has become an essential tool in many fields of application: In manufacturing companies, for example, new products are planned and tested digitally. The effect of new designs can be assessed and ergonomic aspects tested with purely virtual models. Furthermore, the training of procedures on complex machines is shifted to the virtual world.
In that way, support costs for the usage of the real machine are reduced, and effective forms of training evaluation become possible.

Virtual reality also helps to preserve and study cultural heritage: Artifacts can be digitized and preserved in a digital library, making them accessible to a larger group of people. Various forms of analysis can be performed on the digital objects that are hardly possible on the real objects or would destroy them. Virtual-reality environments such as large projection walls help to show virtual scenes in a realistic way. The level of immersion can be further increased by using stereoscopic displays and by adjusting the images to the head position of the observer.

One challenge with virtual reality is inconsistency in the data. Moving 3D content from a useful state, e.g., from a repository of artifacts or from within a planning workflow, to an interactive presentation is often realized with degenerative preparation steps. The productiveness of Powerwalls and CAVEs™ is called into question because the creation of interactive virtual worlds is in many cases a one-way road: Data has to be reduced in order to be manageable by the interactive renderer and to be displayed in real time on various target platforms. The impact of virtual reality can be improved by bringing results from the virtual environment back to a useful state, or better still, by never leaving that state.

With the help of semantic data throughout the whole process, it is possible to speed up the preparation steps and to keep important information within the virtual 3D scene. Integrated support for semantic data enhances the virtual experience and opens new ways of presentation. At the same time, it becomes feasible to bring data from the presentation, for example in a CAVE™, back to the working process.
Especially in the field of cultural heritage, it is essential to store semantic data with the 3D artifacts in a sustainable way.

This thesis presents new ways of handling semantic data in interactive 3D visualizations. The whole process of 3D data creation is demonstrated with regard to semantic sustainability. The basic terms, definitions, and available standards for semantic markup are described. Additionally, a method is given to generate semantics of higher order automatically. An important aspect is the linking of semantic information with 3D data. The thesis gives two suggestions on how to store and publish the valuable combination of 3D content and semantic markup in a sustainable way. Different environments for virtual reality are compared and their special needs are pointed out. Primarily, the DAVE in Graz is presented in detail, and novel ways of user interaction in such immersive environments are proposed. Finally, applications in the fields of cultural heritage, security, and mobility are presented. The presented symbiosis of 3D content and semantic information is an important contribution to improving the usage of virtual environments in various fields of application.

Item Accelerating Geometric Queries for Computer Graphics: Algorithms, Techniques, and Applications (0000-08-16) Evangelou, Iordanis; In the ever-evolving context of Computer Graphics, the demand for realistic and real-time virtual environments, and for interaction with digitised or born-digital content, has grown exponentially. Whether in gaming, production rendering, computer-aided design, reverse engineering, geometry processing and understanding, or simulation tasks, the ability to rapidly and accurately perform geometric queries of any type is crucial. The actual form of a geometric query varies depending on the task at hand, the application domain, the input representation, and the adopted methodology.
These queries may involve intersection tests, as in the case of ray tracing; spatial queries, such as those needed for recovering nearest sample neighbours; geometry registration, in order to classify polygonal primitive inputs; or even virtual-scene understanding, in order to suggest and embed configurations, as in the case of light optimisation and placement. As the applications of these algorithms and, consequently, their complexity continuously grow, traditional geometric queries fall short when naïvely adopted and integrated in practical scenarios. These methods then face limitations in terms of computational efficiency and query bandwidth. This is particularly pronounced in scenarios where vast amounts of geometric data must be processed at interactive or even real-time rates. More often than not, one has to inspect and understand the internal mechanics and theory of the algorithms invoking these geometric queries. This is particularly useful for devising procedures appropriately tailored to the underlying task and hence maximising their efficiency, both in terms of performance and output quality. As a result, there is an enormous area of research that explores innovative approaches to geometric-query acceleration, addressing the challenges posed.

The primary focus of this research was to develop innovative methods for accelerating geometric queries within the domain of Computer Graphics. This entails a comprehensive exploration of algorithmic optimisations, including the development of advanced data structures and neural-network architectures tailored to efficiently handle geometric collections. This research addressed not only the computational complexity of individual queries but also the adaptability of the proposed solutions to diverse applications and scenarios, primarily within the realm of Computer Graphics but also in intersecting domains.
The outcome of this research holds the potential to influence the fields that adopt these geometric-query methodologies by addressing the associated computational challenges and unlocking novel directions for real-time rendering, interactive simulation, and immersive virtual experiences. More specifically, the contributions of this thesis are divided into two broad directions for accelerating geometric queries: a) global-illumination-related, hardware-accelerated nearest-neighbour queries, and b) the application of deep learning to the definition of novel data structures and geometric-query methods.

In the first part, we consider the task of real-time global illumination using photon density estimators. In particular, we investigate scenarios where complex illumination effects, such as caustics, which are mainly handled by progressive photon mapping algorithms, require vast amounts of rays to be traced from both the eye sensor and the light sources. Photons emanating from lights are cached in the surface geometry or volumetric media and must be gathered at query locations on the paths traced from the camera sensor. To achieve real-time frame rates, gathering, an expensive operation, needs to be handled efficiently. This is accomplished by adapting screen-space ray tracing and splatting to the hardware-accelerated rasterisation pipeline. Since the gathering phase is inherently a subcategory of nearest-neighbour search, we also propose how to generalise this concept efficiently to any such task by exploiting existing low-level hardware-accelerated ray-tracing frameworks, effectively boosting the query phase by orders of magnitude compared to the traditional strategies involved.

In the second part, we shift our focus to a more generic class of geometric queries. The first work involves accurate and fast shape classification using neural-network architectures.
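At its core, the gathering operation described above is a k-nearest-neighbour density estimate over cached photons. A brute-force reference sketch, following the standard photon-mapping radiance estimate; the photon layout and all data here are made up, and the thesis replaces this linear scan with hardware-accelerated ray-tracing frameworks:

```python
import math

# Brute-force reference for photon gathering: find the k nearest cached
# photons around a query point and estimate density as flux over the
# enclosing disc area (standard photon-mapping estimate).

def gather(photons, query, k):
    """Return (flux values, squared radius) for the k nearest photons.

    photons -- list of (position, flux) pairs, position an (x, y, z) tuple
    query   -- (x, y, z) shading point
    """
    def dist2(p):
        return sum((a - b) ** 2 for a, b in zip(p, query))
    nearest = sorted(photons, key=lambda ph: dist2(ph[0]))[:k]
    r2 = dist2(nearest[-1][0])      # radius enclosing the k-th photon
    return [flux for _, flux in nearest], r2

def density_estimate(photons, query, k):
    """Sum of gathered flux divided by the gathering disc area."""
    fluxes, r2 = gather(photons, query, k)
    return sum(fluxes) / (math.pi * r2)
```

The O(n) scan per query is exactly the cost that makes naïve gathering too slow at real-time rates; the same interface can be backed by a spatial data structure or, as in the thesis, by a hardware ray-tracing framework.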
We demonstrate that a hybrid architecture, which processes the orientation and a voxel-based representation of the input, is capable of processing hard-to-distinguish solid geometry in the context of building information models. Second, we consider geometric queries in the context of scene understanding. More precisely, optimising the placement and light intensities of luminaires in urban spaces can be a computationally intricate task, especially for large inputs and conflicting constraints. Methodologies employed in the literature usually make assumptions about the input representation to mitigate the intractable nature of this task. In this thesis, we approach the problem with a holistic solution that can produce feasible and diverse proposals in real time by adopting a neural generative-modelling methodology. Finally, we propose a novel and general approach to evaluating the recursive cost functions used in the construction of geometric-query acceleration data structures. This work establishes a new research direction for the construction of data structures guided by recursive cost functions using neural architectures. Our goal is to overcome the exhaustive but intractable evaluation of the cost function in order to generate a high-quality data structure for spatial queries.

Item Process-Based Design of Multimedia Annotation Systems (Hofmann, 06.12.2010) Hofmann, Cristian Erick; Annotation of digital multimedia comprises a range of different application scenarios, supported media and annotation formats, and involved techniques. Accordingly, recent annotation environments provide numerous functions and editing options. This results in complexly designed user interfaces, so that human operators are disoriented with respect to task procedures and the selection of the appropriate tools.

In this thesis we contribute to the operability of multimedia annotation systems in several novel ways.
We introduce concepts to support annotation processes, to which principles of Workflow Management are transferred. Particularly by focusing on the behavior of graphical user-interface components, we achieve a significant decrease in user disorientation and in processing times. In three initial studies, we investigate multimedia annotation from two different perspectives. A Feature-oriented Analysis of Annotation Systems describes applied techniques and the forms of processed data. Moreover, an Empirical Study and Literature Survey elucidate different practices of annotation, considering case examples and proposed workflow models. Based on the results of the preliminary studies, we establish a Generic Process Model of Multimedia Annotation, summarizing the identified sub-processes and tasks, their sequential procedures, applied services, as well as the involved data formats. Through a transfer into a Formal Process Specification, we define information entities and their interrelations, constituting a basis for workflow modeling and declaring the types of data which need to be managed and processed by the technical system. We propose a Reference Architecture Model, which elucidates the structure and behavior of a process-based annotation system, also specifying the interactions and interfaces between the different integrated components. As the central contribution of this thesis, we introduce a concept for Process-driven User Assistance. This implies visual and interactive access to a given workflow, representation of the workflow progress, and status-dependent invocation of tools. We present results from a User Study conducted by means of the so-called SemAnnot framework, which we implemented based on the considerations mentioned above. In this study we show that applying our proposed concept for process-driven user assistance leads to strongly significant improvements in the operability of multimedia annotation systems.
These improvements are associated with the partial aspects of efficiency, learnability, usability, process overview, and user satisfaction.

Item Filament-Based Smoke (Weißmann, 15. 9. 2010) Weißmann, Steffen; This cumulative dissertation presents a complete model for simulating smoke using polygonal vortex filaments. Based on a Hamiltonian system for the dynamics of smooth vortex filaments, we develop an efficient and robust algorithm that allows simulations in real time. The discrete smoke-ring flow allows the use of coarse polygonal vortex filaments while preserving the qualitative behavior of the smooth system. The method handles rigidly moving obstacles as boundary conditions and simulates vortex shedding. Obstacles as well as shed vorticity are also represented as polygonal filaments. Variational vortex reconnection prevents the exponential increase of filament length over time without significantly modifying the fluid velocity field. This allows simulations over extended periods of time. The algorithm reproduces various real experiments (colliding vortex rings, wakes) that are challenging for classical methods.
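The velocity field induced by a polygonal vortex filament such as those above can be sketched with the classical Biot-Savart closed form for a straight vortex segment; the circulation value and the small smoothing term are illustrative assumptions, not the thesis's Hamiltonian discretisation:

```python
import math

# Sketch: velocity induced at a point by a closed polygonal vortex
# filament, summing the standard Biot-Savart closed form over straight
# segments. eps regularises the singular filament core.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def segment_velocity(p, a, b, gamma, eps=1e-6):
    """Velocity induced at p by a straight vortex segment from a to b."""
    r1 = tuple(pi - ai for pi, ai in zip(p, a))
    r2 = tuple(pi - bi for pi, bi in zip(p, b))
    c = cross(r1, r2)
    l1, l2 = norm(r1), norm(r2)
    denom = l1 * l2 * (l1 * l2 + dot(r1, r2)) + eps
    scale = gamma / (4.0 * math.pi) * (l1 + l2) / denom
    return tuple(scale * ci for ci in c)

def filament_velocity(p, vertices, gamma):
    """Sum segment contributions around a closed polygonal filament."""
    v = (0.0, 0.0, 0.0)
    n = len(vertices)
    for i in range(n):
        dv = segment_velocity(p, vertices[i], vertices[(i + 1) % n], gamma)
        v = tuple(x + y for x, y in zip(v, dv))
    return v
```

For a square filament in the z = 0 plane, symmetry makes the induced velocity at its centre point purely along the z-axis, which gives a quick sanity check of an implementation.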