Eurographics Digital Library, 2011 (https://diglib.eg.org:443/handle/10.2312/8122)

Privacy and Security Assessment of Biometric Template Protection
https://diglib.eg.org:443/handle/10.2312/8276 (2011-09-19)
Zhou, Xuebing
Biometrics enables convenient authentication based on a person's physical or behavioral characteristics. In comparison with knowledge- or token-based methods, it links an identity directly to its owner. Furthermore, it cannot be forgotten or handed over easily. As biometric techniques have become more efficient and accurate, they are widely used in numerous areas. Among the most common application areas are physical and logical access control, border control, authentication in banking applications, and biometric identification in forensics. In this growing field of biometric applications, concerns about privacy and security cannot be neglected. The advantages of biometrics can easily turn into the opposite. The potential misuse of biometric information is not limited to the endangerment of user privacy, since biometric data potentially contain sensitive information such as gender, race, state of health, etc. Different applications can be linked through unique biometric data. Additionally, identity theft is a severe threat to identity management if revocation and reissuing of biometric references are practically impossible. Therefore, template protection techniques have been developed to overcome these drawbacks and limitations of biometrics. Their advantage is the creation of multiple secure references from biometric data. These secure references are supposed to be unlinkable and non-invertible in order to achieve the desired level of security and to fulfill privacy requirements. The existing algorithms can be categorized into transformation-based approaches and biometric cryptosystems. The transformation-based approaches deploy different transformation or randomization functions, while the biometric cryptosystems construct secrets from biometric data. Their integration in biometric systems is commonly accepted in research, and their feasibility in terms of recognition performance has been proven.
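A transformation-based approach of the kind described above can be illustrated with a small sketch. This is a toy, BioHashing-style cancelable template (all names and parameters are illustrative, not from the thesis): a user-specific token seeds a set of random projection directions, and the feature vector is binarized against them, so revoking the token yields a fresh, unlinkable reference.

```python
import hashlib
import random

def cancelable_template(features, user_token, n_bits=16):
    """Toy transformation-based protection (BioHashing-style sketch):
    project the feature vector onto token-derived random directions,
    then binarize each projection.  Revoking/reissuing the token yields
    a new, unlinkable secure template from the same biometric."""
    seed = int.from_bytes(hashlib.sha256(user_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)  # deterministic per token
    template = []
    for _ in range(n_bits):
        direction = [rng.gauss(0.0, 1.0) for _ in features]
        dot = sum(f * d for f, d in zip(features, direction))
        template.append(1 if dot >= 0.0 else 0)
    return template

features = [0.8, -0.1, 0.33, 0.5]              # hypothetical biometric feature vector
t1 = cancelable_template(features, "token-A")
t2 = cancelable_template(features, "token-B")  # re-issued after compromise
# same biometric, different tokens -> two independent secure references
```

Real schemes additionally quantize with user-specific thresholds and are evaluated for non-invertibility; this sketch only conveys the token-dependent randomization idea.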
Despite the success of biometric template protection techniques, their security and privacy properties have been investigated only to a limited extent. This deficiency is addressed in this thesis, and a systematic evaluation framework for biometric template protection techniques is proposed and demonstrated. Firstly, three main protection goals are identified based on a review of the requirements on template protection techniques. The identified goals can be summarized as security, privacy protection ability, and unlinkability. Furthermore, definitions of privacy and security are given, which allow quantifying the computational complexity of estimating a pre-image of a secure template and measuring the hardness of retrieving biometric data, respectively. Secondly, three threat models are identified as important prerequisites for the assessment. Threat models define the information about biometric data, system parameters, and functions that can be accessed during the evaluation or an attack. The first threat model, the so-called naive model, assumes that an adversary has very limited information about the system. In the second threat model, the advanced model, we apply Kerckhoffs' principle and assume that essential details of the algorithms as well as properties of the biometric data are known. The last threat model assumes that an adversary owns a large amount of biometric data, which allows him to exploit the inaccuracy of biometric systems; it is called the collision threat model. Finally, a systematic framework for privacy and security assessment is proposed. Before an evaluation, the protection goals and threat models need to be clarified. Based on these, metrics measuring the different protection goals, as well as an evaluation process determining these metrics, are developed. Both theoretical evaluation with metrics such as entropy and mutual information and practical evaluation based on individual attacks can be used.
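The theoretical metrics mentioned above, entropy and mutual information, can be estimated empirically. A minimal sketch (the variable names and the toy data are illustrative, not from the thesis): high mutual information between biometric samples and their protected templates would indicate information leakage, i.e. weak privacy protection.

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy in bits of the empirical distribution of `samples`."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples.
    In a template-protection evaluation, X could stand for biometric
    features and Y for the secure templates."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

biometrics = ["a", "a", "b", "b"]   # hypothetical quantized biometric samples
templates  = ["a", "a", "b", "b"]   # perfectly correlated -> full 1-bit leakage
print(mutual_information(biometrics, templates))  # 1.0
```

With real-valued biometric features one would first quantize or use density estimators; the identity-like toy data merely makes the leakage interpretation visible.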
GPU Data Structures for Graphics and Vision
https://diglib.eg.org:443/handle/10.2312/8277 (2011-05-06)
Ziegler, Gernot
Graphics hardware has in recent years become increasingly programmable, and its programming APIs use the stream processor model to expose massive parallelization to the programmer. Unfortunately, the inherent restrictions of the stream processor model, used by the GPU in order to maintain high performance, often pose a problem in porting CPU algorithms for both video and volume processing to graphics hardware. Serial data dependencies which accelerate CPU processing are counterproductive for the data-parallel GPU. This thesis demonstrates new ways for tackling well-known problems of large scale video/volume analysis. In some instances, we enable processing on the restricted hardware model by reintroducing algorithms from early computer graphics research. On other occasions, we use newly discovered, hierarchical data structures to circumvent the random-access read/fixed write restriction that had previously kept sophisticated analysis algorithms from running solely on graphics hardware. For 3D processing, we apply known game graphics concepts such as mip-maps, projective texturing, and dependent texture lookups to show how video/volume processing can benefit algorithmically from being implemented in a graphics API. The novel GPU data structures provide drastically increased processing speed, and lift processing heavy operations to real-time performance levels, paving the way for new and interactive vision/graphics applications.
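The hierarchical data structures mentioned above work around the GPU's random-access-read/fixed-write restriction by replacing serial scans with a reduction pyramid. A CPU-side sketch of the idea (in the spirit of such pyramids, not the thesis' exact GPU layout): a bottom-up pyramid of counts lets each output element independently locate its input via a short top-down traversal, which is what makes stream compaction data-parallel.

```python
def build_pyramid(flags):
    """Bottom-up reduction over a 0/1 flag array (length must be a power
    of two): each level sums adjacent pairs of the level below, so the
    top cell holds the total number of surviving elements."""
    levels = [flags[:]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        levels.append([cur[i] + cur[i + 1] for i in range(0, len(cur), 2)])
    return levels

def extract(levels, k):
    """Top-down traversal: return the base index of the k-th set flag
    (0-based).  Each output element can run this independently, with
    read-only access to the pyramid -- no serial scan needed."""
    idx = 0
    for level in reversed(levels[:-1]):
        left = level[2 * idx]          # count in the left child subtree
        if k < left:
            idx = 2 * idx              # descend left
        else:
            k -= left
            idx = 2 * idx + 1          # descend right, skip left's count
    return idx

flags = [0, 1, 0, 0, 1, 1, 0, 1]       # which input elements survive
levels = build_pyramid(flags)
compacted = [extract(levels, k) for k in range(levels[-1][0])]
print(compacted)  # [1, 4, 5, 7]
```

On the GPU the pyramid levels map naturally onto mip-map levels of a texture, which is why such structures fit the graphics API so well.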
Multimodal Training of Maintenance and Assembly Skills Based on Augmented Reality
https://diglib.eg.org:443/handle/10.2312/8275 (2011-12-07)
Webel, Sabine
The training of technicians in the acquisition of new maintenance and assembly tasks is an important factor in industry. As the complexity of these tasks can be enormous, training technicians to acquire the necessary skills to perform them efficiently is challenging. However, traditional training programs are usually highly theoretical, and it is difficult for trainees to transfer the acquired theoretical knowledge about a task to the real task conditions, or rather, to the physical performance of the task. In addition, traditional training programs are often expensive in terms of effort and cost. Previous research has shown that Augmented Reality is a powerful technology for supporting training in the particular context of industrial service procedures, since instructions on how to perform the service tasks can be directly linked to the machine parts to be processed. Various approaches exist in which the trainee is guided step by step through the maintenance task, but these systems act more as guiding systems than as training systems and focus only on the trainee's sensorimotor capabilities. Due to the increasing complexity of maintenance tasks, it is not sufficient to train the technicians' execution of these tasks; rather, the underlying skills, sensorimotor and cognitive, that are necessary for an efficient acquisition and performance of new maintenance operations must be trained. All these facts lead to the need for efficient training systems for maintenance and assembly skills which accelerate the technicians' learning and acquisition of new maintenance procedures. Furthermore, these systems should improve the adjustment of the training process to new training scenarios and enable the reuse of existing training material that has proven its worth. In this thesis a novel concept and platform for multimodal Augmented Reality-based training of maintenance and assembly skills is presented.
This concept includes the identification of necessary sub-skills, the training of the involved skills, and the design of a training program for maintenance and assembly skills. Since procedural skills are considered the most important skills for maintenance and assembly operations, they are discussed in detail, as are appropriate methods for improving them. We further show that the application of Augmented Reality technologies and the provision of multimodal feedback, vibrotactile feedback in particular, have great potential to enhance skill training in general. As a result, training strategies and specific accelerators for the training of maintenance and assembly skills in general and procedural skills in particular are elaborated. Here, accelerators are concrete methods used to implement the pursued training strategies. Furthermore, a novel concept for displaying location-dependent information in Augmented Reality environments is introduced, which can compensate for tracking imprecision. In this concept, the pointer-content metaphor of annotating documents is transferred to Augmented Reality environments. As a result, Adaptive Visual Aids are defined, which consist of a tracking-dependent pointer object and a tracking-independent content object, both providing an adaptable level and type of information. Thus, the guidance level of Augmented Reality overlays in AR-based training applications can be easily controlled. Adaptive Visual Aids can be used to substitute traditional Augmented Reality overlays (i.e., overlays in the form of 3D animations), which suffer heavily from tracking inaccuracies. The design of the multimodal AR-based training platform proposed in this thesis is not specific to the training of maintenance and assembly skills, but is a general design approach for multimodal training platforms.
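The pointer/content split behind Adaptive Visual Aids can be sketched as a small data structure. All names, thresholds, and the guidance-level scale here are illustrative assumptions, not the thesis' API: the pointer is the only part anchored by tracking, so tracking jitter never degrades the readability of the instruction content.

```python
from dataclasses import dataclass

@dataclass
class AdaptiveVisualAid:
    """Sketch of an Adaptive Visual Aid: a tracking-dependent pointer
    object plus a tracking-independent content object, each with an
    adaptable level of information (names are illustrative)."""
    pointer_anchor: tuple    # 3D position on the machine part, tracking-dependent
    content_text: str        # textual instruction, tracking-independent
    guidance_level: int = 1  # 1 = terse hint ... 3 = full step-by-step detail

    def render_info(self, tracking_error):
        # When tracking is imprecise, only the loosely anchored pointer
        # degrades; the content stays legible and can raise its guidance
        # level to compensate (0.05 is an assumed error threshold).
        level = self.guidance_level + (1 if tracking_error > 0.05 else 0)
        return {"pointer_at": self.pointer_anchor,
                "content": self.content_text,
                "guidance_level": min(level, 3)}

aid = AdaptiveVisualAid((0.4, 1.2, 0.1), "Loosen bolt M6 (counter-clockwise)")
print(aid.render_info(tracking_error=0.08)["guidance_level"])  # 2
```

A traditional 3D-animation overlay, by contrast, is entirely tracking-dependent, which is why it suffers so visibly from tracking inaccuracies.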
We further present an implementation of this platform based on the X3D ISO standard, which provides features that are useful for the development of Augmented Reality environments. This standard-based implementation increases the sustainability and portability of the platform. The implemented multimodal Augmented Reality-based platform for the training of maintenance and assembly skills has been evaluated in industry and compared to traditional training methods. The results show that the developed training platform and the pursued training strategies are very well suited for the training of maintenance and assembly skills and enhance traditional training. With the presented framework we overcome the problems sketched above: it reduces the effort and cost of training maintenance and assembly skills and improves training efficiency compared with traditional methods.
Visual Steering to Support Decision Making in Visdom
https://diglib.eg.org:443/handle/10.2312/8274 (2011-06-15)
Waser, Jürgen
Computer simulation has become a ubiquitous tool to investigate the nature of systems. When steering a simulation, users modify parameters to study their impact on the simulation outcome. The ability to test alternative options provides the basis for interactive decision making. Increasingly complex simulations are characterized by an intricate interplay of many heterogeneous input and output parameters. A steering concept that combines simulation and visualization within a single, comprehensive system is largely missing. This thesis targets the basic components of a novel integrated steering system called Visdom to support the user in the decision-making process. The proposed techniques enable users to examine alternative scenarios without the need for special simulation expertise. To accomplish this, we propose World Lines as a management strategy for multiple, related simulation runs. In a dedicated view, users create and navigate through many simulation runs. New decisions are included through the concept of branching. To account for uncertain knowledge about the input parameters, we provide the ability to cover full parameter distributions. Via multiple cursors, users navigate a system of multiple linked views through time and alternative scenarios. In this way, the system supports comparative visual analysis of many simulation runs. Since the steering process generates a huge amount of information, we employ the machine to support the user in the search for explanations inside the computed data. Visdom is built on top of a data-flow network to provide a high level of modularity. A decoupled meta-flow is in charge of transmitting parameter changes from World Lines to the affected data-flow nodes. To direct the user's attention to the most relevant parts, we provide dynamic visualization inside the flow diagram. The usefulness of the presented approach is substantiated through case studies in the field of flood management.
The Visdom application enables the design of a breach closure by dropping sandbags in a virtual environment.
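The branching of related simulation runs that World Lines manage can be sketched as a small tree of parameter overrides. This is an illustrative sketch, not the Visdom API; class and parameter names are assumptions: each branch records only the decision that distinguishes it, and the effective parameter set of a run merges everything inherited along its world line.

```python
class WorldLine:
    """Minimal sketch of World-Lines-style run management: a tree of
    simulation runs where each node stores only the parameter changes
    its decision introduces (illustrative names, not the Visdom API)."""

    def __init__(self, name, params, parent=None):
        self.name = name
        self.local_params = dict(params)
        self.parent = parent

    def branch(self, name, **changed):
        # A new decision forks an alternative scenario from this run.
        return WorldLine(name, changed, parent=self)

    def effective_params(self):
        # Merge inherited parameters, letting local decisions override.
        merged = self.parent.effective_params() if self.parent else {}
        merged.update(self.local_params)
        return merged

# Hypothetical flood-management scenario: branch a breach-closure decision.
root = WorldLine("baseline", {"water_level_m": 2.0, "sandbags": 0})
alt = root.branch("close-breach", sandbags=500)
print(alt.effective_params())  # {'water_level_m': 2.0, 'sandbags': 500}
```

Comparative visual analysis then amounts to running the simulation once per leaf and linking the resulting views, with a meta-flow propagating only the changed parameters to the affected nodes.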