Volume 39 (2020)
https://diglib.eg.org:443/handle/10.2312/2632879

Visual Analytics in Dental Aesthetics (2020)
https://diglib.eg.org:443/handle/10.1111/cgf14174
Amirkhanov, Aleksandr; Bernhard, Matthias; Karimov, Alexey; Stiller, Sabine; Geier, Andreas; Gröller, Eduard; Mistelbauer, Gabriel
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Dental healthcare increasingly employs computer-aided design software to provide patients with high-quality dental prosthetic devices. In modern dental reconstruction, dental technicians address the unique anatomy of each patient individually by capturing the dental impression and measuring the mandibular movements. Subsequently, they design a custom denture that fits the patient from a functional point of view. The current workflow does not include a systematic analysis of aesthetics; dental technicians rely only on their experience and on an aesthetically pleasing mock-up that they discuss with the patient. The final denture aesthetics therefore remain unknown until the denture is fitted to the patient. In this work, we present a solution that integrates aesthetics analysis into the functional workflow of dental technicians. Our solution uses a video recording of the patient to preview the denture design at any stage of the design process. We present a teeth pose estimation technique that enables denture preview, and a set of linked visualizations that support dental technicians in the aesthetic design of dentures. These visualizations assist dental technicians in choosing the most aesthetically fitting preset from a library of dentures, in identifying the suitable denture size, and in adjusting the denture position. We demonstrate the utility of our system with four use cases, explored by a dental technician. We also performed a quantitative evaluation of teeth pose estimation and an informal usability evaluation, with positive outcomes concerning the integration of aesthetics analysis into the functional workflow.
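The abstract does not detail the teeth pose estimation itself. As a purely illustrative sketch (not the authors' method), the rigid pose between a denture model and corresponding 3D landmarks detected in a video frame could be recovered with the Kabsch algorithm; all names and data below are hypothetical.

```python
import numpy as np

def kabsch_pose(model_pts: np.ndarray, target_pts: np.ndarray):
    """Recover rotation R and translation t minimizing ||R @ m + t - target||
    over corresponding (N, 3) point sets."""
    mu_m = model_pts.mean(axis=0)
    mu_t = target_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (model_pts - mu_m).T @ (target_pts - mu_t)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_m
    return R, t

# Hypothetical usage: align a toy "denture" point set to a rotated, shifted copy
rng = np.random.default_rng(0)
model = rng.standard_normal((20, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
target = model @ R_true.T + np.array([1.0, 2.0, 3.0])
R, t = kabsch_pose(model, target)
```

In practice the correspondence between model points and video landmarks would itself have to be estimated, which is the harder part of the problem.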

Slice and Dice: A Physicalization Workflow for Anatomical Edutainment (2020)
https://diglib.eg.org:443/handle/10.1111/cgf14173
Raidou, Renata Georgia; Gröller, Eduard; Wu, Hsiang-Yun
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
During the last decades, anatomy has become an interesting topic in education, even for laymen or schoolchildren. As medical imaging techniques become increasingly sophisticated, virtual anatomical education applications have emerged. Still, physical anatomical models are often preferred, as they facilitate 3D localization of anatomical structures. Recently, data physicalizations (i.e., physical visualizations) have proven to be effective and engaging, sometimes even more so than their virtual counterparts. So far, medical data physicalizations involve mainly 3D printing, which is still expensive and cumbersome. We investigate alternative forms of physicalizations, which use readily available technologies (home printers) and inexpensive materials (paper or semi-transparent films) to generate crafts for anatomical edutainment. To the best of our knowledge, this is the first computer-generated crafting approach within an anatomical edutainment context. Our approach follows a cost-effective, simple, and easy-to-employ workflow, resulting in assemblable data sculptures (i.e., semi-transparent sliceforms). It primarily supports volumetric data (such as CT or MRI), but mesh data can also be imported. An octree slices the imported volume, and an optimization step simplifies the slice configuration, proposing the optimal order for easy assembly. A packing algorithm places the resulting slices with their labels, annotations, and assembly instructions on a paper or transparent film of user-selected size, to be printed, assembled into a sliceform, and explored. We conducted two user studies to assess our approach, demonstrating that it is an initial positive step towards the successful creation of interactive and engaging anatomical physicalizations.
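The slice-and-pack pipeline can be illustrated in miniature. The sketch below is a simplification under stated assumptions: it extracts evenly spaced orthogonal slices (the paper uses an octree and an optimization step, which are not reproduced here) and lays them out with a greedy shelf-packing pass; all function and variable names are hypothetical.

```python
import numpy as np

def make_sliceform(volume: np.ndarray, n_slices: int):
    """Extract evenly spaced axial and coronal slices from a 3D volume.
    Each axial slice meets each coronal slice along a shared line of voxels,
    so the two families can interlock as a sliceform."""
    z_idx = np.linspace(0, volume.shape[0] - 1, n_slices).astype(int)
    y_idx = np.linspace(0, volume.shape[1] - 1, n_slices).astype(int)
    axial = [volume[z, :, :] for z in z_idx]
    coronal = [volume[:, y, :] for y in y_idx]
    return axial, coronal

def pack_slices(slices, page_w, page_h, margin=2):
    """Greedy shelf packing: place slices left to right in rows ('shelves'),
    returning (slice_index, x, y) placements on a single page."""
    placements, x, y, shelf_h = [], margin, margin, 0
    for i, s in enumerate(slices):
        h, w = s.shape
        if x + w + margin > page_w:        # row full: start a new shelf
            x, y = margin, y + shelf_h + margin
            shelf_h = 0
        if y + h + margin > page_h:
            raise ValueError("page too small for the remaining slices")
        placements.append((i, x, y))
        x += w + margin
        shelf_h = max(shelf_h, h)
    return placements

# Hypothetical usage on a toy 8x8x8 "CT volume"
volume = np.arange(8 * 8 * 8).reshape(8, 8, 8)
axial, coronal = make_sliceform(volume, 4)
placements = pack_slices(axial + coronal, page_w=100, page_h=100)
```

A real implementation would additionally cut interlocking slots where the slice planes intersect and emit printable labels and assembly instructions, as the abstract describes.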

RadEx: Integrated Visual Exploration of Multiparametric Studies for Radiomic Tumor Profiling (2020)
https://diglib.eg.org:443/handle/10.1111/cgf14172
Mörth, Eric; Wagner-Larsen, Kari; Hodneland, Erlend; Krakstad, Camilla; Haldorsen, Ingfrid S.; Bruckner, Stefan; Smit, Noeska N.
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Better understanding of the complex processes driving tumor growth and metastases is critical for developing targeted treatment strategies in cancer. Radiomics extracts large numbers of features from medical images, enabling radiomic tumor profiling in combination with clinical markers. However, analyzing complex imaging data in combination with clinical data is not trivial, and supporting tools aiding in these exploratory analyses are presently missing. In this paper, we present an approach that enables the analysis of multiparametric medical imaging data in combination with numerical, ordinal, and categorical clinical parameters, to validate established biomarkers and unravel novel ones. We propose a hybrid approach in which dimensionality reduction to a single axis is combined with multiple linked views, allowing clinical experts to formulate hypotheses based on all available imaging data and clinical parameters. This may help to reveal novel tumor characteristics in relation to molecular targets for treatment, thus providing better tools for enabling more personalized targeted treatment strategies. To confirm the utility of our approach, we collaborated closely with experts in gynecological cancer imaging and conducted an evaluation with six experts in this field.
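The core idea of reducing high-dimensional radiomic features to a single shared axis can be sketched with a first-principal-component projection. This is only an assumed illustration (the paper's actual reduction technique is not specified in the abstract), with hypothetical toy data standing in for radiomic features and a clinical marker.

```python
import numpy as np

def project_to_axis(features: np.ndarray) -> np.ndarray:
    """Project each patient's radiomic feature vector onto the first
    principal component, yielding one scalar coordinate per patient."""
    X = features - features.mean(axis=0)
    # SVD of the centered data matrix; the first right-singular vector is PC1
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[0]

# Hypothetical data: 5 patients, 3 radiomic features, one ordinal clinical marker
features = np.array([[1.0,  2.0, 0.5],
                     [2.0,  4.1, 1.0],
                     [3.0,  6.0, 1.4],
                     [4.0,  8.2, 2.1],
                     [5.0, 10.0, 2.4]])
tumor_grade = np.array([1, 1, 2, 2, 3])

axis = project_to_axis(features)
order = np.argsort(axis)  # patients ordered along the shared 1D axis
```

In a linked-view setting, this 1D coordinate would serve as the common ordering against which the clinical parameters (here, `tumor_grade`) and the individual imaging parameters are plotted side by side.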

Colorization of Line Drawings with Empty Pupils (2020)
https://diglib.eg.org:443/handle/10.1111/cgf14171
Akita, Kenta; Morimoto, Yuki; Tsuruno, Reiji
Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue
Many studies have recently applied deep learning to the automatic colorization of line drawings. However, it is difficult to paint empty pupils using existing methods, because the convolutional neural networks are trained with pupils that have edges, which are generated from color images using image processing. Most actual line drawings have empty pupils that artists must paint in. In this paper, we propose a novel network model that transfers the pupil details in a reference color image to input line drawings with empty pupils. We also propose a method for accurately and automatically colorizing eyes. In this method, eye patches are extracted from a reference color image and automatically added to an input line drawing as color hints using our pupil position estimation network.
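The patch-as-hint step can be sketched without the networks involved. Assuming pupil positions have already been estimated (in the paper, by the pupil position estimation network), the sketch below simply crops an eye patch from the reference image and places it on a hint canvas aligned with the line drawing; all names and sizes are hypothetical.

```python
import numpy as np

def add_eye_hint(line_art: np.ndarray, reference: np.ndarray,
                 ref_center: tuple, target_center: tuple, size: int = 8):
    """Copy a (size x size) eye patch from a reference color image into a
    color-hint canvas aligned with the line drawing.

    line_art:  (H, W) grayscale line drawing
    reference: (H', W', 3) reference color image
    Returns an (H, W, 3) canvas that is white except for the pasted patch.
    """
    half = size // 2
    ry, rx = ref_center
    ty, tx = target_center
    patch = reference[ry - half:ry + half, rx - half:rx + half]
    hint = np.full((*line_art.shape, 3), 255, dtype=np.uint8)
    hint[ty - half:ty + half, tx - half:tx + half] = patch
    return hint

# Hypothetical usage: 32x32 drawing, reference with a colored "eye" region
line = np.zeros((32, 32), dtype=np.uint8)
ref = np.zeros((32, 32, 3), dtype=np.uint8)
ref[12:20, 12:20] = (200, 60, 60)
hint = add_eye_hint(line, ref, ref_center=(16, 16), target_center=(10, 10))
```

The resulting hint image would then be fed to the colorization network alongside the line drawing, in the spirit of reference-based hint colorization.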