VCBM 2024: Eurographics Workshop on Visual Computing for Biology and Medicine

Held at Otto-von-Guericke University Magdeburg, Germany, on September 19–20, 2024.
Medical Visualization and Surgical Assistance
CBCTLiTS: A Synthetic, Paired CBCT/CT Dataset For Segmentation And Style Transfer
Maximilian Ernst Tschuchnig, Philipp Steininger, and Michael Gadermayr
Potential of Tetrahedral Markers for Infrared Pose Tracking in Surgical Navigation
Erik Immoor and Tom L. Koller
Algorithmic Integration and Quantification of Endoscopic and 3D TEE Images in Mitral Valve Surgery
Matthias Ivantsits, Markus Huellebrand, Lars Walczak, Juri Welz, Dustin Greve, Isaac Wamala, Simon Suendermann, Jörg Kempfert, Volkmar Falk, and Anja Hennemuth
Image Processing and Machine Learning
Virtually Objective Quantification of in vitro Wound Healing Scratch Assays with the Segment Anything Model
Katja Löwenstein, Johanna Rehrl, Anja Schuster, and Michael Gadermayr
Exploring Drusen Type and Appearance using Interpretable GANs
Christian Muth, Olivier Morelle, Renata Georgia Raidou, Maximilian W.M. Wintergerst, Robert P. Finger, and Thomas Schultz
Workflow for AI-Supported Stenosis Prediction in X-Ray Coronary Angiography for SYNTAX Score Calculation
Antonia Popp, Alaa Abd El Al, Marie Hoffmann, Ann Laube, Jörg Kempfert, Anja Hennemuth, and Alexander Meyer
Immersive Visualization and Interaction
Development and Analysis of a Pipeline for Cardiac Ultrasound Simulation for Deep Learning Segmentation Methods
Marcel Bauer, Chiara Manini, Stefan Klemmer, Tom Meyer, Matthias Ivantsits, Lars Walczak, Anja Hennemuth, and Heiko Tzschätzsch
VISPER - Visualization System for Interactions between Proteins and Drugs for Exploratory Research
Daniel Dehncke, Vinzenz Fiebach, Lennart Kinzel, Knut Baumann, and Tim Kacprowski
CardioCoLab: Collaborative Learning of Embryonic Heart Anatomy in Mixed Reality
Danny Schott, Florian Heinrich, Matthias Kunz, Jonas Mandel, Anne Albrecht, Rüdiger Braun-Dullaeus, and Christian Hansen
Health Communication and Patient Reporting
Why, What, and How to Communicate Health Information Visually: Reflections on the Design Process of Narrative Medical Visualization
Sarah Mittenentzwei, Bernhard Preim, and Monique Meuschke
Leaving the Lab Setting: What We Can Learn About the Perception of Narrative Medical Visualizations from YouTube Comments
Sarah Mittenentzwei, Danish Murad, Bernhard Preim, and Monique Meuschke
The MoBa Pregnancy and Child Development Dashboard: A Design Study
Roxanne Ziman, Beatrice Budich, Marc Vaudel, and Laura Garrison

BibTeX (VCBM 2024: Eurographics Workshop on Visual Computing for Biology and Medicine)
@inproceedings{10.2312:vcbm.20242021,
  booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
  editor = {Garrison, Laura and Jönsson, Daniel},
  title = {{Eurographics Workshop on Visual Computing for Biology and Medicine: Short Papers Frontmatter}},
  author = {Garrison, Laura and Jönsson, Daniel},
  year = {2024},
  publisher = {The Eurographics Association},
  ISSN = {2070-5786},
  ISBN = {978-3-03868-244-8},
  DOI = {10.2312/vcbm.20242021}
}
@inproceedings{10.2312:vcbm.20241183,
  booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
  editor = {Garrison, Laura and Jönsson, Daniel},
  title = {{CBCTLiTS: A Synthetic, Paired CBCT/CT Dataset For Segmentation And Style Transfer}},
  author = {Tschuchnig, Maximilian Ernst and Steininger, Philipp and Gadermayr, Michael},
  year = {2024},
  publisher = {The Eurographics Association},
  ISSN = {2070-5786},
  ISBN = {978-3-03868-244-8},
  DOI = {10.2312/vcbm.20241183}
}
@inproceedings{10.2312:vcbm.20241184,
  booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
  editor = {Garrison, Laura and Jönsson, Daniel},
  title = {{Potential of Tetrahedral Markers for Infrared Pose Tracking in Surgical Navigation}},
  author = {Immoor, Erik and Koller, Tom L.},
  year = {2024},
  publisher = {The Eurographics Association},
  ISSN = {2070-5786},
  ISBN = {978-3-03868-244-8},
  DOI = {10.2312/vcbm.20241184}
}
@inproceedings{10.2312:vcbm.20241185,
  booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
  editor = {Garrison, Laura and Jönsson, Daniel},
  title = {{Algorithmic Integration and Quantification of Endoscopic and 3D TEE Images in Mitral Valve Surgery}},
  author = {Ivantsits, Matthias and Huellebrand, Markus and Walczak, Lars and Welz, Juri and Greve, Dustin and Wamala, Isaac and Suendermann, Simon and Kempfert, Jörg and Falk, Volkmar and Hennemuth, Anja},
  year = {2024},
  publisher = {The Eurographics Association},
  ISSN = {2070-5786},
  ISBN = {978-3-03868-244-8},
  DOI = {10.2312/vcbm.20241185}
}
@inproceedings{10.2312:vcbm.20241186,
  booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
  editor = {Garrison, Laura and Jönsson, Daniel},
  title = {{Virtually Objective Quantification of in vitro Wound Healing Scratch Assays with the Segment Anything Model}},
  author = {Löwenstein, Katja and Rehrl, Johanna and Schuster, Anja and Gadermayr, Michael},
  year = {2024},
  publisher = {The Eurographics Association},
  ISSN = {2070-5786},
  ISBN = {978-3-03868-244-8},
  DOI = {10.2312/vcbm.20241186}
}
@inproceedings{10.2312:vcbm.20241187,
  booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
  editor = {Garrison, Laura and Jönsson, Daniel},
  title = {{Exploring Drusen Type and Appearance using Interpretable GANs}},
  author = {Muth, Christian and Morelle, Olivier and Raidou, Renata Georgia and Wintergerst, Maximilian W. M. and Finger, Robert P. and Schultz, Thomas},
  year = {2024},
  publisher = {The Eurographics Association},
  ISSN = {2070-5786},
  ISBN = {978-3-03868-244-8},
  DOI = {10.2312/vcbm.20241187}
}
@inproceedings{10.2312:vcbm.20241188,
  booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
  editor = {Garrison, Laura and Jönsson, Daniel},
  title = {{Workflow for AI-Supported Stenosis Prediction in X-Ray Coronary Angiography for SYNTAX Score Calculation}},
  author = {Popp, Antonia and El Al, Alaa Abd and Hoffmann, Marie and Laube, Ann and Kempfert, Jörg and Hennemuth, Anja and Meyer, Alexander},
  year = {2024},
  publisher = {The Eurographics Association},
  ISSN = {2070-5786},
  ISBN = {978-3-03868-244-8},
  DOI = {10.2312/vcbm.20241188}
}
@inproceedings{10.2312:vcbm.20241189,
  booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
  editor = {Garrison, Laura and Jönsson, Daniel},
  title = {{Development and Analysis of a Pipeline for Cardiac Ultrasound Simulation for Deep Learning Segmentation Methods}},
  author = {Bauer, Marcel and Manini, Chiara and Klemmer, Stefan and Meyer, Tom and Ivantsits, Matthias and Walczak, Lars and Hennemuth, Anja and Tzschätzsch, Heiko},
  year = {2024},
  publisher = {The Eurographics Association},
  ISSN = {2070-5786},
  ISBN = {978-3-03868-244-8},
  DOI = {10.2312/vcbm.20241189}
}
@inproceedings{10.2312:vcbm.20241190,
  booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
  editor = {Garrison, Laura and Jönsson, Daniel},
  title = {{VISPER - Visualization System for Interactions between Proteins and Drugs for Exploratory Research}},
  author = {Dehncke, Daniel and Fiebach, Vinzenz and Kinzel, Lennart and Baumann, Knut and Kacprowski, Tim},
  year = {2024},
  publisher = {The Eurographics Association},
  ISSN = {2070-5786},
  ISBN = {978-3-03868-244-8},
  DOI = {10.2312/vcbm.20241190}
}
@inproceedings{10.2312:vcbm.20241191,
  booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
  editor = {Garrison, Laura and Jönsson, Daniel},
  title = {{CardioCoLab: Collaborative Learning of Embryonic Heart Anatomy in Mixed Reality}},
  author = {Schott, Danny and Heinrich, Florian and Kunz, Matthias and Mandel, Jonas and Albrecht, Anne and Braun-Dullaeus, Rüdiger and Hansen, Christian},
  year = {2024},
  publisher = {The Eurographics Association},
  ISSN = {2070-5786},
  ISBN = {978-3-03868-244-8},
  DOI = {10.2312/vcbm.20241191}
}
@inproceedings{10.2312:vcbm.20241192,
  booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
  editor = {Garrison, Laura and Jönsson, Daniel},
  title = {{Why, What, and How to Communicate Health Information Visually: Reflections on the Design Process of Narrative Medical Visualization}},
  author = {Mittenentzwei, Sarah and Preim, Bernhard and Meuschke, Monique},
  year = {2024},
  publisher = {The Eurographics Association},
  ISSN = {2070-5786},
  ISBN = {978-3-03868-244-8},
  DOI = {10.2312/vcbm.20241192}
}
@inproceedings{10.2312:vcbm.20241193,
  booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
  editor = {Garrison, Laura and Jönsson, Daniel},
  title = {{Leaving the Lab Setting: What We Can Learn About the Perception of Narrative Medical Visualizations from YouTube Comments}},
  author = {Mittenentzwei, Sarah and Murad, Danish and Preim, Bernhard and Meuschke, Monique},
  year = {2024},
  publisher = {The Eurographics Association},
  ISSN = {2070-5786},
  ISBN = {978-3-03868-244-8},
  DOI = {10.2312/vcbm.20241193}
}
@inproceedings{10.2312:vcbm.20241194,
  booktitle = {Eurographics Workshop on Visual Computing for Biology and Medicine},
  editor = {Garrison, Laura and Jönsson, Daniel},
  title = {{The MoBa Pregnancy and Child Development Dashboard: A Design Study}},
  author = {Ziman, Roxanne and Budich, Beatrice and Vaudel, Marc and Garrison, Laura},
  year = {2024},
  publisher = {The Eurographics Association},
  ISSN = {2070-5786},
  ISBN = {978-3-03868-244-8},
  DOI = {10.2312/vcbm.20241194}
}

Recent Submissions

  • Item
    Eurographics Workshop on Visual Computing for Biology and Medicine: Short Papers Frontmatter
    (The Eurographics Association, 2024) Garrison, Laura; Jönsson, Daniel; Garrison, Laura; Jönsson, Daniel
  • Item
    CBCTLiTS: A Synthetic, Paired CBCT/CT Dataset For Segmentation And Style Transfer
    (The Eurographics Association, 2024) Tschuchnig, Maximilian Ernst; Steininger, Philipp; Gadermayr, Michael; Garrison, Laura; Jönsson, Daniel
    Medical imaging is vital in computer-assisted intervention. In particular, cone beam computed tomography (CBCT), with de facto real-time and mobility capabilities, plays an important role. However, CBCT images often suffer from artifacts, which pose challenges for accurate interpretation, motivating research in advanced algorithms for more effective use in clinical practice. In this work we present CBCTLiTS, a synthetically generated, labelled CBCT dataset for segmentation with paired and aligned, high-quality computed tomography data. The CBCT data is provided in five levels of quality, ranging from a large number of projections with high visual quality and mild artifacts to a small number of projections with severe artifacts. This allows thorough investigations with the quality as a degree of freedom. We also provide baselines for several possible research scenarios, such as uni- and multimodal segmentation, multitask learning, and style transfer followed by segmentation, covering relatively simple liver segmentation as well as complex liver tumor segmentation. CBCTLiTS is accessible via https://www.kaggle.com/datasets/maximiliantschuchnig/cbct-liver-and-liver-tumor-segmentation-train-data.
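    The dataset can also be retrieved programmatically. Below is a minimal sketch using the official Kaggle API client, assuming the kaggle package is installed and an API token is configured; the local target directory name is a placeholder.

    # Minimal sketch: download CBCTLiTS via the official Kaggle API client.
    # Assumes `pip install kaggle` and a configured ~/.kaggle/kaggle.json token.
    from kaggle.api.kaggle_api_extended import KaggleApi

    api = KaggleApi()
    api.authenticate()
    api.dataset_download_files(
        "maximiliantschuchnig/cbct-liver-and-liver-tumor-segmentation-train-data",
        path="cbctlits",   # local target directory (placeholder)
        unzip=True,
    )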
  • Item
    Potential of Tetrahedral Markers for Infrared Pose Tracking in Surgical Navigation
    (The Eurographics Association, 2024) Immoor, Erik; Koller, Tom L.; Garrison, Laura; Jönsson, Daniel
    Optical tracking systems predominantly rely on spherical retro-reflective markers, requiring a minimum of three fiducials to achieve a full six-degree-of-freedom (6D) pose estimation. Despite the potential benefits of a single non-spherical fiducial for 6D pose estimation, this approach has received limited attention in the literature. This study investigates the feasibility of nonspherical retro-reflective markers, specifically tetrahedral markers, as alternatives to spherical fiducials. Using Blender for simulation and digital post-processing, stereo images of both spherical and tetrahedral markers were generated. The standard marker tracking is adapted to use the tetrahedrons corners instead of sphere centers. Results indicate that while spherical markers provide slightly more precise tracking in the simulated scenario, tetrahedral markers offer advantages in practical applications, such as an enhanced range of motion. These findings suggest that non-spherical markers warrant further exploration for their potential to improve optical tracking systems in real-world settings.
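    The adapted tracking step, estimating a 6D pose from corresponding tetrahedron corners rather than sphere centers, can be illustrated with a standard rigid (Kabsch) alignment of 3D point sets. This is a generic sketch, not the authors' implementation, and the corner coordinates are hypothetical.

    # Generic sketch: 6D pose (rotation R, translation t) from >= 3 corresponding 3D points
    # via the Kabsch algorithm. Not the authors' implementation; coordinates are hypothetical.
    import numpy as np

    def rigid_pose(model_pts, observed_pts):
        """Return R (3x3) and t (3,) such that observed ~= R @ model + t."""
        mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
        H = (model_pts - mc).T @ (observed_pts - oc)                  # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
        R = Vt.T @ D @ U.T
        return R, oc - R @ mc

    # Hypothetical tetrahedron corners (marker frame) and their triangulated observations.
    model = np.array([[0, 0, 0], [30, 0, 0], [15, 26, 0], [15, 9, 24]], dtype=float)
    observed = model + np.array([5.0, -2.0, 100.0])   # toy case: pure translation
    R, t = rigid_pose(model, observed)                # R ~= identity, t ~= applied shift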
  • Item
    Algorithmic Integration and Quantification of Endoscopic and 3D TEE Images in Mitral Valve Surgery
    (The Eurographics Association, 2024) Ivantsits, Matthias; Huellebrand, Markus; Walczak, Lars; Welz, Juri; Greve, Dustin; Wamala, Isaac; Suendermann, Simon; Kempfert, Jörg; Falk, Volkmar; Hennemuth, Anja; Garrison, Laura; Jönsson, Daniel
    Minimally invasive surgery is the state-of-the-art approach for repairing the mitral valve, which controls the blood flow into the left heart chamber. Surgeons rely on camera and sensor technologies to support visualization, navigation, and measurement. As patients are connected to cardiopulmonary bypass, the anatomy is severely deformed by the altered pressure conditions. We developed a technique that combines stereo-endoscopic video with three-dimensional transesophageal echocardiography (3D TEE) to improve anatomic visualization and measurement accuracy during mitral valve repairs. Our methodology includes stereo camera calibration, image segmentation, and 3D model reconstruction. Anatomical landmarks are used to align the imaging modalities. This approach allows the visualization of pre-operatively determined mitral valve properties, e.g., overlaying heat maps on the stereo-endoscopic data. Our validation results showed high precision and accuracy within an error range of 0.5 ± 0.1 mm. The effectiveness of the heat map visualization in complex prolapse cases varied. Integrating stereo-endoscopic and 3D TEE imaging promises greater precision in mitral valve repairs. In the future, this approach can also be used to visualize local tissue properties or the optimal locations of implants.
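    As an illustration of the stereo reconstruction step, the sketch below triangulates matched stereo-endoscopic landmarks into 3D with OpenCV. It is a generic sketch rather than the authors' pipeline: the projection matrices and pixel correspondences are placeholders standing in for calibrated and segmented inputs.

    # Generic sketch: triangulate matched stereo landmarks into 3D points with OpenCV.
    # P1/P2 and the pixel correspondences are placeholders, not calibrated values.
    import numpy as np
    import cv2

    P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float32)                   # left camera
    P2 = np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])]).astype(np.float32)  # right camera

    pts_left = np.array([[320.0, 240.0], [400.0, 260.0]], dtype=np.float32).T   # 2xN pixels
    pts_right = np.array([[310.0, 240.0], [388.0, 260.0]], dtype=np.float32).T

    hom = cv2.triangulatePoints(P1, P2, pts_left, pts_right)   # 4xN homogeneous coordinates
    landmarks_3d = (hom[:3] / hom[3]).T                        # Nx3 Euclidean landmark positions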
  • Item
    Virtually Objective Quantification of in vitro Wound Healing Scratch Assays with the Segment Anything Model
    (The Eurographics Association, 2024) Löwenstein, Katja; Rehrl, Johanna; Schuster, Anja; Gadermayr, Michael; Garrison, Laura; Jönsson, Daniel
    The in vitro scratch assay is a widely used assay in cell biology to assess the rate of wound closure related to a variety of therapeutic interventions. While manual measurement is subjective and vulnerable to intra- and interobserver variability, computer-based tools are theoretically objective, but in practice often contain parameters which are manually adjusted (individually per image or data set) and thereby provide a source of subjectivity. Modern deep learning approaches typically require large amounts of annotated training data, which complicates instant applicability. In this paper, we make use of the Segment Anything Model, a deep foundation model based on interactive point prompts, which enables class-agnostic segmentation without tuning the network's parameters on domain-specific training data. The proposed method clearly outperformed a semi-objective baseline method that required manual inspection and, if necessary, adjustment of parameters per image. Even though the point prompts of the proposed approach are theoretically also a source of subjectivity, results showed very low intra- and interobserver variability, even compared to manual segmentation by domain experts.
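    A minimal sketch of the point-prompted, class-agnostic segmentation described above, using the publicly released segment_anything package; the checkpoint path, input image, and click coordinates are placeholders, not the authors' exact setup.

    # Minimal sketch: point-prompted segmentation of a scratch assay image with SAM.
    # Assumes `pip install segment-anything` and a downloaded ViT-B checkpoint.
    import numpy as np
    import cv2
    from segment_anything import sam_model_registry, SamPredictor

    sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")   # checkpoint path is a placeholder
    predictor = SamPredictor(sam)

    image = cv2.cvtColor(cv2.imread("scratch_assay.png"), cv2.COLOR_BGR2RGB)   # hypothetical image
    predictor.set_image(image)

    # One foreground click inside the cell-free scratch region (coordinates are hypothetical).
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[512, 384]]),
        point_labels=np.array([1]),
        multimask_output=True,
    )
    scratch_area_px = int(masks[scores.argmax()].sum())   # wound area in pixels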
  • Item
    Exploring Drusen Type and Appearance using Interpretable GANs
    (The Eurographics Association, 2024) Muth, Christian; Morelle, Olivier; Raidou, Renata Georgia; Wintergerst, Maximilian W. M.; Finger, Robert P.; Schultz, Thomas; Garrison, Laura; Jönsson, Daniel
    We propose an algorithmic pipeline that uses interpretable Generative Adversarial Networks (GANs) to visualize the variability of the visual appearance of drusen in Optical Coherence Tomography (OCT). Drusen are accumulations of extracellular debris between Bruch's membrane and the retinal pigment epithelium of the eye. They are a hallmark of age-related macular degeneration (AMD), the most common cause of vision loss in the elderly. Imaging the morphology of drusen with OCT reveals different subtypes, which might have different relevance for disease severity and the risk of progression. We compare two GAN architectures and three recently proposed methods for the unsupervised discovery of interpretable paths in their latent space with respect to their ability to visualize natural variations in drusen appearance. We also introduce a color code that indicates generated images that extrapolate beyond the training data and should, therefore, be interpreted with caution. Our results suggest that, even when trained on cross-sectional data, GANs can recover smooth and anatomically plausible variations of drusen that are in agreement with changes over time that are known from longitudinal observations.
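    The latent-path visualization can be pictured with a short traversal sketch: shift a latent code along one discovered interpretable direction and decode each step. The generator, latent dimensionality, and direction below are hypothetical stand-ins, not the architectures or discovery methods compared in the paper.

    # Generic sketch: traverse a GAN latent space along one interpretable direction.
    # `generator`, the latent size, and `direction` are hypothetical stand-ins.
    import torch

    @torch.no_grad()
    def traverse(generator, z, direction, steps=(-3.0, -1.5, 0.0, 1.5, 3.0)):
        """Decode z shifted along `direction` by several step sizes (one image per step)."""
        return torch.cat([generator(z + alpha * direction) for alpha in steps], dim=0)

    latent_dim = 128                             # assumed latent dimensionality
    z = torch.randn(1, latent_dim)               # one sampled drusen latent code
    direction = torch.randn(latent_dim)
    direction = direction / direction.norm()     # placeholder for a discovered direction
    # images = traverse(generator, z, direction) # generator: a trained torch.nn.Module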
  • Item
    Workflow for AI-Supported Stenosis Prediction in X-Ray Coronary Angiography for SYNTAX Score Calculation
    (The Eurographics Association, 2024) Popp, Antonia; El Al, Alaa Abd; Hoffmann, Marie; Laube, Ann; Kempfert, Jörg; Hennemuth, Anja; Meyer, Alexander; Garrison, Laura; Jönsson, Daniel
    X-ray coronary angiography is the primary imaging modality for evaluating coronary artery disease. The visual assessment of angiography videos in clinical routines is time-consuming, requires expert experience, and lacks standardization. This complicates the calculation of the SYNTAX score, a recommended instrument for therapy decision-making. In this work, we propose an end-to-end pipeline for segment-wise stenosis prediction in multi-view angiography videos to facilitate the calculation of the SYNTAX score. While recent approaches mainly focus on stenosis detection at the frame or video level, our method is developed and evaluated for stenosis prediction at the patient level. The pipeline is composed as follows: (1) Selection of frames showing arteries filled with contrast medium using a convolutional neural network, (2) Stenosis detection and segment labelling on selected frames using a region-based convolutional neural network for object detection, (3) Linkage of detected regions showing the same stenosis by tracking the optical flow of the detections in the angiography video, (4) Segment assignment to the detected and tracked stenosis to predict stenotic segments at the patient level. The workflow is adjusted and evaluated using the image data and diagnostic annotations of 219 patients with multi-vessel coronary artery disease from the German Heart Center of the Charité University Hospital (DHZC), Berlin. To fine-tune the models, we used manually flagged frames for the frame classification model and bounding box annotations provided by a cardiac expert for the stenosis detection model. For the segment-wise prediction of all patients, we achieved a total sensitivity of 56.41, specificity of 85.88, precision of 52.81, and F1 score of 54.55, with varying results for the 25 coronary segments. The established workflow can facilitate the visual assessment of coronary artery disease in angiography videos and increase accuracy and precision in clinical diagnostics.
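    Step (3) of the pipeline, linking detections of the same stenosis across frames, can be illustrated with pyramidal Lucas-Kanade optical flow in OpenCV. This is a generic sketch, not the authors' code; frames and box centers are hypothetical inputs.

    # Generic sketch of step (3): propagate a detected stenosis box center to the next
    # angiography frame with pyramidal Lucas-Kanade optical flow (OpenCV).
    import numpy as np
    import cv2

    def track_center(prev_gray, next_gray, center_xy):
        """Return the center position in the next frame, or None if tracking fails."""
        p0 = np.array([[center_xy]], dtype=np.float32)           # shape (1, 1, 2)
        p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None,
                                                 winSize=(21, 21), maxLevel=3)
        return tuple(p1[0, 0]) if status[0, 0] == 1 else None

    # A detection in frame t is then linked to a detection in frame t+1 whose bounding box
    # contains the propagated center, merging per-frame detections into one stenosis track.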
  • Item
    Development and Analysis of a Pipeline for Cardiac Ultrasound Simulation for Deep Learning Segmentation Methods
    (The Eurographics Association, 2024) Bauer, Marcel; Manini, Chiara; Klemmer, Stefan; Meyer, Tom; Ivantsits, Matthias; Walczak, Lars; Hennemuth, Anja; Tzschätzsch, Heiko; Garrison, Laura; Jönsson, Daniel
    Accurate and efficient segmentation of anatomical structures in medical images, e.g., ultrasound images, is crucial for diagnosis. Deep learning methods can provide automatic, reproducible segmentation, and simulation of medical images with their intrinsic ground truth could help to develop and tune these methods. We introduce a simulation pipeline for the example of mitral valve segmentation in transesophageal echocardiography (TEE) images, including different valve opening states. As anatomical ground truth, we used a CT-based patient phantom with simulated mitral valve closure. For each region within the phantom, scatter intensities and reflections between tissue boundaries were set, and ultrasound images were simulated with incorporation of attenuation and noise. To further improve the realism of the simulated images, a speckle reduction filter was used. The adjustments applied to improve realism were assessed by testing the segmentation performance (including the Dice score) of a deep learning method trained on real TEE data. The initial Dice score for the simulation was 31 %. This value increased with image post-processing (37 %), exclusion of surrounding cardiac structures (45 %), and the combination of both (46 %). In comparison, the initial Dice score for real TEE was 72 %. On both simulated and real TEE images, the deep learning method performed better on fully closed valve states (42 % and 77 %) than on fully open valves (27 % and 66 %). This work introduced a novel pipeline for the realistic simulation of TEE images with different valve opening states. Our analysis demonstrated the feasibility of the proposed pipeline and highlighted the importance of accurate and dynamic valve phantoms, comprehensive simulations, and specific post-processing for the simulation of realistic TEE images. In the future, with further improvements of the simulation, we will evaluate the pipeline for the training of deep learning methods on simulated data for application to real data.
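    The Dice score used above is the standard overlap measure 2|A ∩ B| / (|A| + |B|) between a predicted and a reference mask; a minimal numpy sketch with hypothetical binary masks:

    # Minimal sketch: Dice similarity coefficient between two binary masks (hypothetical data).
    import numpy as np

    def dice_score(pred, ref, eps=1e-8):
        """Dice = 2 * |A intersect B| / (|A| + |B|) for boolean masks."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        intersection = np.logical_and(pred, ref).sum()
        return float(2.0 * intersection / (pred.sum() + ref.sum() + eps))

    pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True   # hypothetical prediction
    ref = np.zeros((64, 64), dtype=bool);  ref[15:45, 15:45] = True    # hypothetical reference
    print(f"Dice: {dice_score(pred, ref):.2f}")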
  • Item
    VISPER - Visualization System for Interactions between Proteins and Drugs for Exploratory Research
    (The Eurographics Association, 2024) Dehncke, Daniel; Fiebach, Vinzenz; Kinzel, Lennart; Baumann, Knut; Kacprowski, Tim; Garrison, Laura; Jönsson, Daniel
    VISPER is a web-based application that enables users to interactively explore and analyze drug-protein associations. Its uniqueness lies in the dataset for which it has been specifically designed. Until now, most biomarkers for cancer vulnerabilities have primarily relied on genomic and transcriptomic measurements. A recently published study created a comprehensive pan-cancer proteomic map of human cancer cell lines, involving the application of 625 drugs to these cell lines. From these data, proteomic responses to the drug treatment across different cell lines can be derived, providing an extensive resource for a better understanding of drug mechanisms. To facilitate the analysis of this extensive dataset, we developed VISPER, a visualization tool specifically tailored to explore the ProCan dataset, enabling easy exploration of the relationships between proteins, drugs, and cell lines through a network graph representation. The graphical representation is complemented by a wide range of filter options, different representations, and integration of existing online databases for improved biological classification. Furthermore, the web application provides a clear overview of the similarity of drugs based on their protein associations. VISPER thus represents a promising addition to established systems biology software tools. Availability and implementation: VISPER is available open-source on GitHub (https://github.com/scibiome/VISPER) or as a Docker image (https://hub.docker.com/r/thegoldenphoenix/VISPER).
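    The network-graph representation at the core of VISPER can be pictured with a small networkx sketch; the drugs, proteins, and association weights below are made-up examples, not the ProCan data or the tool's actual data model.

    # Minimal sketch: a drug-protein association network of the kind VISPER visualizes.
    # Nodes and weights are made-up examples, not the ProCan dataset.
    import networkx as nx

    G = nx.Graph()
    associations = [
        ("DrugA", "ProteinX", 0.82),   # (drug, protein, association strength)
        ("DrugA", "ProteinY", 0.41),
        ("DrugB", "ProteinX", 0.67),
    ]
    for drug, protein, weight in associations:
        G.add_node(drug, kind="drug")
        G.add_node(protein, kind="protein")
        G.add_edge(drug, protein, weight=weight)

    # Drugs become comparable through shared protein associations, e.g. common neighbors:
    shared_proteins = sorted(nx.common_neighbors(G, "DrugA", "DrugB"))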
  • Item
    CardioCoLab: Collaborative Learning of Embryonic Heart Anatomy in Mixed Reality
    (The Eurographics Association, 2024) Schott, Danny; Heinrich, Florian; Kunz, Matthias; Mandel, Jonas; Albrecht, Anne; Braun-Dullaeus, Rüdiger; Hansen, Christian; Garrison, Laura; Jönsson, Daniel
    The complexity of embryonic heart development presents significant challenges for medical education, particularly in illustrating dynamic morphological changes over short time periods. Traditional teaching methods, such as 2D textbook illustrations and static models, are often insufficient for conveying these intricate processes. To address this gap, we developed a multi-user Mixed Reality (MR) system designed to enhance collaborative learning and interaction with virtual heart models. Building on previous research, we identified the needs of both students and teachers, implementing various interaction and visualization features iteratively. An evaluation with teachers and students (N = 12) demonstrated the system's effectiveness in improving engagement and understanding of embryonic heart development. The study highlights the potential of MR in medical seminar settings as a valuable addition to medical education by enhancing traditional learning methods.
  • Item
    Why, What, and How to Communicate Health Information Visually: Reflections on the Design Process of Narrative Medical Visualization
    (The Eurographics Association, 2024) Mittenentzwei, Sarah; Preim, Bernhard; Meuschke, Monique; Garrison, Laura; Jönsson, Daniel
    Narrative visualization is an effective technique to convey information to a lay audience in an engaging, memorable, and persuasive manner. In the medical domain, we have experienced high interest in narrative medical visualizations from clinicians and epidemiologists, as storytelling built on medical data is a promising approach to conveying complex medical topics in patient education and public health. These endeavors from the computer science domain are mirrored by the interdisciplinary research topic of health communication. With this work, we reflect on our past experiences by (1) showing where narrative medical visualization is applicable to solve problems clinicians face in their work, (2) summarizing all findings within a story design process, describing the key points in creating a story and how they relate to each other, and (3) highlighting parallels and insights from health communication research that can improve future narrative medical visualizations. In doing so, we aim to provide the research community with a toolkit to support the design of narrative medical visualizations.
  • Item
    Leaving the Lab Setting: What We Can Learn About the Perception of Narrative Medical Visualizations from YouTube Comments
    (The Eurographics Association, 2024) Mittenentzwei, Sarah; Murad, Danish; Preim, Bernhard; Meuschke, Monique; Garrison, Laura; Jönsson, Daniel
    The general public is highly interested in medical information, particularly educational media about diseases, healthy biological processes such as pregnancy, and surgical procedures. Efforts to develop educational materials using data-driven approaches like narrative visualization exist, but studies are often performed in lab settings. Since there are few public sources for visualizations of medical image data, YouTube videos, which often contain 3D medical visualizations, are an important reference. We aim to better understand the user base of these videos. Therefore, we curated a dataset of 76 videos featuring medical 3D visualizations. We analyzed 14,550 comments across all videos using manual review and machine learning techniques, including natural language processing for sentiment and emotion analysis of user comments. While few comments directly link visual attributes or design choices to user sentiment, insights into users' motivation and opinions of specific design choices have emerged.
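    The sentiment part of the comment analysis can be sketched with an off-the-shelf classifier. The example below uses the Hugging Face transformers pipeline with its default English sentiment model and made-up comments; it is not necessarily the tooling used in the paper.

    # Minimal sketch: sentiment scoring of video comments with the `transformers` pipeline.
    # Comments are made up; the default model is an assumption, not the paper's setup.
    from transformers import pipeline

    sentiment = pipeline("sentiment-analysis")
    comments = [
        "This 3D animation finally made the procedure understandable.",
        "The narration was too fast and the colors were confusing.",
    ]
    for comment, result in zip(comments, sentiment(comments)):
        print(f"{result['label']:>8} ({result['score']:.2f}): {comment}")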
  • Item
    The MoBa Pregnancy and Child Development Dashboard: A Design Study
    (The Eurographics Association, 2024) Ziman, Roxanne; Budich, Beatrice; Vaudel, Marc; Garrison, Laura; Garrison, Laura; Jönsson, Daniel
    Visual analytics dashboards enable exploration of complex medical and genetic data to uncover underlying patterns and possible relationships between conditions and outcomes. In this interdisciplinary design study, we present a characterization of the domain and expert tasks for the exploratory analysis for a rare maternal disease in the context of the longitudinal Norwegian Mother, Father, and Child (MoBa) Cohort Study. We furthermore present a novel prototype dashboard, developed through an iterative design process and using the Python-based Streamlit App [TTK18] and Vega-Altair [VGH*18] visualization library, to allow domain experts (e.g., bioinformaticians, clinicians, statisticians) to explore possible correlations between women's health during pregnancy and child development outcomes. In conclusion, we reflect on several challenges and research opportunities for not only furthering this approach, but in visualization more broadly for large, complex, and sensitive patient datasets to support clinical research.