Search Results
Now showing 1 - 4 of 4
Item: BI‐LAVA: Biocuration With Hierarchical Image Labelling Through Active Learning and Visual Analytics (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Trelles, Juan; Wentzel, Andrew; Berrios, William; Shatkay, Hagit; Marai, G. Elisabeta
In the biomedical domain, taxonomies organize the acquisition modalities of scientific images in hierarchical structures. Such taxonomies leverage large sets of correct image labels and provide essential information about the importance of a scientific publication, which could then be used in biocuration tasks. However, the hierarchical nature of the labels, the overhead of processing images, the absence or incompleteness of labelled data and the expertise required to label this type of data impede the creation of useful datasets for biocuration. From a multi‐year collaboration with biocurators and text‐mining researchers, we derive an iterative visual analytics and active learning (AL) strategy to address these challenges. We implement this strategy in a system called BI‐LAVA (Biocuration with Hierarchical Image Labelling through Active Learning and Visual Analytics). BI‐LAVA leverages a small set of image labels, a hierarchical set of image classifiers and AL to help model builders deal with incomplete ground‐truth labels, target a hierarchical taxonomy of image modalities and classify a large pool of unlabelled images. BI‐LAVA's front end uses custom encodings to represent data distributions, taxonomies, image projections and neighbourhoods of image thumbnails, which help model builders explore an unfamiliar image dataset and taxonomy and correct and generate labels. An evaluation with machine learning practitioners shows that our mixed human–machine approach successfully supports domain experts in understanding the characteristics of classes within the taxonomy, as well as validating and improving data quality in labelled and unlabelled collections.

Item: ConAn: Measuring and Evaluating User Confidence in Visual Data Analysis Under Uncertainty (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Musleh, M.; Ceneda, D.; Ehlers, H.; Raidou, R. G.
User confidence plays an important role in guided visual data analysis scenarios, especially when uncertainty is involved in the analytical process. However, measuring confidence in practical scenarios remains an open challenge, as previous work relies primarily on self‐reporting methods. In this work, we propose a quantitative approach to measure user confidence (as opposed to trust) in an analytical scenario. We do so by exploiting the respective user interaction provenance graph and examining the impact of guidance using a set of network metrics. We assess the usefulness of our proposed metrics through a user study that correlates results obtained from self‐reported confidence assessments and our metrics, both with and without guidance. The results suggest that our metrics improve the evaluation of user confidence compared to available approaches. In particular, we found a correlation between self‐reported confidence and some of the proposed provenance network metrics. The quantitative results, though, do not show a statistically significant impact of the guidance on user confidence. An additional descriptive analysis suggests that guidance could impact users' confidence and that the qualitative analysis of the provenance network topology can provide a comprehensive view of changes in user confidence. Our results indicate that our proposed metrics and the provenance network graph representation support the evaluation of user confidence and, subsequently, the effective development of guidance in VA.
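For the ConAn entry above, the following is a minimal sketch of what computing network metrics over a user interaction provenance graph can look like. It is not the paper's implementation: the abstract does not name the specific metrics or the graph construction, so the toy interaction log and the metrics below are illustrative assumptions only.

```python
# Minimal sketch (not the ConAn implementation): build an interaction
# provenance graph and compute a few network metrics over it. The interaction
# log, graph construction and choice of metrics are assumptions for
# illustration only.
import networkx as nx

# Hypothetical interaction provenance: (from_state, to_state) per user action.
interactions = [
    ("load_data", "filter"), ("filter", "scatterplot"),
    ("scatterplot", "zoom"), ("zoom", "scatterplot"),
    ("scatterplot", "annotate"), ("annotate", "filter"),
]

G = nx.DiGraph()
G.add_edges_from(interactions)

metrics = {
    "states": G.number_of_nodes(),
    "transitions": G.number_of_edges(),
    "density": nx.density(G),
    "avg_clustering": nx.average_clustering(G.to_undirected()),
    # Revisited states (in-degree > 1) could, for instance, be read as
    # re-examination of earlier analysis steps.
    "revisited_states": sum(1 for _, d in G.in_degree() if d > 1),
}
print(metrics)
```

In the paper such metrics are correlated with self‐reported confidence; here they are only computed and printed.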
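Relating to the BI‐LAVA entry above, the sketch below shows the kind of uncertainty-based active-learning loop such a system builds on. It is not BI‐LAVA itself: the hierarchical taxonomy, image classifiers and visual-analytics front end are out of scope, and the flat classifier, synthetic features and batch size are placeholder assumptions.

```python
# Minimal uncertainty-sampling active-learning loop (generic sketch, not the
# BI-LAVA system). Features, labels, classifier and batch size are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))            # stand-in image feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in modality labels

labeled = list(range(20))                 # small initial labelled set
unlabeled = list(range(20, 500))

for round_ in range(5):
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    # Least-confidence sampling: query the images the model is least sure about.
    proba = clf.predict_proba(X[unlabeled])
    uncertainty = 1.0 - proba.max(axis=1)
    query = [unlabeled[i] for i in np.argsort(-uncertainty)[:10]]
    # In a system like BI-LAVA the model builder would inspect and label these
    # via the front end; here we simply reveal the ground-truth labels.
    labeled += query
    unlabeled = [i for i in unlabeled if i not in query]
    print(f"round {round_}: accuracy={clf.score(X, y):.2f}, labelled={len(labeled)}")
```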
Item: Natural Language Generation for Visualizations: State of the Art, Challenges and Future Directions (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Hoque, E.; Islam, M. Saidul
Natural language and visualization are two complementary modalities of human communication that play a crucial role in conveying information effectively. While visualizations help people discover trends, patterns and anomalies in data, natural language descriptions help explain these insights. Thus, combining text with visualizations is a prevalent technique for effectively delivering the core message of the data. Given the rise of natural language generation (NLG), there is a growing interest in automatically creating natural language descriptions for visualizations, which can be used as chart captions, for answering questions about charts or for telling data‐driven stories. In this survey, we systematically review the state of the art on NLG for visualizations and introduce a taxonomy of the problem. The NLG tasks fall within the domain of natural language interfaces (NLIs) for visualization, an area that has garnered significant attention from both the research community and industry. To narrow down the scope of the survey, we primarily concentrate on research works that focus on text generation for visualizations. To characterize the NLG problem and the design space of proposed solutions, we pose five Wh‐questions: why and how NLG tasks are performed for visualizations, what the task inputs and outputs are, and where and when the generated texts are integrated with visualizations. We categorize the solutions used in the surveyed papers based on these five Wh‐questions. Finally, we discuss the key challenges and potential avenues for future research in this domain.

Item: Detecting, Interpreting and Modifying the Heterogeneous Causal Network in Multi‐Source Event Sequences (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Xu, Shaobin; Sun, Minghui
Uncovering causal relations from event sequences to guide decision‐making has become an essential task across various domains. Unfortunately, this task remains a challenge because real‐world event sequences are usually collected from multiple sources. Most existing works are specifically designed for homogeneous causal analysis between events from a single source, without considering cross‐source causality. In this work, we propose a heterogeneous causal analysis algorithm to detect the heterogeneous causal network between high‐level events in multi‐source event sequences while preserving the causal semantic relationships between diverse data sources. Additionally, the flexibility of our algorithm allows incorporating high‐level event similarity into the learning model and provides a fuzzy modification mechanism. Based on the algorithm, we further propose a visual analytics framework that supports interpreting the causal network at three granularities and offers a multi‐granularity modification mechanism to incorporate user feedback efficiently. We evaluate the accuracy of our algorithm through an experimental study, illustrate the usefulness of our system through a case study, and demonstrate the efficiency of our modification mechanisms through a user study.
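Following the causal-network entry above, here is a generic sketch of pairwise causal screening between event sequences from different sources. It is not the paper's heterogeneous causal analysis algorithm (which the abstract does not specify); it only illustrates the overall shape of the task: score how strongly earlier activity of one event type predicts later activity of another, then keep strong edges as a candidate causal network. The event names, lag-1 correlation score and synthetic data are assumptions.

```python
# Generic sketch of pairwise, lag-based causal screening between event-count
# series (NOT the paper's algorithm). Event names and data are synthetic.
import numpy as np

def lagged_influence(cause: np.ndarray, effect: np.ndarray, lag: int = 1) -> float:
    """Correlation between `cause` at time t and `effect` at time t + lag."""
    if lag >= len(cause):
        return 0.0
    c, e = cause[:-lag], effect[lag:]
    if c.std() == 0 or e.std() == 0:
        return 0.0
    return float(np.corrcoef(c, e)[0, 1])

rng = np.random.default_rng(0)
# Hypothetical per-timestep event counts from two sources (A and B).
series = {
    "A.login": rng.poisson(2, 200).astype(float),
    "B.alert": rng.poisson(1, 200).astype(float),
}
# Toy ground truth: B.error is partly driven by A.login one step earlier.
series["B.error"] = 0.6 * np.roll(series["A.login"], 1) + rng.normal(0, 0.5, 200)

# Score every ordered pair and report the strongest candidate causal edges.
edges = [
    (src, dst, lagged_influence(series[src], series[dst]))
    for src in series for dst in series if src != dst
]
for src, dst, score in sorted(edges, key=lambda e: -abs(e[2]))[:3]:
    print(f"{src} -> {dst}: {score:.2f}")
```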
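For the NLG-for-visualizations survey above, a toy template-based caption generator shows the simplest form the surveyed task can take. It is a generic illustration rather than any surveyed method; the chart specification format, field names and data are made up for the example.

```python
# Toy template-based chart captioning (generic illustration of the NLG task,
# not a surveyed method). The chart spec schema and data are invented.
chart = {
    "type": "bar",
    "x": "country",
    "y": "co2_emissions",
    "data": {"USA": 14.7, "China": 7.6, "India": 1.9, "Germany": 8.1},
}

def caption(spec: dict) -> str:
    data = spec["data"]
    top = max(data, key=data.get)   # category with the highest value
    low = min(data, key=data.get)   # category with the lowest value
    return (
        f"This {spec['type']} chart shows {spec['y']} by {spec['x']}. "
        f"{top} has the highest value ({data[top]}), "
        f"while {low} has the lowest ({data[low]})."
    )

print(caption(chart))
```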