Browsing by Author "Lee, Bongshin"
Now showing 1 - 3 of 3
Item: Canis: A High-Level Language for Data-Driven Chart Animations (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Ge, Tong; Zhao, Yue; Lee, Bongshin; Ren, Donghao; Chen, Baoquan; Wang, Yunhai
Editors: Viola, Ivan; Gleicher, Michael; Landesberger von Antburg, Tatiana
In this paper, we introduce Canis, a high-level domain-specific language that enables declarative specifications of data-driven chart animations. By leveraging data-enriched SVG charts, its grammar of animations can be applied to charts created by existing chart construction tools. With Canis, designers can select marks from the charts, partition the selected marks into mark units based on data attributes, and apply animation effects to the mark units, with control over when the effects start. The Canis compiler automatically synthesizes Lottie animation JSON files [Aira], which can be rendered natively across multiple platforms. To demonstrate Canis' expressiveness, we present a wide range of chart animations. We also evaluate its scalability by showing the effectiveness of our compiler in reducing the output specification size, and we compare its performance on different platforms against D3.

Item: Investigating the Role and Interplay of Narrations and Animations in Data Videos (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Cheng, Hao; Wang, Junhong; Wang, Yun; Lee, Bongshin; Zhang, Haidong; Zhang, Dongmei
Editors: Borgo, Rita; Marai, G. Elisabeta; Schreck, Tobias
Combining data visualizations, animations, and audio narrations, data videos can increase viewer engagement and effectively communicate data stories. Due to their increasing popularity, data videos have gained growing attention from the visualization research community. However, recent research on data videos has focused on animations, leaving narrations poorly understood. In this work, we study how data videos use narrations and animations to convey information effectively. We conduct a qualitative analysis of 426 clips with visualizations, extracted from 60 data videos collected from a variety of media outlets and covering a diverse array of topics. We manually label 816 sentences with 1226 semantic labels and record the composition of 2553 animations through an open coding process. We also analyze how narrations and animations coordinate with each other by assigning links between semantic labels and animations. With 937 (76.4%) semantic labels and 2503 (98.0%) animations linked, we identify four types of narration-animation relationships in the collected clips. Drawing on these findings, we discuss implications of our study and future research opportunities for data videos.

Item: Orchard: Exploring Multivariate Heterogeneous Networks on Mobile Phones (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Eichmann, Philipp; Edge, Darren; Evans, Nathan; Lee, Bongshin; Brehmer, Matthew; White, Christopher
Editors: Viola, Ivan; Gleicher, Michael; Landesberger von Antburg, Tatiana
People are becoming increasingly sophisticated in their ability to navigate information spaces using search, hyperlinks, and visualization. But mobile phones preclude the use of the multiple coordinated views that have proven effective in the desktop environment (e.g., for business intelligence or visual analytics).
In this work, we propose to model information as multivariate heterogeneous networks to enable greater analytic expression for a range of sensemaking tasks, and we suggest a new, list-based paradigm with gestural navigation of structured information spaces on mobile phones. We also present a mobile application, called Orchard, which combines ideas from faceted search and interactive network exploration in a visual query language, allowing users to collect facets of interest during exploratory navigation. Our study showed that users could collect and combine these facets with Orchard, specifying network queries and projections that would previously have been possible only with complex data tools or custom data science.
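To make the kind of structure described in the Orchard abstract concrete, the sketch below models a small multivariate heterogeneous network and a facet-style selection in TypeScript. This is a minimal illustration under assumed names: the GraphNode, GraphEdge, and Facet types and the selectFacet helper are hypothetical and do not reflect Orchard's actual data model or visual query language.

```typescript
// Hypothetical sketch of a multivariate heterogeneous network:
// nodes of several types, each carrying its own attributes,
// connected by typed edges. Not Orchard's actual data model.

interface GraphNode {
  id: string;
  type: "person" | "paper" | "venue";          // heterogeneous node types
  attributes: Record<string, string | number>; // multivariate attributes
}

interface GraphEdge {
  source: string;                              // id of the source node
  target: string;                              // id of the target node
  type: "authored" | "publishedIn" | "cites";  // heterogeneous edge types
}

interface Network {
  nodes: GraphNode[];
  edges: GraphEdge[];
}

// A facet in the spirit of faceted search: a node type plus a predicate
// over that type's attributes. Collecting facets narrows the network.
interface Facet {
  nodeType: GraphNode["type"];
  predicate: (attrs: GraphNode["attributes"]) => boolean;
}

// Apply one facet: keep the matching nodes and the edges among them.
function selectFacet(net: Network, facet: Facet): Network {
  const nodes = net.nodes.filter(
    (n) => n.type === facet.nodeType && facet.predicate(n.attributes)
  );
  const kept = new Set(nodes.map((n) => n.id));
  const edges = net.edges.filter(
    (e) => kept.has(e.source) && kept.has(e.target)
  );
  return { nodes, edges };
}

// Example facet: papers published after 2018.
const recentPapers: Facet = {
  nodeType: "paper",
  predicate: (attrs) => Number(attrs.year) > 2018,
};
```

Chaining such selections during navigation corresponds roughly to the exploratory collection and combination of facets that the abstract describes.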