Title: GPU-assisted Multi-field Video Volume Visualization

Authors: Botchen, Ralf P.; Chen, Min; Weiskopf, Daniel; Ertl, Thomas

Editors: Raghu Machiraju and Torsten Moeller

Year: 2006

ISBN: 3-905673-41-X

ISSN: 1727-8376

DOI: https://doi.org/10.2312/VG/VG06/047-054

Abstract: GPU-assisted multi-field rendering provides a means of generating effective video volume visualizations that convey both the objects in a spatiotemporal domain and the motion status of those objects. In this paper, we present a technical framework that enables combined volume and flow visualization of a video to be synthesized using GPU-based techniques. A bricking-based volume rendering method is deployed for handling large video datasets in a scalable manner, which is particularly useful for synthesizing a dynamic visualization of a video stream. We have implemented a number of image processing filters; in particular, we employ an optical flow filter for estimating motion flows in a video. We have devised mechanisms for combining volume objects in a scalar field with glyph and streamline geometry derived from an optical flow. We demonstrate the effectiveness of our approach with example visualizations constructed from two benchmark problems in computer vision.

Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.6 [Computer Graphics]: Methodology and Techniques; I.3.m [Computer Graphics]: Video Visualization
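For readers unfamiliar with the optical flow estimation step mentioned in the abstract, the following is a minimal CPU-side sketch of dense optical flow between consecutive video frames, using OpenCV's Farneback method as a stand-in for the GPU filter described in the paper; the video file name and parameter values are illustrative assumptions, not taken from the paper.

    # Illustrative sketch (not the paper's GPU implementation): estimate a
    # dense optical flow field between consecutive frames of a video. The
    # resulting per-pixel motion vectors are the kind of flow data that
    # glyphs and streamlines could be seeded from.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("input_video.avi")  # hypothetical input file
    ok, prev_frame = cap.read()
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense flow: flow[y, x] = (dx, dy), the motion of pixel (x, y)
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        magnitude, angle = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        print("mean flow magnitude:", float(np.mean(magnitude)))
        prev_gray = gray

    cap.release()

In the paper's framework this computation runs as a GPU filter and its output is combined with the scalar video volume for rendering; the sketch above only illustrates the underlying flow estimation on the CPU.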