Title: Temporal Coherence Predictor for Time Varying Volume Data Based on Perceptual Functions
Authors: Noonan, Tom; Campoalegre, Lazaro; Dingliana, John
Editors: David Bommes, Tobias Ritschel, Thomas Schultz
Date: 2015-10-07
ISBN: 978-3-905674-95-8
DOI: https://doi.org/10.2312/vmv.20151255
Pages: 33-40
Keywords: I.3.3 [Computer Graphics]; Time-varying data; Parallel Processing; Volume Rendering

Abstract: This paper introduces an empirical, perceptually-based method which exploits the temporal coherence between consecutive frames to reduce the CPU-GPU traffic during real-time visualization of time-varying volume data. In this scheme, a multi-threaded CPU mechanism simulates GPU pre-rendering functions to characterize the local behaviour of the volume. These functions exploit the temporal coherence in the data to avoid sending complete per-frame datasets to the GPU. The predictive computations are designed to be simple enough to run in parallel on the CPU while improving the overall performance of GPU rendering. Tests provide evidence that we can considerably reduce the texture size transferred at each frame without losing visual quality, while maintaining performance compared to sending entire frames to the GPU. The proposed framework is designed to scale to client/server network-based implementations for multi-user systems.
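
The core idea in the abstract, exploiting temporal coherence so that only the parts of the volume that changed are re-uploaded to the GPU, can be illustrated with a minimal sketch. This is not the authors' perceptual predictor; it is a simplified brick-level change detector (brick size and tolerance are assumptions chosen for illustration) that decides which sub-volumes of a new frame need to be transferred:

```python
import numpy as np

def changed_bricks(prev, curr, brick=8, tol=0.01):
    """Return the origin indices of bricks whose voxels changed
    beyond tol between two consecutive frames.  Only these bricks
    would be re-uploaded to the GPU; unchanged bricks are reused
    from the texture already resident there."""
    changed = []
    nz, ny, nx = prev.shape
    for z in range(0, nz, brick):
        for y in range(0, ny, brick):
            for x in range(0, nx, brick):
                a = prev[z:z + brick, y:y + brick, x:x + brick]
                b = curr[z:z + brick, y:y + brick, x:x + brick]
                # Max absolute voxel difference as a crude (non-perceptual)
                # stand-in for the paper's perceptual functions.
                if np.max(np.abs(a - b)) > tol:
                    changed.append((z, y, x))
    return changed

# Two identical 16^3 frames: nothing to transfer.
prev = np.zeros((16, 16, 16), dtype=np.float32)
curr = prev.copy()
print(changed_bricks(prev, curr))   # → []

# Perturb one voxel: only its enclosing brick is transferred.
curr[0, 0, 0] = 1.0
print(changed_bricks(prev, curr))   # → [(0, 0, 0)]
```

In the paper's setting, such per-brick decisions run multi-threaded on the CPU ahead of rendering, so the GPU upload each frame shrinks from the full volume to the changed bricks only.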