dc.contributor.author             Gao, Jinzhu                                       en_US
dc.contributor.author             Liu, Huadong                                      en_US
dc.contributor.author             Huang, Jian                                       en_US
dc.contributor.author             Beck, Micah                                       en_US
dc.contributor.author             Wu, Qishi                                         en_US
dc.contributor.author             Moore, Terry                                      en_US
dc.contributor.author             Kohl, James                                       en_US
dc.contributor.editor             Jean M. Favre and Kwan-Liu Ma                     en_US
dc.date.accessioned               2014-01-26T16:43:25Z
dc.date.available                 2014-01-26T16:43:25Z
dc.date.issued                    2008                                              en_US
dc.identifier.isbn                978-3-905674-04-0                                 en_US
dc.identifier.issn                1727-348X                                         en_US
dc.identifier.uri                 http://dx.doi.org/10.2312/EGPGV/EGPGV08/065-072   en_US
dc.description.abstract           It is often desirable or necessary to perform scientific visualization in geographically remote locations, away from the centralized data storage systems that hold massive amounts of scientific results. The larger such datasets grow, the less practical it becomes to move them to collaborators at remote locations. In such scenarios, efficient remote visualization solutions can be crucial. Yet the use of distributed or heterogeneous computing resources raises several challenges for large-scale data visualization: algorithms must be robust and must incorporate advanced load balancing and scheduling techniques. In this paper, we propose a time-critical remote visualization system that can be deployed over distributed and heterogeneous computing resources. We introduce an "importance" metric that measures the need to process each data partition based on its degree of contribution to the final image. Factors contributing to this metric include application-specific requirements, the value distribution inside the partition, and the viewing parameters. We also incorporate visibility into the measurement, so that empty or invisible blocks are never processed. Guided by the blocks' importance values, our dynamic scheduling scheme determines the rendering priority of each visible block: more important blocks are rendered first. In time-critical scenarios, the scheduling algorithm also dynamically reduces the level of detail for less important regions, so that visualization finishes within a user-specified time limit at the highest possible image quality. The system enables interactive sharing of visualization results. To evaluate its performance, we present a case study using a 250 GB dataset on 170 distributed processors.   en_US
dc.publisher                      The Eurographics Association                      en_US
dc.subject                        Categories and Subject Descriptors (according to ACM CCS): I.3.2 [Graphics Systems]: Distributed/network graphics; I.3.6 [Methodology and Techniques]: Graphics data structures and data types   en_US
dc.title                          Time-Critical Distributed Visualization with Fault Tolerance   en_US
dc.description.seriesinformation  Eurographics Symposium on Parallel Graphics and Visualization   en_US
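
The abstract above describes an importance-guided, time-critical scheduler: invisible or empty blocks are culled, the remaining blocks are rendered in order of decreasing importance, and less important blocks fall back to coarser levels of detail when the time budget runs short. The following Python sketch only illustrates that general idea; the Block fields, the importance() combination, the estimate_render_time() cost model, and the render() stub are assumptions of this sketch, not the paper's actual metric, cost model, or API.

```python
# Hypothetical sketch of importance-guided, time-critical block scheduling,
# loosely following the behavior described in the abstract. All names and
# formulas here are illustrative assumptions, not the authors' system.
import time
from dataclasses import dataclass

@dataclass
class Block:
    block_id: int
    visible: bool        # result of a view-frustum / occlusion test (assumed)
    empty: bool          # no values of interest inside the block (assumed)
    value_weight: float  # importance of the value distribution in the block
    view_weight: float   # contribution under the current viewing parameters
    app_weight: float    # application-specific requirement weight

def importance(b: Block) -> float:
    # Combine the factors named in the abstract into one score.
    # This weighted product is an assumption; the paper defines its own metric.
    return b.value_weight * b.view_weight * b.app_weight

def estimate_render_time(b: Block, lod: int) -> float:
    # Placeholder cost model: a coarser level of detail is assumed cheaper.
    return 0.05 / (2 ** lod)

def render(b: Block, lod: int) -> None:
    # Placeholder for the actual (possibly remote) rendering of one block.
    pass

def schedule(blocks: list[Block], time_limit: float, max_lod: int = 3) -> None:
    # Visibility culling: empty or invisible blocks are never processed.
    visible = [b for b in blocks if b.visible and not b.empty]
    # Render more important blocks first.
    visible.sort(key=importance, reverse=True)

    start = time.monotonic()
    for i, b in enumerate(visible):
        remaining = time_limit - (time.monotonic() - start)
        if remaining <= 0:
            break  # budget exhausted; lower-importance blocks are skipped
        # Reserve enough time to render the remaining, less important blocks
        # at the coarsest level, then pick the finest LOD that still fits.
        reserve = estimate_render_time(b, max_lod) * (len(visible) - i - 1)
        lod = 0
        while lod < max_lod and estimate_render_time(b, lod) + reserve > remaining:
            lod += 1
        render(b, lod)
```

Sorting by importance and degrading the level of detail when the remaining budget runs short mirrors the behavior the abstract describes, but the actual system runs over distributed, heterogeneous processors and provides fault tolerance that this single-process sketch does not model.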

