Hybrid Forward Resampling and Volume Rendering

dc.contributor.author: Yuan, Xiaoru (en_US)
dc.contributor.author: Nguyen, Minh X. (en_US)
dc.contributor.author: Xu, Hui (en_US)
dc.contributor.author: Chen, Baoquan (en_US)
dc.contributor.editor: I. Fujishiro and K. Mueller and A. Kaufman (en_US)
dc.date.accessioned: 2014-01-29T17:38:36Z
dc.date.available: 2014-01-29T17:38:36Z
dc.date.issued: 2003 (en_US)
dc.description.abstract: The transforming and rendering of discrete objects, such as traditional images (with or without depth) and volumes, can be considered a resampling problem: objects are reconstructed, transformed, filtered, and finally sampled on the screen grid. In resampling practice, discrete samples (pixels, voxels) can be considered either as infinitesimal sample points (simply called points) or as samples of a certain size (splats). Resampling can also be done either forward or backward, in either the source domain or the target domain. In this paper, we present a framework that features hybrid forward resampling for discrete rendering. Specifically, we apply this framework to enhance volumetric splatting. In this approach, minified voxels are treated simply as points filtered in screen space, while magnified voxels are treated as spherical splats. In addition, we develop two techniques for performing accurate and efficient perspective splatting: the first efficiently computes the 2D elliptical geometry of perspectively projected splats; the second achieves an accurate perspective reconstruction filter. The results of our experiments demonstrate both the antialiasing effectiveness and the rendering efficiency of this approach. (en_US)
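The hybrid rule described in the abstract — treat a minified voxel as a screen-space-filtered point and a magnified voxel as a splat — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pinhole-projection footprint estimate, the `pixel_radius` threshold, and all function names are assumptions introduced here.

```python
import math

def projected_footprint(voxel_radius, depth, focal_length):
    """Approximate screen-space radius (in pixels) of a voxel's
    perspective projection, assuming a simple pinhole camera model
    (footprint shrinks linearly with depth)."""
    return focal_length * voxel_radius / depth

def classify_voxel(voxel_radius, depth, focal_length, pixel_radius=0.5):
    """Hybrid forward-resampling rule (illustrative): a voxel whose
    projected footprint is smaller than a pixel (minification) is
    treated as a point and filtered in screen space; a voxel whose
    footprint exceeds a pixel (magnification) is rendered as a splat."""
    r = projected_footprint(voxel_radius, depth, focal_length)
    return "point" if r < pixel_radius else "splat"

# A distant voxel minifies to sub-pixel size and becomes a point;
# the same voxel close to the camera magnifies and becomes a splat.
print(classify_voxel(0.5, 1000.0, 500.0))  # distant -> "point"
print(classify_voxel(0.5, 100.0, 500.0))   # nearby  -> "splat"
```

In a full renderer, the "splat" branch would further compute the elliptical screen-space geometry of the perspectively projected kernel, which is the subject of the paper's first perspective-splatting technique.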
dc.description.seriesinformation: Volume Graphics (en_US)
dc.identifier.isbn: 1-58113-745-1 (en_US)
dc.identifier.issn: 1727-8376 (en_US)
dc.identifier.uri: https://doi.org/10.2312/VG/VG03/119-128 (en_US)
dc.publisher: The Eurographics Association (en_US)
dc.title: Hybrid Forward Resampling and Volume Rendering (en_US)