Authors: Yu, Xuan; Wang, Rui; Yu, Jingyi
Title: Real-time Depth of Field Rendering via Dynamic Light Field Generation and Filtering
Year: 2010
Pages: 2099-2107
ISSN: 1467-8659
DOI: https://doi.org/10.1111/j.1467-8659.2010.01797.x
Date deposited: 2015-02-23

Abstract: We present a new algorithm for efficient rendering of high-quality depth-of-field (DoF) effects. We start with a single rasterized view (reference view) of the scene and sample the light field by warping the reference view to nearby views. We implement the algorithm using NVIDIA's CUDA to achieve parallel processing, and exploit atomic operations to resolve visibility when multiple pixels warp to the same image location. We then synthesize DoF effects directly from the sampled light field. To reduce aliasing artifacts, we propose an image-space filtering technique that compensates for spatial undersampling using MIP mapping. The main advantages of our algorithm are its simplicity and generality. We demonstrate interactive rendering of DoF effects in several complex scenes. Compared to existing methods, ours does not require ray tracing and hence scales well with scene complexity.
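The visibility step the abstract describes, using atomic operations when multiple warped pixels land on the same target location, can be illustrated with a minimal CUDA sketch. This is not the authors' code: the kernel name, the disparity-style warp, and the focalDepth constant are illustrative assumptions; only the general technique (a 64-bit atomicMin over a packed depth/color key) follows the abstract.

```cuda
// Hypothetical sketch: forward-warp the reference view to one nearby
// light-field view, resolving visibility with a single 64-bit atomicMin.
// Depth occupies the high 32 bits of the key and a packed RGBA color the
// low 32 bits, so the nearest fragment wins any write race.
#include <cstdint>

__global__ void warpReferenceView(const float*    __restrict__ srcDepth,
                                  const uint32_t* __restrict__ srcColor,
                                  unsigned long long* dstBuf,   // pre-filled with ~0ULL
                                  int width, int height,
                                  float2 lensOffset)            // lens-sample offset
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int   idx = y * width + x;
    float d   = srcDepth[idx];
    if (d <= 0.0f) return;                    // skip empty pixels

    // Assumed warp: pixels shift in proportion to their disparity relative
    // to the focal plane, scaled by the lens-sample offset.
    const float focalDepth = 0.5f;            // illustrative constant
    float parallax = 1.0f / d - 1.0f / focalDepth;
    int tx = x + __float2int_rn(lensOffset.x * parallax);
    int ty = y + __float2int_rn(lensOffset.y * parallax);
    if (tx < 0 || tx >= width || ty < 0 || ty >= height) return;

    // For d >= 0, the IEEE-754 bit pattern is monotonic in d, so the raw
    // bits serve as an ordering key: smaller key == closer fragment.
    unsigned long long key =
        (static_cast<unsigned long long>(__float_as_uint(d)) << 32)
        | srcColor[idx];

    // One atomic resolves all collisions at the target pixel: the minimum
    // (nearest) depth/color pair survives.
    atomicMin(&dstBuf[ty * width + tx], key);
}
```

In this sketch dstBuf would be reset to all-ones before each view (e.g. cudaMemset(dstBuf, 0xFF, width * height * 8)), after which the low 32 bits of each entry hold the surviving color. Packing depth into the high bits lets a single atomic both compare depth and store the payload, avoiding a separate z-buffer pass; averaging the warped views over the lens samples then yields the DoF image.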