Title: Consistent Filtering of Videos and Dense Light-Fields Without Optic-Flow
Authors: Shekhar, Sumit; Semmo, Amir; Trapp, Matthias; Tursun, Okan; Pasewaldt, Sebastian; Myszkowski, Karol; Döllner, Jürgen
Editors: Schulz, Hans-Jörg; Teschner, Matthias; Wimmer, Michael
Date issued: 2019-09-29
Year: 2019
ISBN: 978-3-03868-098-7
DOI: https://doi.org/10.2312/vmv.20191326
Handle: https://diglib.eg.org:443/handle/10.2312/vmv20191326
Pages: 125-134
CCS Concepts: Computing methodologies → Image processing; Computing methodologies → Computational photography

Abstract: A convenient post-production approach to video processing is to apply image filters on a per-frame basis. This allows image filters originally designed for still images to be flexibly extended to videos. However, per-image filtering may lead to temporal inconsistencies perceived as unpleasant flickering artifacts; the same holds for dense light-fields, where the inconsistencies are angular. In this work, we present a method for the consistent filtering of videos and dense light-fields that addresses these problems. Our assumption is that inconsistencies caused by per-image filtering manifest as noise across the image sequence. We therefore denoise the filtered image sequence and combine the per-image filtered results with their denoised versions. To this end, we use saliency-based optimization weights to produce a consistent output while simultaneously preserving details. To control the degree of consistency in the final output, we implemented our approach in an interactive real-time processing framework. Unlike state-of-the-art inconsistency-removal techniques, our approach does not rely on optic flow to enforce coherence. Comparisons and a qualitative evaluation indicate that our method provides better results than state-of-the-art approaches for certain types of filters and applications.
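
The core idea described in the abstract — treating per-frame filtering inconsistencies as noise, denoising across the filtered sequence, and blending the per-frame results with their denoised versions under a degree-of-consistency control — can be sketched as follows. This is a simplified illustration, not the authors' implementation: a temporal Gaussian smoother stands in for their denoising step, and a uniform `consistency` weight stands in for their saliency-based optimization weights (both are assumptions here).

```python
import numpy as np

def temporal_denoise(frames, sigma=1.5, radius=3):
    """Smooth each pixel along the time axis with a normalized Gaussian kernel.

    frames: array of shape (T, H, W) or (T, H, W, C).
    A stand-in for the paper's denoising step (simplifying assumption).
    """
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (t / sigma) ** 2)
    kernel /= kernel.sum()
    # Pad in time with edge frames, then convolve along axis 0.
    pad = [(radius, radius)] + [(0, 0)] * (frames.ndim - 1)
    padded = np.pad(frames, pad, mode="edge")
    out = np.zeros(frames.shape, dtype=np.float64)
    for i, w in enumerate(kernel):
        out += w * padded[i : i + frames.shape[0]]
    return out

def consistent_filter(frames, image_filter, consistency=0.7):
    """Apply `image_filter` per frame, then blend the filtered sequence with
    its temporally denoised version to suppress flicker.

    `consistency` in [0, 1] plays the role of the paper's degree-of-consistency
    control; note that no optic flow is computed anywhere in this pipeline.
    """
    filtered = np.stack([image_filter(f) for f in frames]).astype(np.float64)
    denoised = temporal_denoise(filtered)
    return (1.0 - consistency) * filtered + consistency * denoised
```

A constant input sequence passes through unchanged (the kernel is normalized), while frame-to-frame noise introduced by the per-image filter is attenuated in proportion to the `consistency` weight.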