Saliency-driven Depth Compression for 3D Image Warping

Current methods compress depth images using only 2D features, which loses detail of the original 3D object in the recovered depth image. The main idea of this paper is to augment 2D features with 3D geometric information to preserve the important regions of the depth image. Mesh saliency represents the important regions of the 3D objects, and discontinuity edges indicate the important regions of the depth image. We use mesh saliency to guide adaptive random sampling, generating a random pixel sample of the depth image, and then combine this sample with the depth-discontinuity edges to build a sparse depth representation. During reconstruction, the depth image is recovered using an up- and down-sampling scheme with Gaussian bilateral filtering. The effectiveness of the proposed method is validated through 3D image warping applications; visual and quantitative results show a significant improvement in synthesized image quality over state-of-the-art depth compression methods.
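The sparse-representation step described above (saliency-guided random sampling combined with depth-discontinuity edges) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name, parameter values (`n_samples`, `edge_thresh`), and the gradient-threshold edge detector are all assumptions; the paper's own saliency computation and edge extraction may differ.

```python
import numpy as np

def sparse_depth_representation(depth, saliency, n_samples=2000,
                                edge_thresh=0.1, seed=0):
    """Illustrative sketch: keep saliency-weighted random pixel samples
    plus depth-discontinuity pixels; mark all other pixels as missing.
    Names and thresholds are hypothetical, not from the paper."""
    rng = np.random.default_rng(seed)
    h, w = depth.shape

    # Adaptive random sampling: draw pixels with probability
    # proportional to the (normalized) saliency map.
    p = saliency.ravel().astype(float)
    p = p / p.sum()
    idx = rng.choice(h * w, size=n_samples, replace=False, p=p)

    # Depth-discontinuity edges: pixels with a large depth gradient.
    gy, gx = np.gradient(depth.astype(float))
    edge_idx = np.flatnonzero(np.hypot(gx, gy) > edge_thresh)

    # Sparse representation: sampled + edge pixels; NaN elsewhere.
    keep = np.union1d(idx, edge_idx)
    sparse = np.full(h * w, np.nan)
    sparse[keep] = depth.ravel()[keep]
    return sparse.reshape(h, w)

# Toy example: a depth step edge under a uniform saliency map.
depth = np.zeros((32, 32))
depth[:, 16:] = 1.0
saliency = np.ones_like(depth)
sparse = sparse_depth_representation(depth, saliency, n_samples=100)
```

The retained pixels (random samples plus the step-edge columns) would then feed the reconstruction stage, where the paper fills the missing values with an up- and down-sampling scheme using Gaussian bilateral filtering.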

, booktitle = {Pacific Graphics Short Papers},
  editor = {John Keyser and Young J. Kim and Peter Wonka},
  title = {{Saliency-driven Depth Compression for 3D Image Warping}},
  author = {Gu, Minjie and Hu, Shanfeng and Wang, Xiaochuan and Liang, Xiaohui and Shen, Xukun and Qin, Aihong},
  year = {},
  publisher = {The Eurographics Association},
  ISBN = {},
  DOI = {}
}