Hallucinating Stereoscopy from a Single Image

Authors: Zeng, Qiong; Chen, Wenzheng; Wang, Huan; Tu, Changhe; Cohen-Or, Daniel; Lischinski, Dani; Chen, Baoquan
Editors: Olga Sorkine-Hornung and Michael Wimmer
Date issued: 2015-04-16
DOI: https://doi.org/10.1111/cgf.12536
Pages: 001-012

Abstract: We introduce a novel method for enabling stereoscopic viewing of a scene from a single pre-segmented image. Rather than attempting full 3D reconstruction or accurate depth map recovery, we hallucinate a rough approximation of the scene's 3D model using a number of simple depth and occlusion cues and shape priors. We begin by depth-sorting the segments, each of which is assumed to represent a separate object in the scene, resulting in a collection of depth layers. The shapes and textures of the partially occluded segments are then completed using symmetry and convexity priors. Next, each completed segment is converted to a union of generalized cylinders, yielding a rough 3D model for each object. Finally, the object depths are refined using an iterative ground fitting process. The hallucinated 3D model of the scene may then be used to generate a stereoscopic image pair, or to produce images from novel viewpoints within a small neighborhood of the original view. Despite the simplicity of our approach, we show that it compares favorably with state-of-the-art depth ordering methods. A user study was conducted showing that our method produces more convincing stereoscopic images than existing semi-interactive and automatic single image depth recovery methods.

Classification: I.3.7 [Computer Graphics]: Picture/Image Generation - viewing algorithms; I.4.8 [Image Processing and Computer Vision]: Scene Analysis - depth cues