Title: Re-Compositable Panoramic Selfie with Robust Multi-Frame Segmentation and Stitching
Authors: Li, Kai; Wang, Jue; Liu, Yebin; Xu, Li; Dai, Qionghai
Editors: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
Date: 2016-10-11
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.13020
Handle: https://diglib.eg.org:443/handle/10.1111/cgf13020
Pages: 227-236
Keywords: I.3.3 [Computer Graphics]: Picture/Image Generation; Viewing Algorithms

Abstract: It is a challenging task for ordinary users to capture selfies with a good scene composition, given the limited freedom to position the camera. Creative hardware (e.g., selfie sticks) and software (e.g., panoramic selfie apps) solutions have been proposed to extend the background coverage of a selfie, but achieving a perfect composition on the spot, when the selfie is captured, remains difficult. In this paper, we propose a system that allows the user to first shoot a selfie video by rotating the body, and then produce a final panoramic selfie image with user-guided scene composition as a post-process. Our key technical contribution is a fully automatic, robust multi-frame segmentation and stitching framework that is tailored to the special characteristics of selfie images. We analyze the sparse feature points and employ a spatial-temporal optimization for bilayer feature segmentation, which leads to more reliable background alignment than previous image stitching techniques. The sparse classification is then propagated to all pixels to create dense foreground masks for person-background composition. Finally, based on a user-selected foreground position, our system uses content-preserving warping to produce a panoramic selfie with minimal distortion to the face region. Experimental results show that our approach can reliably generate high-quality panoramic selfies, while a simple combination of previous image stitching and segmentation approaches often fails.
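The paper's bilayer feature segmentation is a spatial-temporal optimization over the whole video; as a minimal single-frame-pair illustration of the underlying intuition only (background features follow one dominant global motion while foreground/person features do not), the following OpenCV sketch separates matched features with a RANSAC homography fit. This is not the authors' algorithm: the function name, parameters, and the RANSAC-based simplification are assumptions for illustration.

```python
# Hypothetical sketch: split sparse feature matches between two selfie
# frames into background (homography inliers) and foreground (outliers).
# A RANSAC homography fit is a simplification; the paper instead uses a
# spatial-temporal optimization over many frames.
import cv2
import numpy as np

def bilayer_feature_split(frame_a, frame_b, ransac_thresh=3.0):
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Detect and match sparse ORB features between the two frames.
    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

    # Fit a homography to the dominant (background) motion; RANSAC
    # inliers are treated as background features for alignment, and
    # outliers as foreground (person) features.
    H, inlier_mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC,
                                        ransac_thresh)
    inlier_mask = inlier_mask.ravel().astype(bool)
    return H, pts_a[inlier_mask], pts_a[~inlier_mask]
```

In the full system, this sparse background/foreground labeling would then be propagated to all pixels to form the dense foreground masks used for person-background composition.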