Title: High-resolution 360 Video Foveated Stitching for Real-time VR
Authors: Lee, Wei-Tse; Chen, Hsin-I; Chen, Ming-Shiuan; Shen, I-Chao; Chen, Bing-Yu
Editors: Jernej Barbic, Wen-Chieh Lin, and Olga Sorkine-Hornung
Date: 2017-10-16
Year: 2016
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.13277
URI: https://diglib.eg.org:443/handle/10.1111/cgf13277
Pages: 115-123

Abstract: In virtual reality (VR) applications, the content is usually generated by creating a 360 video panorama of a real-world scene. Although many capture devices are being released, getting high-resolution panoramas and displaying a virtual world in real time remain challenging due to their computationally demanding nature. In this paper, we propose a real-time 360 video foveated stitching framework that renders the entire scene at different levels of detail, aiming to create a high-resolution panoramic video in real time that can be streamed directly to the client. Our foveated stitching algorithm takes videos from multiple cameras as input and, combined with measurements of human visual attention (i.e., the acuity map and the saliency map), greatly reduces the number of pixels to be processed. We further parallelize the algorithm on the GPU to achieve a responsive interface and validate our results via a user study. Our system accelerates graphics computation by a factor of 6 on a Google Cardboard display.

Keywords: I.3.3 [Computer Graphics]: Picture/Image Generation: Line and curve generation
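
The abstract describes combining an acuity map and a saliency map to reduce the number of pixels stitched at full resolution. The sketch below illustrates one plausible way to form such a per-pixel foveation weight and turn it into a level-of-detail mask; the function names, the Gaussian falloff model, and the blending rule are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def foveation_weight(height, width, gaze_xy, saliency, falloff=0.15, saliency_gain=0.5):
    # Hypothetical per-pixel weight: an acuity term that falls off with distance
    # from the gaze point, boosted by a normalized saliency map.
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot((xs - gaze_xy[0]) / width, (ys - gaze_xy[1]) / height)
    acuity = np.exp(-(dist / falloff) ** 2)          # highest at the fovea
    return np.clip(acuity + saliency_gain * saliency, 0.0, 1.0)

def select_lod(weight, num_levels=3):
    # Map the weight to a discrete level of detail: 0 = full resolution.
    return np.clip(((1.0 - weight) * num_levels).astype(int), 0, num_levels - 1)

# Usage: stitch only the finest-level pixels at full resolution.
h, w = 540, 960
saliency = np.random.rand(h, w)                      # stand-in saliency map
weight = foveation_weight(h, w, gaze_xy=(w // 2, h // 2), saliency=saliency)
lod = select_lod(weight)
full_res_mask = (lod == 0)                           # pixels to stitch at full detail
print(f"full-resolution pixels: {full_res_mask.mean():.1%}")
```

In a real pipeline this per-pixel decision would run on the GPU alongside the stitching itself, so coarse-level pixels are never warped or blended at full resolution in the first place.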