Title: Simulation based Camera Localization under a Variable Lighting Environment
Authors: Mashita, Tomohiro; Plopski, Alexander; Kudo, Akira; Höllerer, Tobias; Kiyokawa, Kiyoshi; Takemura, Haruo
Editors: Dirk Reiners; Daisuke Iwai; Frank Steinicke
Date issued: 2016-12-07
ISBN: 978-3-03868-012-3
ISSN: 1727-530X
DOI: https://doi.org/10.2312/egve.20161437
Handle: https://diglib.eg.org:443/handle/10.2312/egve20161437
Pages: 69-76
CCS: I.4.7 [Image Processing and Computer Vision]: Feature Measurement, Feature representation

Abstract: Localizing the user from a feature database of a scene is a basic and necessary step for the presentation of localized augmented reality (AR) content. Commonly, such a database depicts only a single appearance of the scene, owing to the time and effort required to prepare it. However, the appearance depends on various factors, e.g., the position of the sun and the cloudiness. Observing the scene under lighting conditions that differ from those in the database reduces the success rate and accuracy of the localization. To address this, we propose generating the feature database from the simulated appearance of the scene model under a number of different lighting conditions. We also propose extending the feature descriptors used in the localization with a parametric representation of their changes under varying lighting conditions. We compare our method with a standard representation and matching based on the L2-norm in simulation and in real-world experiments. Our results show that our simulated environment is a satisfactory representation of the scene's appearance and improves feature matching over a single database. The proposed feature descriptor achieves a higher localization ratio with fewer feature points and a lower processing cost.
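
For illustration, here is a minimal sketch (not from the paper) of the L2-norm descriptor-matching baseline that the abstract compares against, applied to a database that pools descriptors rendered under several simulated lighting conditions. The function and variable names (`match_features`, `query_desc`, `db_desc`) and the ratio-test threshold are hypothetical, and the descriptors are random stand-ins.

```python
import numpy as np

def match_features(query_desc, db_desc, ratio=0.8):
    """Nearest-neighbour matching with the L2-norm and a ratio test.

    query_desc: (Q, D) array of descriptors from the query image.
    db_desc:    (N, D) array of database descriptors; the database may pool
                descriptors from renderings under several lighting conditions.
    Returns a list of (query_index, db_index) match pairs.
    """
    matches = []
    for qi, q in enumerate(query_desc):
        # L2 distance from this query descriptor to every database descriptor.
        d = np.linalg.norm(db_desc - q, axis=1)
        best, second = np.argsort(d)[:2]
        # Accept only matches clearly better than the runner-up (ratio test).
        if d[best] < ratio * d[second]:
            matches.append((qi, int(best)))
    return matches

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 500 synthetic 128-D descriptors (SIFT-like dimensionality), plus
    # 10 noisy copies standing in for the same features seen in a query image.
    db = rng.normal(size=(500, 128)).astype(np.float32)
    query = db[:10] + 0.05 * rng.normal(size=(10, 128)).astype(np.float32)
    print(match_features(query, db))
```

Pooling descriptors from multiple simulated lighting conditions enlarges N but leaves the matching rule unchanged; the paper's proposed alternative instead replaces the per-condition descriptors with a parametric model of how each descriptor varies with lighting, which is what yields the reported reduction in feature points and processing cost.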