Title: 3D Object Tracking for Rough Models
Authors: Song, Xiuqiang; Xie, Weijian; Li, Jiachen; Wang, Nan; Zhong, Fan; Zhang, Guofeng; Qin, Xueying
Editors: Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Date issued: 2023-10-09
Year: 2023
ISSN: 1467-8659
DOI: https://doi.org/10.1111/cgf.14976
Handle: https://diglib.eg.org:443/handle/10.1111/cgf14976
Pages: 11

Abstract: Visual monocular 6D pose tracking methods for textureless or weakly-textured objects heavily rely on contour constraints established by a precise 3D model. However, precise models are not always available in practice, and rough models can degrade tracking performance and impede the widespread use of 3D object tracking. To address this new problem, we propose a novel tracking method that handles rough models. We reshape the rough contour through a probability map, which avoids explicitly processing the rough 3D model itself. We further exploit the inner-region information of the object, where points are sampled to provide color constraints. To better satisfy the assumption of small displacement between frames, the 2D translation of the object is pre-searched for a better initial pose. Finally, we combine constraints from both the contour and the inner region to optimize the object pose. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on both roughly and precisely modeled objects. Particularly for highly rough models, the accuracy is significantly improved (40.4% vs. 16.9%).

CCS Concepts: Computing methodologies -> Augmented reality; Object tracking