Robust pose computation from free-form curves

We present some results of our model registration system, which is capable of tracking an object whose model is known through an image sequence. The system integrates tracking, pose determination and updating of the visible features (see figure 1). The heart of the system is the pose computation method, which handles various features (points, lines and free-form curves) in a robust way and gives a correct estimate of the pose even when tracking errors occur.
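The robust flavor of such a pose computation can be illustrated with a minimal sketch. This is not the actual two-stage method of [1]: it is a generic M-estimator (iteratively reweighted least squares with a Tukey biweight), and for simplicity the "pose" is reduced to a 2D image translation. All names and numeric constants below are illustrative.

```python
import math

def tukey_weight(u, c=4.685):
    """Tukey biweight: full influence near zero, none beyond the cutoff c."""
    if abs(u) >= c:
        return 0.0
    t = 1.0 - (u / c) ** 2
    return t * t

def robust_translation(model_pts, image_pts, iters=10):
    """Iteratively reweighted least squares for a 2D image translation.
    Correspondences with large residuals get vanishing weights, so a
    badly tracked feature cannot corrupt the estimate."""
    tx = ty = 0.0
    w = [1.0] * len(model_pts)
    for _ in range(iters):
        # residual magnitude of each correspondence under the current estimate
        res = [math.hypot(ix - mx - tx, iy - my - ty)
               for (mx, my), (ix, iy) in zip(model_pts, image_pts)]
        med = sorted(res)[len(res) // 2]
        scale = 1.4826 * med if med > 0 else 1.0  # crude robust scale estimate
        w = [tukey_weight(r / scale) for r in res]
        sw = sum(w) or 1.0  # degenerate case: everything rejected
        # weighted least-squares update of the translation
        tx = sum(wi * (ix - mx) for wi, ((mx, _), (ix, _))
                 in zip(w, zip(model_pts, image_pts))) / sw
        ty = sum(wi * (iy - my) for wi, ((_, my), (_, iy))
                 in zip(w, zip(model_pts, image_pts))) / sw
    return (tx, ty), w
```

With three correspondences displaced by (5, 3) and one gross outlier, the estimate converges to the inlier translation and the outlier's weight drops to zero, which is the behavior the system relies on when a snake converges towards an erroneous contour.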

For more information on this work, the interested reader should refer to [1].

 

Figure 1: Overview of the system.

The bridges of Paris

The reliability of the system is demonstrated on an augmented reality project: the illumination of the bridges of Paris. The aim was to test several candidate illumination designs for a number of bridges over the Seine and to choose, from computer simulations alone, which design was best. Most importantly, we wanted to evaluate the influence of the illumination on the surrounding elements.

Figure 2 shows the results of tracking, pose computation and updating for the 12th image of a 300-image panoramic sequence of the Pont Neuf (2D features are drawn in yellow, 3D features in green, or blue for those not yet taken into account). Figure 2.a presents the result of tracking (the pose used for initialization is the one obtained in the previous image): except for the middle arch, the primitives are well tracked. The error on the middle arch is caused by image noise, which makes the prediction step of the tracking process fail; the snake therefore converges towards an erroneous contour. Figure 2.b shows the result of the robust pose computation (sections of features whose influence is reduced, as well as feature outliers, are drawn in red). The result is visually convincing, and the middle arch is discarded. Finally, the discarded feature is updated in figure 2.c, and the pose is re-estimated (figure 2.d shows the reprojection of the wireframe model after this step). An example of the final composition is presented in figure 3.
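The updating and re-projection steps rely on projecting the 3D model with the estimated pose to predict where each feature should appear. Assuming a simple pinhole camera (the intrinsics `f`, `cx`, `cy` below are illustrative placeholders, not the calibration used in this project), this can be sketched as:

```python
def project(p, R, t, f=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3D model point p for pose (R, t)."""
    # camera-frame coordinates: Xc = R p + t
    xc = R[0][0] * p[0] + R[0][1] * p[1] + R[0][2] * p[2] + t[0]
    yc = R[1][0] * p[0] + R[1][1] * p[1] + R[1][2] * p[2] + t[1]
    zc = R[2][0] * p[0] + R[2][1] * p[1] + R[2][2] * p[2] + t[2]
    # perspective division, then shift to the principal point
    return (f * xc / zc + cx, f * yc / zc + cy)

def reproject_edges(edges, R, t):
    """Predicted image segments of a wireframe model: one (2D, 2D) pair
    per 3D edge, usable to re-initialise a feature whose tracking failed."""
    return [(project(a, R, t), project(b, R, t)) for a, b in edges]
```

A discarded feature such as the middle arch can then be re-initialised from its predicted image position rather than from the erroneous tracked contour.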

Three MPEG films are also available: sequence 1 shows the pose computed at each iteration (after tracking and after updating), with the color conventions of figure 2; sequence 2 shows the reprojection of the model after the last step of each iteration; and sequence 3 shows an example of the final composition.

 

Figure 2: Temporal registration in image 12. (a) Tracking of 2D features. (b) Robust pose computation. (c) Updating. (d) Reprojection of the wireframe model.


Figure 3: Result of the final composition in image 60.

Another example

Two other MPEG films (sequence 1 and sequence 2) present the results obtained on a toy fortified castle. Here the camera motion is more abrupt, and the displacement between two consecutive images can be large.



Tracking of 2D features.

Pose computation.

Updating of features.

Projection of the wireframe model.

Reference

[1] G. Simon and M.-O. Berger. A two-stage robust statistical method for temporal registration from features of various type. In Proceedings of the 6th International Conference on Computer Vision, Bombay, India, January 1998.