
Bundle Adjustment Constrained Smoothing for Multi-View Point Cloud Data

Kun Liu, Rhaleb Zayer

INRIA, France

8th International Symposium on Visual Computing (ISVC), 2012


Abstract:

Direct use of denoising and mesh reconstruction algorithms on point clouds originating from multi-view images is often oblivious to the reprojection error. This can be a severe limitation in applications which require accurate point tracking, e.g., metrology. In this paper, we propose a method for improving the quality of such data without forfeiting the original matches. We formulate the problem as a robust smoothness cost function constrained by a bounded reprojection error. The arising optimization problem is addressed as a sequence of unconstrained optimization problems by virtue of the barrier method. Substantiated experiments on synthetic and acquired data compare our approach to alternative techniques.
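For concreteness, the kind of formulation described above can be sketched as follows; the notation (3D points X with entries x_j, smoothness energy E_s, camera projections pi_i, input matches m_ij, tolerance epsilon) is assumed here for illustration and is not the paper's own.

\[
\min_{X}\ E_s(X) \quad \text{subject to} \quad \|\pi_i(x_j) - m_{ij}\|_2 \le \varepsilon \quad \forall (i,j),
\]

which the barrier method replaces with a sequence of unconstrained subproblems of the form

\[
\min_{X}\ E_s(X) \;-\; \mu_k \sum_{(i,j)} \log\!\bigl(\varepsilon^2 - \|\pi_i(x_j) - m_{ij}\|_2^2\bigr), \qquad \mu_k \to 0,
\]

where each subproblem is typically warm-started from the previous solution as the barrier weight decreases.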

Paper:

PDF

Images:


Figure 1: Starting from a converged bundle adjustment, our approach (left) searches for a new spatial position of the 3D point while guaranteeing that the reprojection error is bounded, i.e., the matches are maintained within a disk around the input matches. On the other hand, constraining the smoothing within a ball around the initial spatial position (right) can lead to larger reprojection errors, as the shape of the corresponding projection (planar ellipses) is not taken into account.


Figure 2: A zoom on the ear model (left) illustrates the shrinking effect of Laplacian regularization (middle, blue). Constrained smoothing (right, blue) is more robust to such artifacts. In both results, the original data is shown in orange.


Figure 3: A noisy point cloud (left-top) is processed using BA with Laplacian regularization smoothing (middle-top) and BA constrained smoothing (right-top); all views are shown in splatting mode. The middle row shows the reprojection error for the same view. The bottom row shows a zoom on the corresponding point cloud data.


Figure 4: Illustration of our method on a large data set (200K points). Image correspondences across 56 views were perturbed by Gaussian noise with unit variance and a peak of 3, which yields the noisy reconstruction (left). The result of our approach is shown on the right. The middle image shows a zoom on the elephant head. All views are shown in splatting mode.


Figure 5: Sample images (left) out of a set of six wide-baseline images were used to generate a quasi-dense point cloud (middle) using the propagation approach in [5]. Our result (right) shows an overall quality improvement of the point cloud. Point clouds are shown in splatting mode.