In this paper, we introduce a spherical embedding technique to position a given set of silhouettes of an object as observed from a set of cameras arbitrarily positioned around the object. Similar to previous works, we assume that the object silhouettes are the only visual cues provided, and thus traditional structure from motion (SfM) techniques based on common feature correspondences cannot be applied successfully. Our technique estimates dissimilarities among the silhouettes and embeds them directly in the rotation space SO(3). The embedding is obtained by an optimization scheme over the rotations, represented with exponential maps. Since the measure of inter-silhouette dissimilarity contains many outliers, our key idea is to perform the embedding using only a subset of the estimated dissimilarities. We present a technique that carefully screens for inlier distances and embeds the scaled pairwise dissimilarities in a spherical space diffeomorphic to SO(3). We show that our method outperforms spherical multi-dimensional scaling (MDS) embedding, demonstrate its performance on various multi-view sets, and highlight its robustness to outliers.
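To make the idea concrete, below is a minimal sketch, not the authors' implementation, of how pairwise dissimilarities can be embedded in SO(3) via exponential maps: rotations are parameterized as axis-angle vectors and optimized so that the geodesic (angular) distance between each pair of rotations matches the scaled dissimilarity, using only pairs flagged as inliers. The names embed_rotations, inlier_mask, and scale are illustrative assumptions, not from the paper.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def embed_rotations(D, inlier_mask, scale=1.0, seed=0):
    """Sketch: fit rotations whose pairwise angles match scale * D[i, j]
    for inlier pairs only. D is an (n, n) dissimilarity matrix and
    inlier_mask is an (n, n) boolean matrix (hypothetical inputs)."""
    n = D.shape[0]
    rng = np.random.default_rng(seed)
    x0 = 0.1 * rng.standard_normal(3 * n)   # initial exponential-map (axis-angle) vectors

    # keep only the screened inlier pairs
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
             if inlier_mask[i, j]]

    def residuals(x):
        rots = R.from_rotvec(x.reshape(n, 3))  # exponential map -> SO(3)
        res = []
        for i, j in pairs:
            # geodesic distance on SO(3): angle of the relative rotation
            ang = (rots[i].inv() * rots[j]).magnitude()
            res.append(ang - scale * D[i, j])
        return np.array(res)

    sol = least_squares(residuals, x0)
    return R.from_rotvec(sol.x.reshape(n, 3))

Since only relative distances are constrained, such an embedding is determined only up to a global rotation, so any solution would still need to be aligned to a reference frame (e.g., a reference camera) before evaluation.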
@inproceedings{littwin2015spherical,
  author    = {Etai Littwin and Hadar Averbuch-Elor and Daniel Cohen-Or},
  title     = {Spherical Embedding of Inlier Silhouette Dissimilarities},
  booktitle = {Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on},
  pages     = {to appear},
  year      = {2015},
}
To appear later...
We thank Thomas Lewiner for his deep and insightful comments on exponential maps. This work was supported by the Israel Science Foundation and The Yitzhak and Chaya Weinstein Research Institute for Signal Processing.
Last update to the page: April 12, 2015.