Co-segmentation for Space-Time Co-located Collections
Hadar Averbuch-Elor       Johannes Kopf       Tamir Hazan       Daniel Cohen-Or
Tel Aviv University       Facebook       Technion       Tel Aviv University
Appearing in The Visual Computer 2018

We present a co-segmentation technique for space-time co-located image collections. These prevalent collections capture various dynamic events, usually by multiple photographers, and may contain multiple co-occurring objects that are not necessarily part of the intended foreground object, resulting in ambiguities for traditional co-segmentation techniques. Thus, to disambiguate what the common foreground object is, we introduce a weakly supervised technique, where we assume only a small seed, given in the form of a single segmented image. We take a distributed approach, in which local belief models are propagated and reinforced with similar images. Our technique progressively expands the foreground and background belief models across the entire collection. The technique exploits the power of the entire set of images without building a global model, and thus successfully overcomes large variability in the appearance of the common foreground object. We demonstrate that our method outperforms previous co-segmentation techniques on challenging space-time co-located collections, including dense benchmark datasets that were adapted for our novel problem setting.
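
The following is a minimal, hypothetical sketch (in Python, not the released MATLAB implementation) of the propagation idea described above: starting from the per-region foreground beliefs of a single segmented seed image, each unlabeled image receives beliefs from the most similar already-labeled image, so that foreground and background labels expand progressively across the collection. The region features, the Gaussian similarity, and all function names are illustrative assumptions; the actual method propagates and reinforces richer local belief models.

import numpy as np

def similarity(f1, f2, sigma=1.0):
    # Gaussian affinity between two region feature vectors.
    return np.exp(-np.linalg.norm(f1 - f2) ** 2 / (2.0 * sigma ** 2))

def transfer_beliefs(src_feats, src_beliefs, dst_feats, sigma=1.0):
    # Give each target region the similarity-weighted average belief of the
    # source regions (a soft nearest-neighbor label transfer).
    beliefs = np.zeros(len(dst_feats))
    for i, f in enumerate(dst_feats):
        w = np.array([similarity(f, g, sigma) for g in src_feats])
        beliefs[i] = np.dot(w, src_beliefs) / (w.sum() + 1e-12)
    return beliefs

def propagate(images, seed_idx, seed_beliefs, order):
    # images: list of (num_regions, feature_dim) arrays, one per image.
    # Beliefs expand outward from the seed; each image inherits beliefs from
    # the already-labeled image whose mean feature is closest to its own.
    beliefs = {seed_idx: seed_beliefs}
    for idx in order:
        labeled = list(beliefs)
        dists = [np.linalg.norm(images[idx].mean(axis=0) - images[j].mean(axis=0))
                 for j in labeled]
        src = labeled[int(np.argmin(dists))]
        beliefs[idx] = transfer_beliefs(images[src], beliefs[src], images[idx])
    return beliefs

# Toy usage: 4 images with 5 regions each; regions 0-1 of image 0 form the
# seed foreground. Thresholding the propagated beliefs yields binary masks.
rng = np.random.default_rng(0)
images = [rng.normal(size=(5, 8)) for _ in range(4)]
seed = np.array([1.0, 1.0, 0.0, 0.0, 0.0])
beliefs = propagate(images, seed_idx=0, seed_beliefs=seed, order=[1, 2, 3])
masks = {i: b > 0.5 for i, b in beliefs.items()}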

Resources

Paper (PDF)
Supplementary Material (PDF)
MATLAB code
Datasets
Results

BibTeX Reference

@article{elor2017coseg,
  title={Co-segmentation for Space-Time Co-located Collections},
  author={Averbuch-Elor, Hadar and Kopf, Johannes and Hazan, Tamir and Cohen-Or, Daniel},
  journal={arXiv preprint arXiv:1701.08931},
  year={2017}
}

Page last updated: February 14, 2017.