SSSC-AM: A Unified Framework for Video Co-Segmentation by Structured Sparse Subspace Clustering with Appearance and Motion Features

Video co-segmentation refers to the task of jointly segmenting common objects appearing in a given group of videos. In practice, high-dimensional data such as videos can be conceptually thought of as being drawn from a union of subspaces corresponding to categories rather than from a smooth manifold. Therefore, segmenting data into their respective subspaces, known as subspace clustering, finds widespread application in computer vision, including co-segmentation. State-of-the-art subspace clustering methods solve the problem in two steps: first, an affinity matrix is built from the data using appearance features or motion patterns; second, the data are segmented by applying spectral clustering to the affinity matrix. However, this two-step process is insufficient to obtain an optimal solution, since it ignores the {\em interdependence} between the affinity matrix and the segmentation. In this work, we present a novel unified video co-segmentation framework inspired by the recent Structured Sparse Subspace Clustering ($\mathrm{S^{3}C}$) algorithm, which is built on the {\em self-expressiveness} model; our method yields more consistent segmentation results. To better handle motion trajectories with missing entries, caused by occlusion or by tracked points moving out of frame, we append an extra-dimensional signature to the motion trajectories. Moreover, we reformulate the $\mathrm{S^{3}C}$ algorithm by adding an affine subspace constraint, making it better suited to segmenting rigid motions, which lie in affine subspaces of dimension at most $3$. Our experiments on the MOViCS dataset show that our framework achieves the highest overall performance among the compared baseline algorithms and demonstrate its robustness to heavy noise.
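
For context, a worked sketch of the optimization the abstract refers to, following the standard $\mathrm{S^{3}C}$ formulation from the literature; the trade-off weights $\lambda$ and $\alpha$ and the error model are placeholders and may differ from the paper's exact choices. Under the {\em self-expressiveness} model, each data point is a sparse combination of the other points in its subspace; Sparse Subspace Clustering (SSC) solves

\[
\min_{C} \; \|C\|_{1} + \frac{\lambda}{2}\,\|X - XC\|_{F}^{2}
\quad \text{s.t.} \quad \operatorname{diag}(C) = 0,
\]

while $\mathrm{S^{3}C}$ couples the coding step with the segmentation matrix $Q$ through the subspace-structured $\ell_{1}$ norm

\[
\|C\|_{1,Q} = \sum_{i,j} |c_{ij}| \left( 1 + \frac{\alpha}{2}\, \big\| q^{(i)} - q^{(j)} \big\|_{2}^{2} \right),
\]

where $q^{(i)}$ is the $i$-th row of $Q$. This penalizes connections between points assigned to different clusters, so $C$ and $Q$ are optimized alternately rather than in two independent steps. The affine reformulation mentioned above would add the constraint $C^{\top}\mathbf{1} = \mathbf{1}$, so that each trajectory is an {\em affine} combination of the others, matching rigid motions that lie in affine subspaces of dimension at most $3$.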
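
The two-step pipeline criticized above can be made concrete with a minimal sketch, assuming the columns of X are per-trajectory appearance/motion feature vectors; the Lasso-based coding and the scikit-learn spectral step are generic stand-ins, not the paper's implementation, and the function names are ours:

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def self_expressive_coding(X, lam=0.01):
    # Step 1: for each column x_j, solve a sparse regression
    # x_j ~ X c with c_j = 0, stacking the coefficients into C.
    n = X.shape[1]
    C = np.zeros((n, n))
    for j in range(n):
        idx = [i for i in range(n) if i != j]   # enforce diag(C) = 0
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        model.fit(X[:, idx], X[:, j])
        C[idx, j] = model.coef_
    return C

def two_step_clustering(X, n_clusters, lam=0.01):
    # Step 2: symmetrize |C| into an affinity matrix and
    # apply spectral clustering to obtain the segmentation.
    C = self_expressive_coding(X, lam)
    W = np.abs(C) + np.abs(C).T
    return SpectralClustering(n_clusters=n_clusters,
                              affinity="precomputed").fit_predict(W)

Because the affinity W is fixed before the spectral step, any error in W propagates uncorrected into the segmentation; this is exactly the interdependence between affinity and segmentation that the joint $\mathrm{S^{3}C}$ formulation is designed to exploit.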
