Please find more details of this dataset at https://alex-xun-xu.github.io/ProjectPage/CVPR_18/index.html
3D motion segmentation has been a key problem in computer vision research due to its applications in structure from motion and robotics. Traditional motion segmentation approaches are often evaluated on artificial datasets like Hopkins 155 [1] and its variants. Because the vanishing camera translation effect is often overlooked, these approaches tend to fail in real-world scenes where the camera undergoes significant translation and the scene has complex structure. We propose KT3DMoSeg to address the 3D motion segmentation problem in real-world scenes. The KT3DMoSeg dataset was created on top of the KITTI benchmark [2] by manually selecting 22 sequences and labelling each individual foreground object. We selected sequences with significant camera translation, so footage from cameras mounted on moving cars is preferred. Because we are interested in the interplay of multiple motions, clips with more than 3 motions are also chosen, as long as the moving objects contain enough feature points to form motion hypotheses. In total, 22 short clips, each with 10-20 frames, are chosen for evaluation. We extract dense trajectories from each sequence using the method of [3] and prune out trajectories shorter than 5 frames.
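The trajectory-pruning step above can be sketched as follows. This is a minimal illustration, assuming each trajectory is stored as a list of (frame_index, x, y) observations; the actual on-disk format of the dataset may differ.

```python
def prune_short_trajectories(trajectories, min_frames=5):
    """Keep only trajectories observed in at least `min_frames` frames.

    `trajectories` is assumed to be a list of tracks, where each track
    is a list of (frame_index, x, y) tuples (a hypothetical layout).
    """
    return [t for t in trajectories if len(t) >= min_frames]

# Example: one 3-frame track (pruned) and one 6-frame track (kept).
tracks = [
    [(0, 10.0, 20.0), (1, 10.5, 20.2), (2, 11.0, 20.4)],
    [(0, 5.0, 5.0), (1, 5.1, 5.0), (2, 5.2, 5.1),
     (3, 5.3, 5.1), (4, 5.4, 5.2), (5, 5.5, 5.2)],
]
kept = prune_short_trajectories(tracks)
```

After this filter, only tracks spanning at least 5 frames remain and are passed on for motion hypothesis generation.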
References
[1] R. Tron and R. Vidal. A Benchmark for the Comparison of 3-D Motion Segmentation Algorithms. In CVPR, 2007.
[2] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The KITTI dataset. International Journal of Robotics Research, 2013.
[3] N. Sundaram, T. Brox, and K. Keutzer. Dense point trajectories by GPU-accelerated large displacement optical flow. In ECCV, 2010.