BURST: A Benchmark for Unifying Object Recognition, Segmentation and Tracking in Video

Multiple existing benchmarks involve tracking and segmenting objects in video, e.g. Video Object Segmentation (VOS) and Multi-Object Tracking and Segmentation (MOTS), but there is little interaction between them due to the use of disparate benchmark datasets and metrics (e.g. J&F, mAP, sMOTSA). As a result, published works usually target a particular benchmark and are not easily comparable to one another. We believe that the development of generalized methods that can tackle multiple tasks requires greater cohesion among these research sub-communities. In this paper, we aim to facilitate this by proposing BURST, a dataset which contains thousands of diverse videos with high-quality object masks, and an associated benchmark with six tasks involving object tracking and segmentation in video. All tasks are evaluated using the same data and comparable metrics, which enables researchers to consider them in unison and hence more effectively pool knowledge from different methods across different tasks. Additionally, we demonstrate several baselines for all tasks and show that approaches for one task can be applied to another with a quantifiable and explainable performance difference. Dataset annotations and evaluation code are available at: https://github.com/Ali2500/BURST-benchmark.
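The HOTA metric reported in the results below is defined (Luiten et al.) as the geometric mean of detection accuracy and association accuracy at a given localization threshold. A minimal illustrative sketch of that combination, not the official TrackEval implementation used by the benchmark:

```python
import math

def hota_at_threshold(tp, fn, fp, ass_scores):
    """Illustrative HOTA score at a single localization threshold.

    tp, fn, fp: counts of true-positive, missed, and spurious detections.
    ass_scores: per-true-positive association scores in [0, 1].
    """
    # DetA: detection accuracy (Jaccard index over detections)
    det_a = tp / (tp + fn + fp)
    # AssA: mean association accuracy over true positives
    ass_a = sum(ass_scores) / len(ass_scores)
    # HOTA balances detection and association via the geometric mean
    return math.sqrt(det_a * ass_a)

# Example: 8 TPs, 1 miss, 1 false positive, mixed association quality
score = hota_at_threshold(8, 1, 1, [1.0, 0.5])  # sqrt(0.8 * 0.75) ≈ 0.775
```

The full metric additionally averages this score over a range of localization thresholds; see the TrackEval repository for the reference implementation.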


Results from the Paper


Ranked #4 on Long-tail Video Object Segmentation on BURST-val (using extra training data)

Task: Long-tail Video Object Segmentation | Dataset: BURST-val

Model          Metric        Value   Global Rank
Box Tracker    HOTA (all)     8.2    #4
Box Tracker    mAP (all)      1.4    #4
Box Tracker    HOTA (com)    27.0    #4
Box Tracker    mAP (com)      3.0    #4
Box Tracker    HOTA (unc)     3.6    #4
Box Tracker    mAP (unc)      0.9    #4
STCN Tracker   HOTA (all)     5.5    #5
STCN Tracker   mAP (all)      0.9    #5
STCN Tracker   HOTA (com)    17.5    #5
STCN Tracker   mAP (com)      0.7    #5
STCN Tracker   HOTA (unc)     2.5    #5
STCN Tracker   mAP (unc)      0.6    #5
