Towards Streaming Perception

ECCV 2020  ·  Mengtian Li, Yu-Xiong Wang, Deva Ramanan

Embodied perception refers to the ability of an autonomous agent to perceive its environment so that it can (re)act. The responsiveness of the agent is largely governed by the latency of its processing pipeline. While past work has studied the algorithmic trade-off between latency and accuracy, there has not been a clear metric to compare different methods along the Pareto optimal latency-accuracy curve. We point out a discrepancy between standard offline evaluation and real-time applications: by the time an algorithm finishes processing a particular frame, the surrounding world has changed. To these ends, we present an approach that coherently integrates latency and accuracy into a single metric for real-time online perception, which we refer to as "streaming accuracy". The key insight behind this metric is to jointly evaluate the output of the entire perception stack at every time instant, forcing the stack to consider the amount of streaming data that should be ignored while computation is occurring. More broadly, building upon this metric, we introduce a meta-benchmark that systematically converts any single-frame task into a streaming perception task. We focus on the illustrative tasks of object detection and instance segmentation in urban video streams, and contribute a novel dataset with high-quality and temporally-dense annotations. Our proposed solutions and their empirical analysis demonstrate a number of surprising conclusions: (1) there exists an optimal "sweet spot" that maximizes streaming accuracy along the Pareto optimal latency-accuracy curve, (2) asynchronous tracking and future forecasting naturally emerge as internal representations that enable streaming perception, and (3) dynamic scheduling can be used to overcome temporal aliasing, yielding the paradoxical result that latency is sometimes minimized by sitting idle and "doing nothing".
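The key mechanic behind streaming accuracy is that, at every evaluation timestamp, the stack is judged on whatever output it has most recently emitted, so a slow-but-accurate detector is penalized for describing a world that has since changed. Below is a minimal sketch of this pairing step, assuming predictions and ground truth are given as sorted timestamp lists; the function name pair_streaming_outputs is illustrative and not the paper's released benchmark toolkit.

```python
import bisect

def pair_streaming_outputs(pred_times, gt_times):
    """Pair each ground-truth timestamp with the most recent prediction.

    pred_times: sorted times at which the perception stack finished
                processing and emitted an output (frame time + latency).
    gt_times:   sorted evaluation timestamps (every annotated frame).

    Returns one prediction index per ground-truth timestamp, or None
    when no output has been emitted yet at that instant.
    """
    pairs = []
    for t in gt_times:
        # Index of the last prediction emitted at or before time t;
        # this is the output the agent would actually be acting on.
        i = bisect.bisect_right(pred_times, t) - 1
        pairs.append(i if i >= 0 else None)
    return pairs

# Example: a detector with ~150 ms latency on a 10 Hz stream.
# Outputs land at 0.15, 0.25, 0.40 s; ground truth is sampled at 0.0-0.3 s.
print(pair_streaming_outputs([0.15, 0.25, 0.40], [0.0, 0.1, 0.2, 0.3]))
# -> [None, None, 0, 1]
```

Task accuracy (e.g., detection AP) is then computed over these time-shifted prediction/ground-truth pairs, which is what couples latency and accuracy into a single number.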


Datasets


Introduced in the Paper:

Argoverse-HD

Used in the Paper:

MS COCO, Argoverse

Results from the Paper


Task                          Dataset                                Model                          Metric   Value   Global Rank
Real-Time Object Detection    Argoverse-HD (Detection-Only, Test)    Official challenge baseline    AP       13.61   # 3
Real-Time Object Detection    Argoverse-HD (Detection-Only, Val)     Official challenge baseline    AP       14.91   # 2
Real-Time Object Detection    Argoverse-HD (Full-Stack, Test)        Official challenge baseline    AP       21.06   # 3
Real-Time Object Detection    Argoverse-HD (Full-Stack, Val)         Official challenge baseline    AP       21.06   # 2

None of these entries is listed as using extra training data.

Methods


No methods listed for this paper.