Our method leverages an off-the-shelf object detector to identify visual objects in unlabeled images; language queries for these objects are then obtained in an unsupervised fashion via a pseudo-query generation module.
Spatial redundancy widely exists in visual recognition tasks, i.e., discriminative features in an image or video frame usually correspond to only a subset of pixels, while the remaining regions are irrelevant to the task at hand.
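To make this concrete, here is a minimal NumPy sketch (a hypothetical illustration, not the paper's actual method) of exploiting spatial redundancy: a feature map is treated as a grid of per-location features, a simple saliency score (the L2 norm, an assumption chosen here for illustration) ranks the locations, and only the top-k most discriminative positions are kept for downstream computation.

```python
import numpy as np

# Hypothetical illustration: keep only the most salient spatial
# locations of a feature map, discarding the redundant regions.
rng = np.random.default_rng(0)
feat = rng.normal(size=(14, 14, 64))          # H x W x C feature map

saliency = np.linalg.norm(feat, axis=-1)      # per-location L2 norm as a toy saliency score
k = 49                                        # keep ~25% of the 196 locations
flat_idx = np.argsort(saliency.ravel())[-k:]  # indices of the top-k salient positions

mask = np.zeros(saliency.size, dtype=bool)
mask[flat_idx] = True
mask = mask.reshape(saliency.shape)           # boolean H x W keep-mask

kept = feat[mask]                             # (k, C) retained features
print(kept.shape)                             # (49, 64)
```

Processing only the `kept` subset rather than all H x W locations is the source of the efficiency gain: compute scales with k instead of with the full spatial resolution.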
Recent works have shown that the computational efficiency of video recognition can be significantly improved by reducing spatial redundancy.
In this paper, we explore spatial redundancy in video recognition with the aim of improving computational efficiency.
Reusing features in deep networks through dense connectivity is an effective way to achieve high computational efficiency.
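The dense-connectivity pattern referred to here can be sketched as follows; this is a toy NumPy version (the layer definition and growth rate are illustrative assumptions, not the actual architecture), in which each layer receives the concatenation of all earlier feature maps, so features are computed once and reused by every subsequent layer.

```python
import numpy as np

# Hypothetical sketch of dense connectivity (DenseNet-style feature
# reuse): every layer's input is the concatenation of all preceding
# layers' outputs.
rng = np.random.default_rng(0)

def dense_layer(x, w):
    """Toy 'layer': linear map followed by ReLU."""
    return np.maximum(x @ w, 0.0)

growth = 8                                   # channels added by each layer
x = rng.normal(size=(10, 16))                # batch of 10 samples, 16 channels
features = [x]
for _ in range(3):
    inp = np.concatenate(features, axis=1)   # reuse ALL earlier features
    w = rng.normal(size=(inp.shape[1], growth))
    features.append(dense_layer(inp, w))

out = np.concatenate(features, axis=1)
print(out.shape)                             # (10, 16 + 3*8) = (10, 40)
```

Because each layer only needs to produce a small number of new channels (`growth`) on top of the reused ones, the network stays compact while every layer still has access to the full set of earlier features.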