Real-time object detection is the task of performing object detection with low-latency inference while maintaining a baseline level of accuracy.
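Whether a detector counts as "real-time" is usually judged by its throughput in frames per second (FPS). A minimal sketch of how that is measured, using a hypothetical stand-in `detect` function rather than any particular model:

```python
import time

def detect(frame):
    # Stand-in for a real detector's forward pass (assumption: any
    # callable that takes a frame and returns a list of detections).
    time.sleep(0.02)  # simulate ~20 ms of inference per frame
    return []

def measure_fps(frames, detector):
    """Average frames per second over a batch of frames."""
    start = time.perf_counter()
    for frame in frames:
        detector(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

fps = measure_fps([None] * 50, detect)
print(f"{fps:.1f} FPS")  # ~30 FPS is a commonly cited real-time threshold
```

The exact FPS threshold for "real-time" varies by application; autonomous driving and video analytics often target 30 FPS or more.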
We propose a deep feature pyramid architecture that exploits the inherent properties of features extracted by convolutional networks, capturing more generic image features (such as edges and colors).
An object detection framework plays a crucial role in autonomous driving.
To address this problem, we propose a multiple receptive field and small-object-focusing weakly-supervised segmentation network (MRFSWSnet) to achieve fast object detection.
We then introduce a proposal generation network to predict 3D region proposals from the generated maps and further extrude objects of interest from the whole point cloud.
In this paper, we investigate the performance degradation of spiking neural networks (SNNs) in the much more challenging task of object detection.
MMNet has two major advantages: 1) for a group of successive pictures (GOP) in a compressed video stream, it runs the computationally heavy network only on I-frames, i.e., a few reference frames in the video, while a lightweight memory network generates features for the prediction frames (P-frames); 2) rather than building an additional network to explicitly model motion between frames, it directly exploits the motion vectors and residual errors already encoded in the compressed video.
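The I-frame/P-frame split described above amounts to a per-frame dispatch within each GOP. A minimal sketch of that routing, where `heavy_net`, `memory_net`, and the frame dictionaries are hypothetical stand-ins rather than MMNet's actual components:

```python
def heavy_net(frame):
    # Stand-in for the full feature network run on I-frames (assumption).
    return {"features": f"deep(frame{frame['id']})"}

def memory_net(prev_features, motion_vector, residual):
    # Stand-in for the lightweight memory network that propagates features
    # to P-frames using the stream's motion vectors and residuals (assumption).
    return {"features": f"warp({prev_features['features']}, {motion_vector})"}

def run_gop(frames):
    """Dispatch one GOP: heavy network for I-frames,
    lightweight feature propagation for P-frames."""
    features = None
    outputs = []
    for frame in frames:
        if frame["type"] == "I":
            features = heavy_net(frame)
        else:
            # P-frame: reuse motion info already encoded in the bitstream
            # instead of re-extracting features from pixels.
            features = memory_net(features, frame["mv"], frame["residual"])
        outputs.append(features)
    return outputs

gop = [{"id": 0, "type": "I"},
       {"id": 1, "type": "P", "mv": "mv1", "residual": "r1"},
       {"id": 2, "type": "P", "mv": "mv2", "residual": "r2"}]
outs = run_gop(gop)
```

The design point is cost asymmetry: the expensive network runs once per GOP, and every other frame pays only the small propagation cost.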
This paper introduces a live object recognition system that serves as an aid for blind users.
On-board real-time vehicle detection is of great significance for UAVs and other embedded mobile platforms.
This paper focuses on YOLO-LITE, a real-time object detection model developed to run on portable devices such as a laptop or cellphone lacking a Graphics Processing Unit (GPU).