T-YOLO: Tiny Vehicle Detection Based on YOLO and Multi-Scale Convolutional Neural Networks

Solving real-life problems in smart city applications with deep neural networks, such as parking occupancy detection, requires fine-tuning these networks on domain-specific data. For large parking lots, it is desirable to use a single zenithal (overhead) camera mounted at a high vantage point, so that one camera can monitor the entire parking area. Today's most popular object detection models, such as YOLO, achieve good precision at real-time speed. However, when the target data differ from general-purpose datasets such as COCO and ImageNet, there remains a large margin for improvement. In this paper, we propose a modified, yet lightweight, deep object detection model based on the YOLO-v5 architecture that can detect large, small, and tiny objects. Specifically, we propose a multi-scale mechanism that learns deep discriminative feature representations at different scales and automatically determines the most suitable scales for detecting the objects in a scene (in our case, vehicles). The proposed multi-scale module slightly reduces the number of trainable parameters compared to the original YOLO-v5 architecture, from 7.28 million in the YOLO-v5-S profile to 7.26 million in our model, while the experimental results show that precision is improved by a large margin. In addition, our model infers at 30 fps, improving detection speed over the YOLO-v5-L/X profiles, and improves tiny vehicle detection performance by 33% compared to the YOLO-v5-X profile.
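To make the described mechanism concrete, below is a minimal PyTorch sketch of one way such a multi-scale module could work: parallel convolution branches with different receptive fields, fused by learned softmax weights so that training can emphasise the scales best suited to the objects in the scene. This is an illustrative sketch under our own assumptions, not the authors' implementation; the `MultiScaleBlock` name, the choice of kernel sizes, and all hyper-parameters are hypothetical.

```python
# Hypothetical sketch of a multi-scale block in the spirit described above.
# Not taken from the T-YOLO paper or its code.
import torch
import torch.nn as nn


class MultiScaleBlock(nn.Module):
    """Parallel convolutions at several receptive-field scales,
    combined by learnable per-scale fusion weights."""

    def __init__(self, in_ch: int, out_ch: int, scales=(1, 3, 5)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.SiLU(),  # YOLO-v5 uses SiLU activations
            )
            for k in scales
        )
        # One logit per scale; softmax turns them into fusion weights,
        # letting training pick the most suitable receptive fields.
        self.scale_logits = nn.Parameter(torch.zeros(len(scales)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.scale_logits, dim=0)
        outs = [branch(x) for branch in self.branches]
        return sum(w * o for w, o in zip(weights, outs))


if __name__ == "__main__":
    block = MultiScaleBlock(64, 64)
    feat = torch.randn(1, 64, 80, 80)  # e.g. a P3-level feature map
    print(block(feat).shape)  # torch.Size([1, 64, 80, 80])
```

In a YOLO-v5-style network, a block like this could replace a standard convolution stage in the neck; after training, the gate weights indicate which scales dominate for a given deployment (e.g., tiny vehicles seen from an overhead camera).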

Datasets

PKLot

Results from the Paper


Ranked #1 on Parking Space Occupancy on PKLot (using extra training data)
Task                     Dataset  Model   Metric Name  Metric Value  Global Rank  Uses Extra Training Data
Parking Space Occupancy  PKLot    T-YOLO  Average-mAP  0.9985       #1           Yes

Methods


No methods listed for this paper.