Automatic Traffic Sign Detection and Recognition Using SegU-Net and a Modified Tversky Loss Function With L1-Constraint

Traffic sign detection is a central component of autonomous vehicle technology. Recent advances in deep learning have motivated researchers to use neural networks for this task. In this paper, we cast traffic sign detection as an image segmentation problem and propose a deep convolutional neural network-based approach to solve it. To this end, we propose a new network, SegU-Net, formed by merging two state-of-the-art segmentation architectures, SegNet and U-Net, to detect traffic signs in video sequences. To train the network, we use the Tversky loss function constrained by an L1 term instead of the intersection-over-union loss traditionally used for segmentation networks. A separate network, inspired by the VGG-16 architecture, classifies the detected signs. The networks are trained on the challenge-free sequences of the CURE-TSD dataset. Our proposed network outperforms state-of-the-art object detection networks, such as Faster R-CNN Inception ResNet V2 and R-FCN ResNet 101, by a large margin, obtaining a precision of 94.60% and a recall of 80.21%, the current state of the art on this part of the dataset. We also test the network on the German Traffic Sign Detection Benchmark (GTSDB) dataset, where it achieves a precision of 95.29% and a recall of 89.01%, on a par with the aforementioned object detection networks. These results demonstrate the generalizability of the proposed architecture and its suitability for robust traffic sign detection in autonomous vehicles.
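The abstract does not give the exact form of the modified loss. As a minimal sketch, assuming the standard soft Tversky index for binary segmentation, TI = TP / (TP + α·FN + β·FP), and reading the L1 constraint as a mean-absolute-error term between the predicted and ground-truth masks (this reading, and the coefficient `lam`, are assumptions, not the paper's stated formulation), a PyTorch version might look like:

```python
import torch

def tversky_l1_loss(pred, target, alpha=0.7, beta=0.3, lam=1.0, eps=1e-7):
    """Soft Tversky loss plus an assumed L1 term (illustrative sketch only).

    pred:   predicted foreground probabilities in [0, 1], shape (N, H, W)
    target: binary ground-truth masks, same shape
    alpha:  weight on false negatives (alpha > beta penalizes missed signs harder)
    beta:   weight on false positives
    lam:    hypothetical weight on the L1 term
    """
    p = pred.reshape(pred.size(0), -1)        # flatten each mask to a vector
    t = target.reshape(target.size(0), -1)

    tp = (p * t).sum(dim=1)                   # soft true positives
    fn = ((1 - p) * t).sum(dim=1)             # soft false negatives
    fp = (p * (1 - t)).sum(dim=1)             # soft false positives

    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    l1 = (p - t).abs().mean(dim=1)            # assumed L1 term: mean |pred - target|

    return ((1 - tversky) + lam * l1).mean()  # average over the batch
```

Choosing alpha > beta makes the loss penalize false negatives more heavily than false positives, which is the usual motivation for preferring the Tversky loss over a plain IoU loss on small, sparse objects such as traffic signs.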


Datasets

CURE-TSD (challenge-free sequences)
German Traffic Sign Detection Benchmark (GTSDB)

Results from the Paper

CURE-TSD (challenge-free sequences): 94.60% precision, 80.21% recall
GTSDB: 95.29% precision, 89.01% recall

Methods

SegU-Net (SegNet merged with U-Net), Tversky loss with L1 constraint, VGG-16-based sign classifier