Rethinking Lightweight Convolutional Neural Networks for Efficient and High-quality Pavement Crack Detection

13 Sep 2021  ·  Kai Li, Jie Yang, Siwei Ma, Bo Wang, Shanshe Wang, Yingjie Tian, Zhiquan Qi

Pixel-level road crack detection has always been a challenging task in intelligent transportation systems. Due to external environmental factors such as weather and lighting, pavement cracks often present low contrast, poor continuity, and large variation in length and width. However, most existing studies pay little attention to crack data captured under different conditions. Meanwhile, recent algorithms based on deep convolutional neural networks (DCNNs) have produced cutting-edge models for crack detection. Nevertheless, they usually pursue complex models for good performance while ignoring detection efficiency in practical applications. In this article, to address the first issue, we collected two new databases (i.e., Rain365 and Sun520), captured on rainy and sunny days respectively, which enrich the data available to the open-source community. For the second issue, we reconsider how to improve detection efficiency while retaining strong performance, and propose a lightweight encoder-decoder architecture termed CarNet. Specifically, we introduce a novel olive-shaped structure for the encoder network, along with a lightweight multi-scale block and a new up-sampling method in the decoder network. Extensive experiments show that our model achieves a better balance of detection performance and efficiency than previous models. In particular, on the Sun520 dataset, CarNet advances the state-of-the-art ODS F-score from 0.488 to 0.514, while running at 104 frames per second, orders of magnitude faster than some recent DCNN-based algorithms specially designed for crack detection.
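The abstract names the architectural ingredients without giving code. As a rough illustration only, the following PyTorch sketch shows one plausible shape for such a lightweight encoder-decoder crack detector: the class names, channel widths, dilation rates, and the bilinear up-sampling are all assumptions made here for clarity, not the paper's actual CarNet design.

```python
# Hypothetical sketch only: the paper's abstract does not include code, and
# every layer choice below is an illustrative assumption, not CarNet itself.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleBlock(nn.Module):
    """Parallel dilated 3x3 convolutions fused by a 1x1 conv -- one common
    way to build a lightweight multi-scale block (an assumption, not the
    paper's specific design)."""
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d, bias=False)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1, bias=False)
        self.bn = nn.BatchNorm2d(channels)

    def forward(self, x):
        x = torch.cat([b(x) for b in self.branches], dim=1)
        return F.relu(self.bn(self.fuse(x)))


class TinyCrackNet(nn.Module):
    """Minimal encoder-decoder for binary (crack / background) segmentation.
    The 'olive-shaped' encoder of the paper is approximated here by widening
    then narrowing the channel counts (16 -> 64 -> 16)."""
    def __init__(self):
        super().__init__()
        def conv(cin, cout, stride=1):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, stride=stride, padding=1, bias=False),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True))
        self.enc1 = conv(3, 16, stride=2)    # 1/2 resolution
        self.enc2 = conv(16, 64, stride=2)   # 1/4 resolution, widest stage
        self.enc3 = conv(64, 16, stride=2)   # 1/8 resolution, narrow again
        self.context = MultiScaleBlock(16)
        self.head = nn.Conv2d(16, 1, 1)      # per-pixel crack logit

    def forward(self, x):
        h, w = x.shape[-2:]
        x = self.enc3(self.enc2(self.enc1(x)))
        x = self.context(x)
        # Plain bilinear up-sampling back to input size; the paper proposes a
        # new up-sampling method, which is not reproduced here.
        return F.interpolate(self.head(x), size=(h, w), mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    logits = TinyCrackNet()(torch.randn(1, 3, 256, 256))
    print(logits.shape)  # torch.Size([1, 1, 256, 256])
```

The widen-then-narrow channel progression (16 → 64 → 16) is only meant to echo the "olive-shaped" encoder idea; the paper's own block designs and up-sampling method differ.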

Datasets

Rain365 · Sun520
Results from the Paper


Crack detection on Sun520: ODS F-score 0.514 (previous state of the art: 0.488), at a detection speed of 104 frames per second.

Methods


CarNet: a lightweight encoder-decoder network combining an olive-shaped encoder structure, a lightweight multi-scale block, and a new up-sampling method in the decoder.