
Self-Configurable Stabilized Real-Time Detection Learning for Autonomous Driving Applications

Guaranteeing real-time and accurate object detection simultaneously is paramount in autonomous driving environments. However, existing object detection neural network systems are characterized by a tradeoff between computation time and accuracy, making it essential to optimize this tradeoff. Fortunately, in many autonomous driving environments, images arrive as a continuous stream, providing an opportunity to use optical flow. In this paper, we improve the performance of an object detection neural network by utilizing optical flow estimation. In addition, we propose a Lyapunov optimization framework for time-average performance maximization subject to stability. It adaptively determines whether to use optical flow to suit the dynamic vehicle environment, thereby ensuring the vehicle's queue stability and time-average maximum performance simultaneously. To verify the key ideas, we conduct numerical experiments with various object detection neural networks and optical flow estimation networks. In addition, we demonstrate the self-configurable stabilized detection with YOLOv3-tiny and FlowNet2-S, a real-time object detection network and an optical flow estimation network, respectively. In the demonstration, our proposed framework improves accuracy by 3.02% and the number of detected objects by 59.6%, while maintaining queue stability across computing capabilities.
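The per-frame decision described above follows the standard Lyapunov drift-plus-penalty pattern: at each time slot, the controller picks the action that minimizes a weighted sum of the (negated) performance and the queue-weighted processing cost. The sketch below illustrates that pattern only; the mode names, accuracy figures, processing times, and the service model are hypothetical stand-ins, not values from the paper.

```python
# Hedged sketch of a Lyapunov drift-plus-penalty controller for choosing
# between full object detection and cheaper optical-flow propagation.
# All numeric parameters below are illustrative assumptions.

def choose_mode(Q, V, modes):
    """Return the mode name minimizing the drift-plus-penalty expression.

    Q     : current backlog of the frame-processing queue
    V     : tradeoff weight (larger V favors accuracy over queue stability)
    modes : list of (name, accuracy, proc_time) tuples -- assumed format
    """
    # Penalty term rewards accuracy; drift term penalizes processing time
    # proportionally to the current backlog Q.
    best = min(modes, key=lambda m: -V * m[1] + Q * m[2])
    return best[0]

def simulate(arrivals, V=10.0):
    """Run the controller over a sequence of per-slot frame arrivals."""
    # Hypothetical per-mode (accuracy, processing time) figures:
    # full detection is accurate but slow; flow propagation is cheap.
    modes = [("detector", 0.80, 1.5),
             ("optical_flow", 0.65, 0.5)]
    Q, trace = 0.0, []
    for a in arrivals:
        name = choose_mode(Q, V, modes)
        _, acc, t = next(m for m in modes if m[0] == name)
        # Simple queue update: arrivals add backlog, service drains 1/t per slot.
        Q = max(Q + a - 1.0 / t, 0.0)
        trace.append((name, round(Q, 2)))
    return trace
```

With a small backlog the controller prefers the accurate detector; as the queue grows, the Q-weighted drift term dominates and it switches to the cheaper optical-flow path, which is the self-configuring behavior the abstract describes.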
