HATS: Histograms of Averaged Time Surfaces for Robust Event-based Object Classification

Event-based cameras have recently drawn the attention of the Computer Vision community thanks to their advantages in terms of high temporal resolution, low power consumption and high dynamic range, compared to traditional frame-based cameras. These properties make event-based cameras an ideal choice for autonomous vehicles, robot navigation or UAV vision, among others. However, the accuracy of event-based object classification algorithms, which is of crucial importance for any reliable system working in real-world conditions, is still far behind that of their frame-based counterparts. Two main reasons for this performance gap are: (1) the lack of effective low-level representations and architectures for event-based object classification, and (2) the absence of large real-world event-based datasets. In this paper we address both problems. First, we introduce a novel event-based feature representation together with a new machine learning architecture. Compared to previous approaches, we use local memory units to efficiently leverage past temporal information and build a robust event-based representation. Second, we release the first large real-world event-based dataset for object classification. We compare our method to the state-of-the-art with extensive experiments, showing better classification performance and real-time computation.
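The core idea described in the abstract, accumulating exponentially decayed time surfaces from a local memory of past events and averaging them per spatial cell into a histogram descriptor, can be sketched as follows. This is a minimal, unoptimized illustration, not the authors' implementation; the parameter names (`cell_size`, `rho`, `tau`, `delta_t`) and default values are assumptions chosen for readability.

```python
import numpy as np

def hats_descriptor(events, width, height, cell_size=10, rho=3, tau=1e5, delta_t=1e5):
    """Sketch of a HATS-style descriptor.

    For each incoming event, a local time surface is built from past events
    stored in the cell's local memory (exponential decay exp(-(t - t_j)/tau)
    over a (2*rho+1)^2 spatial neighborhood, within a time window delta_t).
    Surfaces are accumulated per (cell, polarity) and normalized by the
    event count, then concatenated into one histogram vector.

    `events` is an (N, 4) array of (x, y, t, p) rows sorted by timestamp t.
    Parameter names and defaults here are illustrative assumptions.
    """
    n_cx, n_cy = width // cell_size, height // cell_size
    surf = 2 * rho + 1
    hist = np.zeros((n_cy, n_cx, 2, surf, surf))   # accumulated surfaces
    counts = np.zeros((n_cy, n_cx, 2))             # events per cell/polarity
    # local memory: list of past events per (cell, polarity)
    memory = [[[[] for _ in range(2)] for _ in range(n_cx)] for _ in range(n_cy)]
    for x, y, t, p in events:
        cx, cy, pol = int(x) // cell_size, int(y) // cell_size, int(p)
        ts = np.zeros((surf, surf))
        for (xj, yj, tj) in memory[cy][cx][pol]:
            dx, dy = int(xj - x), int(yj - y)
            if abs(dx) <= rho and abs(dy) <= rho and t - tj <= delta_t:
                ts[dy + rho, dx + rho] += np.exp(-(t - tj) / tau)
        hist[cy, cx, pol] += ts
        counts[cy, cx, pol] += 1
        memory[cy][cx][pol].append((x, y, t))
    # normalize each accumulated surface by its cell's event count
    counts = np.maximum(counts, 1)
    hist /= counts[..., None, None]
    return hist.reshape(-1)
```

The resulting fixed-length vector can then be fed to any standard classifier (the paper reports real-time operation, which a practical implementation would achieve by bounding the memory size rather than storing all past events as this sketch does).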

Published at CVPR 2018.



Datasets used in the paper: CIFAR-10, N-Caltech 101, N-ImageNet

Results from the Paper

Task: Robust classification
Dataset: N-ImageNet
Model: HATS
Accuracy (%): 33.41 (global rank #10)

