Event Transformer

11 Apr 2022 · Zhihao Li, M. Salman Asif, Zhan Ma

The event camera is a bio-inspired vision sensor with high dynamic range, high response speed, and low power consumption, and it has recently attracted extensive attention for a wide range of vision tasks. Unlike conventional cameras that output intensity frames at fixed time intervals, an event camera records pixel brightness changes (a.k.a. events) asynchronously (in time) and sparsely (in space). Existing methods often aggregate events that occur within a predefined temporal window for downstream tasks, which overlooks the varying behavior of fine-grained temporal events. This work proposes the Event Transformer to directly process the event sequence in its native vectorized tensor format. It cascades a Local Transformer (LXformer) to exploit local temporal correlation, a Sparse Conformer (SCformer) to embed local spatial similarity, and a Global Transformer (GXformer) to aggregate global information, in series, so as to effectively characterize the temporal and spatial correlations of the raw input events and generate spatiotemporal features for downstream tasks. Extensive experiments compare the Event Transformer with fourteen existing algorithms on five datasets widely used for classification. Quantitative results show that the Event Transformer achieves state-of-the-art classification accuracy with the lowest computational resource requirements, making it practically attractive for event-based vision tasks.
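Since no official implementation is available, the following is a minimal PyTorch sketch of the cascaded LXformer → SCformer → GXformer pipeline described above. All module names, tensor shapes, window sizes, and hyperparameters are illustrative assumptions, not the authors' implementation; in particular, the Sparse Conformer's sparse-attention and convolution details are simplified to a plain encoder block.

```python
# Minimal sketch (not the authors' code): events as an (N, 4) tensor of
# (x, y, t, polarity), grouped into fixed-size local windows. Each of the
# three stages is approximated with a standard TransformerEncoder.
import torch
import torch.nn as nn


class EventTransformerSketch(nn.Module):
    def __init__(self, dim=64, heads=4, window=32, num_classes=10):
        super().__init__()
        self.window = window
        self.embed = nn.Linear(4, dim)  # per-event (x, y, t, p) embedding
        make_layer = lambda: nn.TransformerEncoderLayer(
            dim, heads, dim * 2, batch_first=True
        )
        self.lxformer = nn.TransformerEncoder(make_layer(), 2)  # local temporal correlation
        self.scformer = nn.TransformerEncoder(make_layer(), 2)  # local spatial similarity (simplified)
        self.gxformer = nn.TransformerEncoder(make_layer(), 2)  # global aggregation
        self.head = nn.Linear(dim, num_classes)

    def forward(self, events):
        # events: (B, N, 4) with N divisible by the window size
        b, n, _ = events.shape
        x = self.embed(events)  # (B, N, dim)
        # Local stages: attend within each window of `window` consecutive events.
        x = x.reshape(b * n // self.window, self.window, -1)
        x = self.lxformer(x)
        x = self.scformer(x)
        # Global stage: one token per window, attended across the whole sequence.
        x = x.mean(dim=1).reshape(b, n // self.window, -1)
        x = self.gxformer(x)
        return self.head(x.mean(dim=1))  # (B, num_classes)


if __name__ == "__main__":
    ev = torch.rand(2, 128, 4)                 # 2 samples, 128 events each
    print(EventTransformerSketch()(ev).shape)  # torch.Size([2, 10])
```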
