In recent years, tremendous efforts have been made to advance the state of the art in Natural Language Processing (NLP) and audio recognition.
The results show that event-based cameras are capable of functioning in a space-like, radiative environment with a signal-to-noise ratio of 3.355.
Optical flow is a crucial component of the feature space for early visual processing of dynamic scenes, especially in emerging applications such as self-driving vehicles, drones, and autonomous robots.
This paper presents a new event-based method for detecting and tracking features from the output of an event-based camera.
This paper introduces a framework for gesture recognition that operates on the output of an event-based camera using the computational resources of a mobile phone.
In this paper we introduce the principle of Deep Temporal Networks, which add time to convolutional networks by applying deep integration principles not only to spatial information but also to increasingly large temporal windows.
This paper introduces an unsupervised, time-oriented, event-based machine learning algorithm building on the concept of a hierarchy of temporal descriptors called time surfaces.
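As a rough illustration of the time-surface idea, the sketch below computes an exponentially decayed local descriptor from the most recent event timestamp at each pixel. The function name, sensor size, decay constant `tau`, and toy event stream are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def time_surface(last_ts, x, y, t, radius=3, tau=50e3):
    """Exponentially decayed time surface around event (x, y, t).

    last_ts : 2D array holding the most recent event timestamp per pixel.
    tau     : decay constant, in the same time units as t (hypothetical value).
    Returns a (2*radius+1, 2*radius+1) patch of values in [0, 1].
    """
    patch = last_ts[y - radius:y + radius + 1, x - radius:x + radius + 1]
    # Pixels that never fired hold -inf, so exp() maps them to 0.
    return np.exp(-(t - patch) / tau)

# Toy stream of (x, y, t) events on a small illustrative sensor.
H, W = 16, 16
last_ts = np.full((H, W), -np.inf)  # no events observed yet
events = [(8, 8, 100.0), (9, 8, 150.0), (8, 9, 200.0)]
for x, y, t in events:
    last_ts[y, x] = t          # update the per-pixel timestamp memory
    surf = time_surface(last_ts, x, y, t)
```

The center of each surface is always 1 (the current event), nearby recent events contribute values close to 1, and silent pixels contribute 0, so the patch encodes local spatiotemporal history in a form a clustering stage can consume.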
Compared to previous approaches, we use local memory units to efficiently leverage past temporal information and build a robust event-based representation.
This paper describes a fully spike-based neural network for optical flow estimation from Dynamic Vision Sensor data.
The asynchronous nature of these systems frees computation and communication from the rigid predetermined timing enforced by system clocks in conventional systems.
There has been significant research over the past two decades in developing new platforms for spiking neural computation.