Event-based vision
43 papers with code • 1 benchmark • 9 datasets
An event camera, also known as a neuromorphic camera, silicon retina or dynamic vision sensor, is an imaging sensor that responds to local changes in brightness. Event cameras do not capture images with a shutter as conventional frame cameras do. Instead, each pixel operates independently and asynchronously, reporting brightness changes as they occur and staying silent otherwise. Modern event cameras offer microsecond temporal resolution and a dynamic range of around 120 dB, and suffer less under/overexposure and motion blur than frame cameras.
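The per-pixel behavior described above is commonly modeled with a contrast threshold: a pixel emits an event when its log-intensity has changed by more than a fixed amount since its last event. A minimal sketch of that model, simulating events from a sequence of log-intensity frames (the function name and threshold value are illustrative, not from any particular sensor):

```python
import numpy as np

def generate_events(log_frames, timestamps, threshold=0.2):
    """Idealized event-camera model: each pixel independently emits an
    event whenever its log intensity has changed by more than `threshold`
    since the last event at that pixel.

    Returns a list of (t, x, y, polarity) tuples, polarity = +1 (ON) / -1 (OFF).
    """
    ref = log_frames[0].copy()            # per-pixel reference log intensity
    events = []
    for t, frame in zip(timestamps[1:], log_frames[1:]):
        diff = frame - ref
        on = diff >= threshold            # brightness increased
        off = diff <= -threshold          # brightness decreased
        for polarity, mask in ((1, on), (-1, off)):
            ys, xs = np.nonzero(mask)
            events.extend((t, int(x), int(y), polarity) for x, y in zip(xs, ys))
        ref[on | off] = frame[on | off]   # reset reference only where events fired
    return events
```

A static scene produces no events at all, which is the source of the sparsity and low latency the description mentions; real sensors add per-pixel noise, refractory periods, and threshold mismatch on top of this ideal model.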
Libraries
Use these libraries to find Event-based vision models and implementations.
Latest papers with no code
3D Human Scan With A Moving Event Camera
The experimental results show that the proposed method outperforms conventional frame-based methods in the estimation accuracy of both pose and body mesh.
Re-Interpreting the Step-Response Probability Curve to Extract Fundamental Physical Parameters of Event-based Vision Sensors
Further, we demonstrate that with correct interpretation, fundamental physical parameters such as dark current and RMS noise can be accurately inferred from a collection of S-curves, leading to more accurate parameterization for high-fidelity EVS simulations.
A Neuromorphic Approach to Obstacle Avoidance in Robot Manipulation
To investigate the utility of brain-inspired sensing and data processing, we developed a neuromorphic approach to obstacle avoidance on a camera-equipped manipulator.
Ev-Edge: Efficient Execution of Event-based Vision Algorithms on Commodity Edge Platforms
On several state-of-the-art networks for a range of autonomous navigation tasks, Ev-Edge achieves 1.28x-2.05x improvements in latency and 1.23x-2.15x in energy over an all-GPU implementation on the NVIDIA Jetson Xavier AGX platform for single-task execution scenarios.
Towards Real-Time Fast Unmanned Aerial Vehicle Detection Using Dynamic Vision Sensors
Unmanned Aerial Vehicles (UAVs) are gaining popularity in civil and military applications.
Flow-Based Visual Stream Compression for Event Cameras
The introduced method itself is shown to achieve an average compression ratio of 2.81 on a variety of event-camera datasets with the evaluation configuration used.
Optimising Graph Representation for Hardware Implementation of Graph Convolutional Networks for Event-based Vision
Event-based vision is an emerging research field involving processing data generated by Dynamic Vision Sensors (neuromorphic cameras).
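A common way to feed sparse event streams into a graph convolutional network is to treat each event as a node and connect events that are close in space and time. A minimal sketch of that construction, not the specific representation optimized in the paper above (the function name, radius, and time scaling are illustrative; real pipelines use k-d trees and event subsampling rather than this O(n²) loop):

```python
import numpy as np

def events_to_graph(events, radius=3.0, time_scale=0.001):
    """Build an undirected edge list over events (t, x, y, polarity):
    two events are connected if their spatiotemporal distance is within
    `radius`. `time_scale` rescales timestamps so that time differences
    are commensurate with pixel distances."""
    pts = np.array([(x, y, t * time_scale) for t, x, y, _ in events])
    edges = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if np.linalg.norm(pts[i] - pts[j]) <= radius:
                edges.append((i, j))
    return edges
```

The resulting edge list, together with per-node features such as polarity, is what a GCN layer then aggregates over; hardware implementations care about keeping the node count and edge degree bounded.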
Relating Events and Frames Based on Self-Supervised Learning and Uncorrelated Conditioning for Unsupervised Domain Adaptation
By applying self-supervised learning, the algorithm learns to align the representations of event-based data with those from frame-based camera data, thereby facilitating knowledge transfer. Furthermore, the inclusion of uncorrelated conditioning ensures that the adapted model effectively distinguishes between event-based and conventional data, enhancing its ability to classify event-based images accurately. Through empirical experimentation and evaluation, we demonstrate that our algorithm surpasses existing approaches designed for the same purpose using two benchmarks.
Asynchronous Bioplausible Neuron for Spiking Neural Networks for Event-Based Vision
Spiking Neural Networks (SNNs) offer a biologically inspired approach to computer vision that can lead to more efficient processing of visual data with reduced energy consumption.
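The building block of such SNNs is typically a leaky integrate-and-fire (LIF) neuron: membrane potential leaks toward rest, integrates incoming spikes, and emits an output spike on crossing a threshold. A minimal discrete-time sketch of a single LIF neuron (generic textbook model, not the bioplausible neuron proposed in the paper above; parameter values are illustrative):

```python
import numpy as np

def lif_response(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Single leaky integrate-and-fire neuron. Each step: the membrane
    potential decays exponentially with time constant `tau`, integrates
    the input, and fires (output 1) then resets when it crosses
    `v_thresh`. Returns the binary output spike train."""
    decay = np.exp(-dt / tau)
    v = 0.0
    out = []
    for i in input_current:
        v = v * decay + i          # leak, then integrate input
        if v >= v_thresh:
            out.append(1)          # spike and reset
            v = v_reset
        else:
            out.append(0)
    return out
```

Because the neuron only does work when input arrives and communicates in binary spikes, networks of such units map naturally onto asynchronous event-camera output, which is where the claimed energy savings come from.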
Efficient and Low-Footprint Object Classification using Spatial Contrast
Binarized MicronNet achieves an F1-score of 94.4% using spatial contrast, compared to only 56.3% when using RGB input images.