Search Results for author: Tobi Delbruck

Found 32 papers, 7 papers with code

Exploiting Symmetric Temporally Sparse BPTT for Efficient RNN Training

no code implementations 14 Dec 2023 Xi Chen, Chang Gao, Zuowen Wang, Longbiao Cheng, Sheng Zhou, Shih-Chii Liu, Tobi Delbruck

Implementing online training of RNNs on the edge calls for optimized algorithms for efficient deployment on hardware.

Incremental Learning

Shining light on the DVS pixel: A tutorial and discussion about biasing and optimization

no code implementations 10 Apr 2023 Rui Graça, Brian Mcreynolds, Tobi Delbruck

The operation of the DVS event camera is controlled by the user through adjusting different bias parameters.

Optimal biasing and physical limits of DVS event noise

no code implementations 8 Apr 2023 Rui Graca, Brian Mcreynolds, Tobi Delbruck

Under dim lighting conditions, the output of Dynamic Vision Sensor (DVS) event cameras is strongly affected by noise.

Exploiting Alternating DVS Shot Noise Event Pair Statistics to Reduce Background Activity

no code implementations 7 Apr 2023 Brian Mcreynolds, Rui Graca, Tobi Delbruck

Dynamic Vision Sensors (DVS) record "events" corresponding to pixel-level brightness changes, resulting in data-efficient representation of a dynamic visual scene.
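For intuition, a minimal sketch (not the paper's filter) of how alternating-polarity statistics at a pixel could be used to flag likely shot-noise events; the event tuple layout and the `window_us` parameter are assumptions for illustration.

```python
def flag_alternating_noise(events, window_us=1000):
    """Flag events whose polarity flips relative to the previous event at the
    same pixel within a short window, a signature of DVS shot-noise pairs.

    events: list of (timestamp_us, x, y, polarity) with polarity in {-1, +1}.
    Returns a list of booleans, True where the event is suspected noise.
    """
    last = {}                     # (x, y) -> (timestamp_us, polarity, index)
    is_noise = [False] * len(events)
    for i, (t, x, y, p) in enumerate(events):
        prev = last.get((x, y))
        if prev is not None:
            t_prev, p_prev, i_prev = prev
            # An ON event quickly followed by an OFF event (or vice versa)
            # at the same pixel is treated as a likely shot-noise pair.
            if p != p_prev and (t - t_prev) < window_us:
                is_noise[i] = True
                is_noise[i_prev] = True
        last[(x, y)] = (t, p, i)
    return is_noise
```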

Deep Polarization Reconstruction With PDAVIS Events

1 code implementation CVPR 2023 Haiyang Mei, Zuowen Wang, Xin Yang, Xiaopeng Wei, Tobi Delbruck

The polarization event camera PDAVIS is a novel bio-inspired neuromorphic vision sensor that reports both conventional polarization frames and asynchronous, continuous per-pixel polarization brightness changes (polarization events) with fast temporal resolution and large dynamic range.

Utility and Feasibility of a Center Surround Event Camera

no code implementations 26 Feb 2022 Tobi Delbruck, Chenghan Li, Rui Graca, Brian Mcreynolds

Standard dynamic vision sensor (DVS) event cameras output a stream of spatially independent log-intensity brightness change events, so they cannot suppress spatial redundancy.

Bio-inspired Polarization Event Camera

no code implementations 2 Dec 2021 Germain Haessig, Damien Joubert, Justin Haque, Yingkai Chen, Moritz Milde, Tobi Delbruck, Viktor Gruev

The stomatopod (mantis shrimp) visual system has recently provided a blueprint for the design of paradigm-shifting polarization and multispectral imaging sensors, enabling solutions to challenging medical and remote sensing problems.

Unraveling the paradox of intensity-dependent DVS pixel noise

no code implementations 17 Sep 2021 Rui Graca, Tobi Delbruck

While measurements of the logarithmic photoreceptor predict that the photoreceptor is approximately a first-order system with RMS noise voltage independent of the photocurrent, DVS output shows higher noise event rates at low light intensity.

Spartus: A 9.4 TOp/s FPGA-based LSTM Accelerator Exploiting Spatio-Temporal Sparsity

no code implementations 4 Aug 2021 Chang Gao, Tobi Delbruck, Shih-Chii Liu

The pruned networks running on Spartus hardware achieve weight sparsity levels of up to 96% and 94% with negligible accuracy loss on the TIMIT and the Librispeech datasets.

Speech Recognition
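As a rough illustration of how such high weight sparsity is typically obtained, a generic magnitude-pruning sketch that zeroes the smallest weights; Spartus uses its own hardware-friendly structured pruning, so this only conveys the general idea.

```python
import numpy as np

def magnitude_prune(W, sparsity=0.96):
    """Zero out the smallest-magnitude weights to reach the target sparsity."""
    k = int(sparsity * W.size)              # number of weights to drop
    if k == 0:
        return W.copy()
    # k-th smallest absolute value becomes the pruning cutoff.
    cutoff = np.partition(np.abs(W), k - 1, axis=None)[k - 1]
    pruned = W.copy()
    pruned[np.abs(pruned) <= cutoff] = 0.0
    return pruned
```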

Feedback control of event cameras

no code implementations 2 May 2021 Tobi Delbruck, Rui Graca, Marcin Paluch

Dynamic vision sensor event cameras produce a variable data rate stream of brightness change events.
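To illustrate the general idea of regulating a variable event rate by feedback, here is a minimal proportional-control sketch that adjusts a contrast-threshold bias toward a target rate; the bias representation, gain, and limits are assumed for illustration and are not the paper's controller.

```python
def update_threshold_bias(bias, event_rate_hz, target_rate_hz=1e6,
                          gain=0.1, bias_min=0.05, bias_max=1.0):
    """One step of a proportional controller on a DVS contrast-threshold bias.

    A higher contrast threshold produces fewer events, so when the measured
    rate exceeds the target the threshold is raised, and vice versa.
    All parameter values are illustrative.
    """
    # Relative rate error, positive when the camera emits too many events.
    error = (event_rate_hz - target_rate_hz) / target_rate_hz
    new_bias = bias * (1.0 + gain * error)
    return min(max(new_bias, bias_min), bias_max)
```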

v2e: From Video Frames to Realistic DVS Events

3 code implementations 13 Jun 2020 Yuhuang Hu, Shih-Chii Liu, Tobi Delbruck

The first experiment is object recognition with the N-Caltech 101 dataset.

Object Recognition
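For context, an idealized sketch of the DVS pixel model that frame-to-event converters such as v2e build on: a pixel emits an event whenever its log intensity moves by more than a contrast threshold from a memorized value. This omits the noise and bandwidth effects that a realistic simulator models; the threshold and frame format are assumptions.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2, eps=1e-6):
    """Idealized DVS event generation from a sequence of intensity frames.

    frames: list of 2-D arrays of linear intensity; timestamps: matching times.
    Returns a list of (t, x, y, polarity) events.
    """
    mem = np.log(frames[0] + eps)          # memorized log intensity per pixel
    events = []
    for t, frame in zip(timestamps[1:], frames[1:]):
        diff = np.log(frame + eps) - mem
        # Each full threshold crossing produces one event of matching polarity.
        n = np.floor(np.abs(diff) / threshold).astype(int)
        ys, xs = np.nonzero(n)
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1
            events.extend((t, x, y, pol) for _ in range(n[y, x]))
            mem[y, x] += pol * n[y, x] * threshold   # update memorized value
    return events
```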

DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction

1 code implementation 18 May 2020 Yuhuang Hu, Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck

The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames.

Data-Driven Neuromorphic DRAM-based CNN and RNN Accelerators

no code implementations 29 Mar 2020 Tobi Delbruck, Shih-Chii Liu

The energy consumed by running large deep neural networks (DNNs) on hardware accelerators is dominated by the need for large amounts of fast memory to store both states and weights.

Learning to Exploit Multiple Vision Modalities by Using Grafted Networks

no code implementations ECCV 2020 Yuhuang Hu, Tobi Delbruck, Shih-Chii Liu

This paper proposes a Network Grafting Algorithm (NGA), where a new front end network driven by unconventional visual inputs replaces the front end network of a pretrained deep network that processes intensity frames.

Event-based Object Segmentation Object Detection +1

Recurrent Neural Network Control of a Hybrid Dynamic Transfemoral Prosthesis with EdgeDRNN Accelerator

no code implementations 8 Feb 2020 Chang Gao, Rachel Gehlhar, Aaron D. Ames, Shih-Chii Liu, Tobi Delbruck

Lower-leg prostheses could improve amputees' quality of life by increasing comfort and reducing the energy required to locomote, but current control methods are limited in modulating behaviors based on the human's experience.

EdgeDRNN: Enabling Low-latency Recurrent Neural Network Edge Inference

no code implementations 22 Dec 2019 Chang Gao, Antonio Rios-Navarro, Xi Chen, Tobi Delbruck, Shih-Chii Liu

This paper presents a Gated Recurrent Unit (GRU) based recurrent neural network (RNN) accelerator called EdgeDRNN designed for portable edge computing.

Edge-computing

Dynamic Vision Sensor integration on FPGA-based CNN accelerators for high-speed visual classification

no code implementations 17 May 2019 Alejandro Linares-Barranco, Antonio Rios-Navarro, Ricardo Tapiador-Morales, Tobi Delbruck

Dynamic vision sensors (DVS), which emulate the behavior of a biological retina, are becoming increasingly important for these applications because of their nature: the information is represented by a continuous stream of spikes, and the frames to be processed by the CNN are constructed by collecting a fixed number of these spikes (called events).

General Classification
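Since the snippet above describes building CNN input frames by collecting a fixed number of events, here is a minimal sketch of that construction; the event tuple layout, frame size, and event count are illustrative assumptions.

```python
import numpy as np

def events_to_frames(events, n_events_per_frame=5000, height=128, width=128):
    """Build 2-D histogram frames, each from a fixed number of DVS events.

    events: list of (timestamp, x, y, polarity). Each frame accumulates signed
    event counts per pixel, which a CNN can then process like an image channel.
    """
    frames = []
    for start in range(0, len(events) - n_events_per_frame + 1,
                       n_events_per_frame):
        frame = np.zeros((height, width), dtype=np.float32)
        for _, x, y, p in events[start:start + n_events_per_frame]:
            frame[y, x] += p          # accumulate signed event counts
        frames.append(frame)
    return frames
```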

Closing the Accuracy Gap in an Event-Based Visual Recognition Task

no code implementations 6 May 2019 Bodo Rückauer, Nicolas Känzig, Shih-Chii Liu, Tobi Delbruck, Yulia Sandamirskaya

Mobile and embedded applications require neural networks-based pattern recognition systems to perform well under a tight computational budget.

Event-based Vision: A Survey

1 code implementation 17 Apr 2019 Guillermo Gallego, Tobi Delbruck, Garrick Orchard, Chiara Bartolozzi, Brian Taba, Andrea Censi, Stefan Leutenegger, Andrew Davison, Joerg Conradt, Kostas Daniilidis, Davide Scaramuzza

Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur.

Event-based vision

EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras

no code implementations 18 Mar 2019 Anton Mitrokhin, Chengxi Ye, Cornelia Fermuller, Yiannis Aloimonos, Tobi Delbruck

In addition to camera egomotion and a dense depth map, the network estimates pixel-wise independently moving object segmentation and computes per-object 3D translational velocities for moving objects.

Motion Segmentation Object +1

ABMOF: A Novel Optical Flow Algorithm for Dynamic Vision Sensors

no code implementations 10 May 2018 Min Liu, Tobi Delbruck

The precise event timing, sparse output, and wide dynamic range of the events are well suited for optical flow, but conventional optical flow (OF) algorithms are not well matched to the event stream data.

Optical Flow Estimation

ADaPTION: Toolbox and Benchmark for Training Convolutional Neural Networks with Reduced Numerical Precision Weights and Activation

no code implementations 13 Nov 2017 Moritz B. Milde, Daniel Neil, Alessandro Aimar, Tobi Delbruck, Giacomo Indiveri

Using the ADaPTION tools, we quantized several CNNs including VGG16 down to 16-bit weights and activations with only a 0.8% drop in Top-1 accuracy.

Quantization
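As a generic illustration of reduced numerical precision like that targeted by ADaPTION, a minimal symmetric fixed-point quantizer sketch; the split between integer and fractional bits is an assumption, not ADaPTION's exact scheme.

```python
import numpy as np

def quantize_fixed_point(x, total_bits=16, frac_bits=8):
    """Quantize an array to signed fixed-point with the given fractional bits.

    Values are rounded to the nearest representable step and clipped to the
    signed range, mimicking a Q(total_bits - frac_bits - 1).frac_bits format.
    """
    step = 2.0 ** -frac_bits
    qmax = 2.0 ** (total_bits - 1) - 1
    q = np.clip(np.round(x / step), -qmax - 1, qmax)
    return q * step
```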

DDD17: End-To-End DAVIS Driving Dataset

1 code implementation 4 Nov 2017 Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck

Event cameras, such as dynamic vision sensors (DVS), and dynamic and active-pixel vision sensors (DAVIS) can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events.

Autonomous Driving

A Low Power, Fully Event-Based Gesture Recognition System

no code implementations CVPR 2017 Arnon Amir, Brian Taba, David Berg, Timothy Melano, Jeffrey McKinstry, Carmelo Di Nolfo, Tapan Nayak, Alexander Andreopoulos, Guillaume Garreau, Marcela Mendoza, Jeff Kusnitz, Michael Debole, Steve Esser, Tobi Delbruck, Myron Flickner, Dharmendra Modha

We present the first gesture recognition system implemented end-to-end on event-based hardware, using a TrueNorth neurosynaptic processor to recognize hand gestures in real-time at low power from events streamed live by a Dynamic Vision Sensor (DVS).

Gesture Recognition

NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps

no code implementations 5 Jun 2017 Alessandro Aimar, Hesham Mostafa, Enrico Calabrese, Antonio Rios-Navarro, Ricardo Tapiador-Morales, Iulia-Alexandra Lungu, Moritz B. Milde, Federico Corradi, Alejandro Linares-Barranco, Shih-Chii Liu, Tobi Delbruck

By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the MAC units, and achieves a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm$^2$.
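To make the zero-skipping idea concrete, a small sketch of encoding a sparse feature map as a bitmask plus a packed list of non-zero values, the flavor of compressed representation that lets an accelerator skip multiply-accumulates on zeros; the exact encoding here is a simplified assumption, not NullHop's format.

```python
import numpy as np

def compress_feature_map(fmap):
    """Encode a feature map as (sparsity_mask, non_zero_values).

    Zero activations, which dominate after ReLU, cost only one mask bit each;
    the multiply-accumulates for them can be skipped entirely.
    """
    mask = fmap != 0                      # one bit per pixel in hardware
    values = fmap[mask]                   # densely packed non-zero activations
    return mask, values

def decompress_feature_map(mask, values):
    fmap = np.zeros(mask.shape, dtype=values.dtype)
    fmap[mask] = values
    return fmap
```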

Delta Networks for Optimized Recurrent Network Computation

no code implementations ICML 2017 Daniel Neil, Jun Haeng Lee, Tobi Delbruck, Shih-Chii Liu

Similarly, on the large Wall Street Journal speech recognition benchmark, even existing networks can be greatly accelerated as delta networks, and a 5.7x improvement with negligible loss of accuracy can be obtained through training.

Speech Recognition
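A minimal sketch of the delta-network idea: between time steps, only input changes larger than a threshold are propagated through the matrix-vector product, so computation scales with how much the activations change. The threshold value is illustrative.

```python
import numpy as np

class DeltaMatVec:
    """Matrix-vector product that only processes significant input changes."""

    def __init__(self, W, threshold=0.1):
        self.W = W
        self.threshold = threshold
        self.x_prev = np.zeros(W.shape[1])   # last propagated input values
        self.y = np.zeros(W.shape[0])        # running output accumulator

    def step(self, x):
        delta = x - self.x_prev
        active = np.abs(delta) > self.threshold   # columns that must be updated
        # Only the active columns of W are touched, saving compute and memory traffic.
        self.y += self.W[:, active] @ delta[active]
        self.x_prev[active] = x[active]           # remember what was propagated
        return self.y
```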

The Event-Camera Dataset and Simulator: Event-based Data for Pose Estimation, Visual Odometry, and SLAM

2 code implementations 26 Oct 2016 Elias Mueggler, Henri Rebecq, Guillermo Gallego, Tobi Delbruck, Davide Scaramuzza

New vision sensors, such as the Dynamic and Active-pixel Vision sensor (DAVIS), incorporate a conventional global-shutter camera and an event-based sensor in the same pixel array.

Motion Estimation Pose Estimation +1

Training Deep Spiking Neural Networks using Backpropagation

no code implementations 31 Aug 2016 Jun Haeng Lee, Tobi Delbruck, Michael Pfeiffer

Deep spiking neural networks (SNNs) hold great potential for improving the latency and energy efficiency of deep neural networks through event-based computation.

Event-based vision

Event-based, 6-DOF Camera Tracking from Photometric Depth Maps

1 code implementation 12 Jul 2016 Guillermo Gallego, Jon E. A. Lund, Elias Mueggler, Henri Rebecq, Tobi Delbruck, Davide Scaramuzza

Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames.

Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network

no code implementations 30 Jun 2016 Diederik Paul Moeys, Federico Corradi, Emmett Kerr, Philip Vance, Gautham Das, Daniel Neil, Dermot Kerr, Tobi Delbruck

The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey).
