Search Results for author: Daniel Neil

Found 12 papers, 3 papers with code

DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction

1 code implementation • 18 May 2020 • Yuhuang Hu, Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck

The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames.

Interpretable Graph Convolutional Neural Networks for Inference on Noisy Knowledge Graphs

no code implementations • 1 Dec 2018 • Daniel Neil, Joss Briody, Alix Lacoste, Aaron Sim, Paidi Creed, Amir Saffari

In this work, we provide a new formulation for Graph Convolutional Neural Networks (GCNNs) for link prediction on graph data that addresses common challenges for biomedical knowledge graphs (KGs).

Denoising • Knowledge Graphs +1
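The snippet above describes the link-prediction setting only in general terms; the paper's own GCNN formulation is not reproduced here. As a hedged sketch of how knowledge-graph link prediction is commonly scored (a generic bilinear-diagonal scorer, not the authors' model; all names are illustrative):

```python
import numpy as np

def link_score(head, rel, tail):
    """Bilinear-diagonal (DistMult-style) plausibility score for a
    (head, relation, tail) triple of embedding vectors; higher means
    the candidate link is considered more plausible."""
    return float(np.sum(head * rel * tail))

rng = np.random.default_rng(0)
h, r, t = (rng.standard_normal(8) for _ in range(3))
score = link_score(h, r, t)
```

Candidate edges are then ranked by this score; note that a diagonal bilinear form is symmetric in head and tail, which is one of the modeling choices a real KG system must weigh.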

ADaPTION: Toolbox and Benchmark for Training Convolutional Neural Networks with Reduced Numerical Precision Weights and Activation

no code implementations • 13 Nov 2017 • Moritz B. Milde, Daniel Neil, Alessandro Aimar, Tobi Delbruck, Giacomo Indiveri

Using the ADaPTION tools, we quantized several CNNs, including VGG16, down to 16-bit weights and activations with only a 0.8% drop in Top-1 accuracy.

Quantization
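ADaPTION itself is a Caffe-based toolbox; as a minimal sketch of the reduced-precision idea only (the signed 16-bit Q2.13 fixed-point format and function name here are assumptions, not the toolbox's API):

```python
import numpy as np

def quantize_fixed_point(w, int_bits=2, frac_bits=13):
    """Snap values to a signed 16-bit fixed-point grid:
    1 sign bit + int_bits integer bits + frac_bits fractional bits.
    Out-of-range values are clipped to the representable extremes."""
    scale = 2.0 ** frac_bits
    lo = -(2.0 ** int_bits)              # most negative representable value
    hi = 2.0 ** int_bits - 1.0 / scale   # most positive representable value
    return np.clip(np.round(w * scale) / scale, lo, hi)

w = np.array([0.3141, -1.007, 5.0])
wq = quantize_fixed_point(w)  # third entry is clipped to the max value
```

In-range values land within half a quantization step (2^-14 here) of the original, which is why 16-bit weights can cost so little Top-1 accuracy.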

DDD17: End-To-End DAVIS Driving Dataset

1 code implementation • 4 Nov 2017 • Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck

Event cameras, such as dynamic vision sensors (DVS) and dynamic and active-pixel vision sensors (DAVIS), can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events.

Autonomous Driving

Sensor Transformation Attention Networks

no code implementations ICLR 2018 Stefan Braun, Daniel Neil, Enea Ceolini, Jithendar Anumula, Shih-Chii Liu

Recent work on encoder-decoder models for sequence-to-sequence mapping has shown that integrating both temporal and spatial attention mechanisms into neural networks increases the performance of the system substantially.
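As a hedged sketch of the sensor-attention idea summarized above (softmax-weighted fusion of per-sensor features; function names, shapes, and the relevance logits are illustrative, not the paper's architecture):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def fuse_sensors(features, relevance_logits):
    """Combine per-sensor feature vectors into one fused vector,
    weighting each sensor by its softmax attention weight."""
    w = softmax(np.asarray(relevance_logits, dtype=float))
    return np.sum(w[:, None] * np.stack(features), axis=0)

audio = np.array([1.0, 0.0])
video = np.array([0.0, 1.0])
fused = fuse_sensors([audio, video], [2.0, 0.0])  # attention favors audio
```

The attention weights let the network down-weight a noisy or uninformative sensor stream at each step instead of committing to a fixed fusion rule.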

Delta Networks for Optimized Recurrent Network Computation

no code implementations • ICML 2017 • Daniel Neil, Jun Haeng Lee, Tobi Delbruck, Shih-Chii Liu

Similarly, on the large Wall Street Journal speech recognition benchmark, even existing networks can be greatly accelerated as delta networks, and a 5.7x improvement with negligible loss of accuracy can be obtained through training.

Speech Recognition
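A hedged sketch of the delta-network principle behind the speedup claimed above (propagate only input changes whose magnitude exceeds a threshold; this is an illustrative single matrix-vector step, not the paper's exact formulation):

```python
import numpy as np

def delta_step(W, x, last_x, acc, threshold=0.1):
    """One delta-network update of the running pre-activation `acc`.
    Only input components that changed by more than `threshold` since
    the last transmitted value are propagated, so the corresponding
    columns of W are the only multiply-accumulates actually needed."""
    delta = x - last_x
    mask = np.abs(delta) > threshold
    acc = acc + W @ np.where(mask, delta, 0.0)
    last_x = np.where(mask, x, last_x)  # remember transmitted values
    return acc, last_x

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4))
x0 = rng.standard_normal(4)
# With threshold 0, every change is propagated and acc equals W @ x0.
acc, last_x = delta_step(W, x0, np.zeros(4), np.zeros(4), threshold=0.0)
```

With a nonzero threshold, slowly varying inputs (common in speech and video) trigger few updates, which is where the reported acceleration comes from, at the cost of a small approximation error.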

Steering a Predator Robot using a Mixed Frame/Event-Driven Convolutional Neural Network

no code implementations • 30 Jun 2016 • Diederik Paul Moeys, Federico Corradi, Emmett Kerr, Philip Vance, Gautham Das, Daniel Neil, Dermot Kerr, Tobi Delbruck

The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey).

Precise neural network computation with imprecise analog devices

no code implementations • 23 Jun 2016 • Jonathan Binas, Daniel Neil, Giacomo Indiveri, Shih-Chii Liu, Michael Pfeiffer

The operations used for neural network computation map favorably onto simple analog circuits, which outshine their digital counterparts in terms of compactness and efficiency.
