Search Results for author: Shih-Chii Liu

Found 35 papers, 9 papers with code

v2e: From Video Frames to Realistic DVS Events

3 code implementations 13 Jun 2020 Yuhuang Hu, Shih-Chii Liu, Tobi Delbruck

The first experiment is object recognition with the N-Caltech 101 dataset.

Object Recognition
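The core DVS principle that a frame-to-event converter like v2e builds on can be sketched in a few lines: a pixel emits an event whenever its log intensity has changed by more than a contrast threshold since its last event. The function name and constants below are invented for illustration; the real v2e tool additionally models noise, pixel bandwidth, and sub-frame timing interpolation.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2, eps=1e-6):
    """Toy DVS simulation: emit (t, x, y, polarity) events whenever the
    log intensity at a pixel changes by more than `threshold` since the
    last event at that pixel. For simplicity, at most one event per
    pixel is emitted per frame."""
    log_frames = np.log(frames.astype(np.float64) + eps)
    memory = log_frames[0].copy()  # last log intensity that triggered an event
    events = []
    for t, frame in zip(timestamps[1:], log_frames[1:]):
        diff = frame - memory
        ys, xs = np.nonzero(np.abs(diff) >= threshold)
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, x, y, polarity))
            memory[y, x] += polarity * threshold  # advance memory by one threshold step
    return events
```

A brightening pixel yields an ON (+1) event and a darkening pixel an OFF (-1) event, while unchanged pixels produce no output at all.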

NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems

1 code implementation 10 Apr 2023 Jason Yik, Korneel Van den Berghe, Douwe den Blanken, Younes Bouhadjar, Maxime Fabre, Paul Hueber, Denis Kleyko, Noah Pacik-Nelson, Pao-Sheng Vincent Sun, Guangzhi Tang, Shenqi Wang, Biyan Zhou, Soikat Hasan Ahmed, George Vathakkattil Joseph, Benedetto Leto, Aurora Micheli, Anurag Kumar Mishra, Gregor Lenz, Tao Sun, Zergham Ahmed, Mahmoud Akl, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, Petrut Bogdan, Sander Bohte, Sonia Buckley, Gert Cauwenberghs, Elisabetta Chicca, Federico Corradi, Guido de Croon, Andreea Danielescu, Anurag Daram, Mike Davies, Yigit Demirag, Jason Eshraghian, Tobias Fischer, Jeremy Forest, Vittorio Fra, Steve Furber, P. Michael Furlong, William Gilpin, Aditya Gilra, Hector A. Gonzalez, Giacomo Indiveri, Siddharth Joshi, Vedant Karia, Lyes Khacef, James C. Knight, Laura Kriener, Rajkumar Kubendran, Dhireesha Kudithipudi, Yao-Hong Liu, Shih-Chii Liu, Haoyuan Ma, Rajit Manohar, Josep Maria Margarit-Taulé, Christian Mayr, Konstantinos Michmizos, Dylan Muir, Emre Neftci, Thomas Nowotny, Fabrizio Ottati, Ayca Ozcelikkale, Priyadarshini Panda, Jongkil Park, Melika Payvand, Christian Pehle, Mihai A. Petrovici, Alessandro Pierro, Christoph Posch, Alpha Renner, Yulia Sandamirskaya, Clemens JS Schaefer, André van Schaik, Johannes Schemmel, Samuel Schmidgall, Catherine Schuman, Jae-sun Seo, Sadique Sheik, Sumit Bam Shrestha, Manolis Sifalakis, Amos Sironi, Matthew Stewart, Kenneth Stewart, Terrence C. Stewart, Philipp Stratmann, Jonathan Timcheck, Nergis Tömen, Gianvito Urgese, Marian Verhelst, Craig M. Vineyard, Bernhard Vogginger, Amirreza Yousefzadeh, Fatima Tuz Zohora, Charlotte Frenkel, Vijay Janapa Reddi

The NeuroBench framework introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings.

Benchmarking

DDD17: End-To-End DAVIS Driving Dataset

1 code implementation 4 Nov 2017 Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck

Event cameras, such as dynamic vision sensors (DVS) and dynamic and active-pixel vision sensors (DAVIS), can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events.

Autonomous Driving

DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction

1 code implementation 18 May 2020 Yuhuang Hu, Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck

The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames.

3ET: Efficient Event-based Eye Tracking using a Change-Based ConvLSTM Network

1 code implementation 22 Aug 2023 Qinyu Chen, Zuowen Wang, Shih-Chii Liu, Chang Gao

This paper presents a sparse Change-Based Convolutional Long Short-Term Memory (CB-ConvLSTM) model for event-based eye tracking, key for next-generation wearable healthcare technology such as AR/VR headsets.

Pupil Tracking

Recurrent Neural Networks With Limited Numerical Precision

1 code implementation 21 Nov 2016 Joachim Ott, Zhouhan Lin, Ying Zhang, Shih-Chii Liu, Yoshua Bengio

Recurrent Neural Networks (RNNs) produce state-of-the-art performance on many machine learning tasks, but their demands on memory and computational power are often high.

Quantization

Recurrent Neural Networks With Limited Numerical Precision

1 code implementation 24 Aug 2016 Joachim Ott, Zhouhan Lin, Ying Zhang, Shih-Chii Liu, Yoshua Bengio

We present results from the use of different stochastic and deterministic reduced precision training methods applied to three major RNN types which are then tested on several datasets.

Binarization
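The snippet above mentions stochastic and deterministic reduced-precision training methods. A minimal sketch of the standard fixed-point quantizer with stochastic rounding (rounding up with probability equal to the fractional distance, so the quantizer is unbiased in expectation) is shown below; the function name, bit width, and value range are illustrative, not taken from the paper.

```python
import numpy as np

def quantize(w, bits=4, stochastic=True, rng=None):
    """Quantize values to a fixed-point grid in [-1, 1).
    Deterministic mode rounds to nearest; stochastic mode rounds up with
    probability equal to the fractional distance to the next grid point,
    a common trick for keeping low-precision gradient updates unbiased."""
    rng = np.random.default_rng() if rng is None else rng
    step = 2.0 ** (1 - bits)                 # grid spacing for `bits`-bit values
    scaled = np.clip(w, -1, 1 - step) / step
    floor = np.floor(scaled)
    frac = scaled - floor
    if stochastic:
        rounded = floor + (rng.random(w.shape) < frac)
    else:
        rounded = np.round(scaled)
    return rounded * step
```

With 4 bits, 0.3 deterministically rounds to 0.25, while the stochastic version returns 0.25 or 0.375 with probabilities chosen so that the mean over many draws is 0.3.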

Overcoming the vanishing gradient problem in plain recurrent networks

no code implementations ICLR 2018 Yuhuang Hu, Adrian Huber, Jithendar Anumula, Shih-Chii Liu

Plain recurrent networks greatly suffer from the vanishing gradient problem, while Gated Neural Networks (GNNs) such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) deliver promising results in many sequence learning tasks through sophisticated network designs.

Permuted-MNIST Question Answering

NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps

no code implementations 5 Jun 2017 Alessandro Aimar, Hesham Mostafa, Enrico Calabrese, Antonio Rios-Navarro, Ricardo Tapiador-Morales, Iulia-Alexandra Lungu, Moritz B. Milde, Federico Corradi, Alejandro Linares-Barranco, Shih-Chii Liu, Tobi Delbruck

By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the MAC units, and achieves a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm$^2$.

Sensor Transformation Attention Networks

no code implementations ICLR 2018 Stefan Braun, Daniel Neil, Enea Ceolini, Jithendar Anumula, Shih-Chii Liu

Recent work on encoder-decoder models for sequence-to-sequence mapping has shown that integrating both temporal and spatial attention mechanisms into neural networks increases the performance of the system substantially.

Delta Networks for Optimized Recurrent Network Computation

no code implementations ICML 2017 Daniel Neil, Jun Haeng Lee, Tobi Delbruck, Shih-Chii Liu

Similarly, on the large Wall Street Journal speech recognition benchmark, even existing networks can be greatly accelerated as delta networks, and a 5.7x improvement with negligible loss of accuracy can be obtained through training.

Speech Recognition
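The delta-network idea behind the speedups quoted above is that a matrix-vector product only needs to touch input components whose value has changed by more than a threshold since they were last transmitted. The class below is a minimal sketch of that principle (invented names; not the paper's RNN formulation):

```python
import numpy as np

class DeltaLayer:
    """Delta-network style matrix-vector product: the output accumulator
    is updated incrementally using only inputs whose change since their
    last transmitted value exceeds `theta`, so small fluctuations in the
    input cost no multiply operations at all."""
    def __init__(self, W, theta=0.1):
        self.W = W
        self.theta = theta
        self.x_last = np.zeros(W.shape[1])  # last transmitted input values
        self.y = np.zeros(W.shape[0])       # running output accumulator

    def step(self, x):
        delta = x - self.x_last
        active = np.abs(delta) >= self.theta
        # Only the columns of W for "active" inputs are touched this step.
        self.y += self.W[:, active] @ delta[active]
        self.x_last[active] = x[active]
        return self.y, int(active.sum())
```

A step whose inputs all move by less than `theta` performs zero column updates, which is where the computational savings on slowly varying sequences come from.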

Precise neural network computation with imprecise analog devices

no code implementations 23 Jun 2016 Jonathan Binas, Daniel Neil, Giacomo Indiveri, Shih-Chii Liu, Michael Pfeiffer

The operations used for neural network computation map favorably onto simple analog circuits, which outshine their digital counterparts in terms of compactness and efficiency.

Memory and information processing in neuromorphic systems

no code implementations 10 Jun 2015 Giacomo Indiveri, Shih-Chii Liu

We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.

Reducing state updates via Gaussian-gated LSTMs

no code implementations 22 Jan 2019 Matthew Thornton, Jithendar Anumula, Shih-Chii Liu

Finally, by employing a temporal curriculum learning schedule for the g-LSTM, we can reduce the convergence time of the equivalent LSTM network on long sequences.

Closing the Accuracy Gap in an Event-Based Visual Recognition Task

no code implementations 6 May 2019 Bodo Rückauer, Nicolas Känzig, Shih-Chii Liu, Tobi Delbruck, Yulia Sandamirskaya

Mobile and embedded applications require neural network-based pattern recognition systems to perform well under a tight computational budget.

Learning to Exploit Multiple Vision Modalities by Using Grafted Networks

no code implementations ECCV 2020 Yuhuang Hu, Tobi Delbruck, Shih-Chii Liu

This paper proposes a Network Grafting Algorithm (NGA), where a new front end network driven by unconventional visual inputs replaces the front end network of a pretrained deep network that processes intensity frames.

Event-based Object Segmentation Object Detection +1

Data-Driven Neuromorphic DRAM-based CNN and RNN Accelerators

no code implementations 29 Mar 2020 Tobi Delbruck, Shih-Chii Liu

The energy consumed by running large deep neural networks (DNNs) on hardware accelerators is dominated by the need for large amounts of fast memory to store both states and weights.

EdgeDRNN: Enabling Low-latency Recurrent Neural Network Edge Inference

no code implementations 22 Dec 2019 Chang Gao, Antonio Rios-Navarro, Xi Chen, Tobi Delbruck, Shih-Chii Liu

This paper presents a Gated Recurrent Unit (GRU) based recurrent neural network (RNN) accelerator called EdgeDRNN designed for portable edge computing.

Edge-computing

Recurrent Neural Network Control of a Hybrid Dynamic Transfemoral Prosthesis with EdgeDRNN Accelerator

no code implementations 8 Feb 2020 Chang Gao, Rachel Gehlhar, Aaron D. Ames, Shih-Chii Liu, Tobi Delbruck

Lower-leg prostheses could improve amputees' quality of life by increasing comfort and reducing the energy required to locomote, but current control methods are limited in modulating behaviors based on the wearer's experience.

Prospects for Analog Circuits in Deep Networks

no code implementations 23 Jun 2021 Shih-Chii Liu, John Paul Strachan, Arindam Basu

Emerging dense non-volatile memory technologies can help to provide on-chip memory, and analog circuits are well suited to implement the needed vector-matrix multiplication operations coupled with in-memory computing approaches.

BIG-bench Machine Learning

Spartus: A 9.4 TOp/s FPGA-based LSTM Accelerator Exploiting Spatio-Temporal Sparsity

no code implementations 4 Aug 2021 Chang Gao, Tobi Delbruck, Shih-Chii Liu

The pruned networks running on Spartus hardware achieve weight sparsity levels of up to 96% and 94% with negligible accuracy loss on the TIMIT and the Librispeech datasets.

Speech Recognition

Gaussian-gated LSTM: Improved convergence by reducing state updates

no code implementations 27 Sep 2018 Matthew Thornton, Jithendar Anumula, Shih-Chii Liu

Finally, by employing a temporal curriculum learning schedule for the g-LSTM, we can reduce the convergence time of the equivalent LSTM network on long sequences.

T-NGA: Temporal Network Grafting Algorithm for Learning to Process Spiking Audio Sensor Events

no code implementations 7 Feb 2022 Shu Wang, Yuhuang Hu, Shih-Chii Liu

This work proposes a self-supervised method called Temporal Network Grafting Algorithm (T-NGA), which grafts a recurrent network pretrained on spectrogram features so that the network works with the cochlea event features.

Speech Recognition

Exploiting Spatial Sparsity for Event Cameras with Visual Transformers

no code implementations 10 Feb 2022 Zuowen Wang, Yuhuang Hu, Shih-Chii Liu

The input to the ViT consists of events that are accumulated into time bins and spatially separated into non-overlapping sub-regions called patches.
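The input representation described in that sentence (events accumulated into time bins, then split into non-overlapping patches) can be sketched as follows. Sensor size, bin count, and patch size below are invented for illustration:

```python
import numpy as np

def events_to_patches(events, sensor_hw=(64, 64), n_bins=4, patch=16, t_end=1.0):
    """Accumulate (t, x, y, polarity) events into `n_bins` time-binned
    count frames, then cut each frame into non-overlapping patch x patch
    tokens. Returns an array of shape (n_bins * n_patches, patch * patch)."""
    H, W = sensor_hw
    frames = np.zeros((n_bins, H, W))
    for t, x, y, p in events:
        b = min(int(t / t_end * n_bins), n_bins - 1)  # which time bin
        frames[b, y, x] += p                          # signed event count per pixel
    ph, pw = H // patch, W // patch
    tokens = (frames.reshape(n_bins, ph, patch, pw, patch)
                    .transpose(0, 1, 3, 2, 4)         # group rows/cols per patch
                    .reshape(n_bins * ph * pw, patch * patch))
    return tokens
```

Since event streams are spatially sparse, most of the resulting tokens are all-zero, which is exactly the sparsity the paper's title refers to exploiting.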

Spiking Cochlea with System-level Local Automatic Gain Control

no code implementations 14 Feb 2022 Ilya Kiselev, Chang Gao, Shih-Chii Liu

The bandpass filter gain of a channel is adapted dynamically to the input amplitude so that the average output spike rate stays within a defined range.

Regression
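The control loop described in the snippet above, where a channel's gain is adapted so its average output spike rate stays within a defined range, can be sketched in software. The target range and step sizes below are invented for illustration, not values from the paper:

```python
def agc_step(gain, spike_rate, low=50.0, high=150.0, up=1.05, down=0.95):
    """One step of a simple automatic gain control loop: if the measured
    average spike rate of a channel leaves the target range [low, high]
    (in Hz), nudge the channel gain multiplicatively toward the range."""
    if spike_rate > high:
        return gain * down
    if spike_rate < low:
        return gain * up
    return gain

# Toy closed loop: spike rate modeled as proportional to gain * input amplitude.
gain, amplitude = 1.0, 400.0
for _ in range(100):
    gain = agc_step(gain, spike_rate=gain * amplitude)
```

After a few dozen iterations the loop settles with the modeled spike rate inside the 50-150 Hz window, regardless of the input amplitude.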

Kernel Modulation: A Parameter-Efficient Method for Training Convolutional Neural Networks

no code implementations 29 Mar 2022 Yuhuang Hu, Shih-Chii Liu

This work proposes a novel parameter-efficient kernel modulation (KM) method that adapts all parameters of a base network instead of a subset of layers.

Meta-Learning Model Compression +1

Optimizing the Consumption of Spiking Neural Networks with Activity Regularization

no code implementations 4 Apr 2022 Simon Narduzzi, Siavash A. Bigdeli, Shih-Chii Liu, L. Andrea Dunbar

Reducing energy consumption is a critical point for neural network models running on edge devices.
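Activity regularization, as named in the title above, typically means adding a penalty on average activation to the training loss, so the network learns to fire more sparsely and thus triggers fewer operations on event-driven hardware. A minimal sketch (invented function name and coefficient, not the paper's exact formulation):

```python
import numpy as np

def loss_with_activity_reg(task_loss, activations, lam=1e-3):
    """Total loss = task loss + lam * mean absolute activation across
    layers (an L1-style activity penalty). Penalizing average activation
    pushes the network toward sparser firing, which on event-driven
    hardware translates to fewer synaptic operations and lower energy."""
    activity = sum(np.abs(a).mean() for a in activations) / len(activations)
    return task_loss + lam * activity
```

The coefficient `lam` trades task accuracy against activity: larger values give sparser, cheaper networks at some cost in accuracy.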

Continuous-Time Analog Filters for Audio Edge Intelligence: Review on Circuit Designs

no code implementations 6 Jun 2022 Kwantae Kim, Shih-Chii Liu

Edge audio devices can reduce data bandwidth requirements by pre-processing input speech on the device before transmission to the cloud.

Keyword Spotting

Biologically-Inspired Continual Learning of Human Motion Sequences

no code implementations 2 Nov 2022 Joachim Ott, Shih-Chii Liu

This work proposes a model for continual learning on tasks involving temporal sequences, specifically, human motions.

Continual Learning Temporal Sequences

Exploiting Symmetric Temporally Sparse BPTT for Efficient RNN Training

no code implementations 14 Dec 2023 Xi Chen, Chang Gao, Zuowen Wang, Longbiao Cheng, Sheng Zhou, Shih-Chii Liu, Tobi Delbruck

Implementing online training of RNNs on the edge calls for optimized algorithms for efficient deployment on hardware.

Incremental Learning
