Search Results for author: Shih-Chii Liu

Found 31 papers, 7 papers with code

Biologically-Inspired Continual Learning of Human Motion Sequences

no code implementations • 2 Nov 2022 • Joachim Ott, Shih-Chii Liu

This work proposes a model for continual learning on tasks involving temporal sequences, specifically, human motions.

Continual Learning • Temporal Sequences

Continuous-Time Analog Filters for Audio Edge Intelligence: Review on Circuit Designs

no code implementations • 6 Jun 2022 • Kwantae Kim, Shih-Chii Liu

Edge audio devices can reduce data bandwidth requirements by pre-processing input speech on the device before transmission to the cloud.

Keyword Spotting

Optimizing the Consumption of Spiking Neural Networks with Activity Regularization

no code implementations • 4 Apr 2022 • Simon Narduzzi, Siavash A. Bigdeli, Shih-Chii Liu, L. Andrea Dunbar

Reducing energy consumption is a critical point for neural network models running on edge devices.

Kernel Modulation: A Parameter-Efficient Method for Training Convolutional Neural Networks

no code implementations • 29 Mar 2022 • Yuhuang Hu, Shih-Chii Liu

This work proposes a novel parameter-efficient kernel modulation (KM) method that adapts all parameters of a base network instead of a subset of layers.

Meta-Learning • Model Compression • +1
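This snippet does not spell out KM's parameterization; as a minimal sketch, assuming a hypothetical elementwise modulator applied to a frozen base kernel:

```python
# Sketch of kernel modulation (assumed form): the pretrained kernel stays
# frozen and only a small elementwise modulator is learned per kernel.
def modulate_kernel(base_kernel, modulator):
    """Return the elementwise product of a frozen base kernel and a modulator."""
    return [[b * m for b, m in zip(brow, mrow)]
            for brow, mrow in zip(base_kernel, modulator)]

base = [[1.0, -2.0], [0.5, 3.0]]   # frozen pretrained 2x2 kernel
mod  = [[1.0,  1.0], [1.0, 1.0]]   # identity modulation leaves the kernel unchanged
adapted = modulate_kernel(base, mod)
```

Training then updates only the (much smaller) modulator parameters rather than the base network's weights.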

Spiking Cochlea with System-level Local Automatic Gain Control

no code implementations • 14 Feb 2022 • Ilya Kiselev, Chang Gao, Shih-Chii Liu

The bandpass filter gain of a channel is adapted dynamically to the input amplitude so that the average output spike rate stays within a defined range.
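The control loop described above can be sketched as a simple feedback rule; the rate bounds and step size below are illustrative assumptions, not values from the paper:

```python
def agc_step(gain, avg_rate, low=50.0, high=100.0, step=0.9):
    """Adapt a channel's bandpass gain so the average output spike rate
    stays inside the [low, high] target range (rates in spikes/s)."""
    if avg_rate > high:      # too many output spikes: reduce gain
        return gain * step
    if avg_rate < low:       # too few output spikes: increase gain
        return gain / step
    return gain              # rate already in range: hold gain

g = agc_step(1.0, 150.0)     # rate above range, so the gain is reduced
```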


Exploiting Spatial Sparsity for Event Cameras with Visual Transformers

no code implementations • 10 Feb 2022 • Zuowen Wang, Yuhuang Hu, Shih-Chii Liu

The input to the ViT consists of events that are accumulated into time bins and spatially separated into non-overlapping sub-regions called patches.
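A minimal sketch of that input pipeline, assuming events are simple (x, y, t, polarity) tuples and counting events per pixel within one time bin:

```python
def events_to_patches(events, width, height, patch):
    """Accumulate one time bin of events into a 2D count frame, then split it
    into non-overlapping patch x patch sub-regions (row-major order)."""
    frame = [[0] * width for _ in range(height)]
    for x, y, t, p in events:
        frame[y][x] += 1                 # count events per pixel
    patches = []
    for py in range(0, height, patch):
        for px in range(0, width, patch):
            patches.append([frame[py + dy][px + dx]
                            for dy in range(patch) for dx in range(patch)])
    return patches

evts = [(0, 0, 0.01, 1), (1, 0, 0.02, -1), (3, 3, 0.03, 1)]
toks = events_to_patches(evts, 4, 4, 2)  # 4 patches of 4 values each
```

Each flattened patch would then be linearly projected into a ViT input token.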

T-NGA: Temporal Network Grafting Algorithm for Learning to Process Spiking Audio Sensor Events

no code implementations • 7 Feb 2022 • Shu Wang, Yuhuang Hu, Shih-Chii Liu

This work proposes a self-supervised method called Temporal Network Grafting Algorithm (T-NGA), which grafts a recurrent network pretrained on spectrogram features so that the network works with the cochlea event features.

Speech Recognition

Spartus: A 9.4 TOp/s FPGA-based LSTM Accelerator Exploiting Spatio-Temporal Sparsity

no code implementations • 4 Aug 2021 • Chang Gao, Tobi Delbruck, Shih-Chii Liu

The pruned networks running on Spartus hardware achieve weight sparsity levels of up to 96% and 94% with negligible accuracy loss on the TIMIT and the Librispeech datasets.

Speech Recognition
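Those sparsity levels come from weight pruning; as a hedged illustration, here is generic magnitude pruning to a target sparsity (Spartus's actual structured pruning scheme may differ):

```python
def prune_to_sparsity(weights, sparsity):
    """Zero out the smallest-magnitude weights until roughly the target
    fraction of them is zero (assumes distinct magnitudes for exactness)."""
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)            # number of weights to remove
    threshold = flat[k - 1] if k > 0 else -1.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.02, 0.3, -0.1]
pruned = prune_to_sparsity(w, 0.5)           # half of the weights zeroed
```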

Prospects for Analog Circuits in Deep Networks

no code implementations • 23 Jun 2021 • Shih-Chii Liu, John Paul Strachan, Arindam Basu

Emerging dense non-volatile memory technologies can help to provide on-chip memory, and analog circuits are well suited to implement the needed matrix-vector multiplication operations coupled with in-memory computing approaches.

BIG-bench Machine Learning

v2e: From Video Frames to Realistic DVS Events

3 code implementations • 13 Jun 2020 • Yuhuang Hu, Shih-Chii Liu, Tobi Delbruck

The first experiment is object recognition with N-Caltech 101 dataset.

Object Recognition

DDD20 End-to-End Event Camera Driving Dataset: Fusing Frames and Events with Deep Learning for Improved Steering Prediction

1 code implementation • 18 May 2020 • Yuhuang Hu, Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck

The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames.

Data-Driven Neuromorphic DRAM-based CNN and RNN Accelerators

no code implementations • 29 Mar 2020 • Tobi Delbruck, Shih-Chii Liu

The energy consumed by running large deep neural networks (DNNs) on hardware accelerators is dominated by the need for lots of fast memory to store both states and weights.

Learning to Exploit Multiple Vision Modalities by Using Grafted Networks

no code implementations • ECCV 2020 • Yuhuang Hu, Tobi Delbruck, Shih-Chii Liu

This paper proposes a Network Grafting Algorithm (NGA), where a new front end network driven by unconventional visual inputs replaces the front end network of a pretrained deep network that processes intensity frames.

Object Detection
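A toy sketch of the grafting idea, with stand-in functions for the pretrained front end and backend; the feature-matching loss shown is an assumed training signal for the new front end, not the paper's exact objective:

```python
# The pretrained pipeline frontend -> backend is kept; a new front end for
# another modality is trained so its features match the frozen front end's.
def pretrained_frontend(frame):
    return [2.0 * v for v in frame]          # stand-in feature extractor

def backend(features):
    return sum(features)                     # stand-in task head (unchanged)

def graft_loss(new_frontend, new_input, target_features):
    """Squared feature-matching loss between new and frozen front ends."""
    pred = new_frontend(new_input)
    return sum((p - t) ** 2 for p, t in zip(pred, target_features))

target = pretrained_frontend([1.0, 2.0])     # features from intensity frames
loss = graft_loss(lambda e: [2.0, 4.0], [0.1, 0.2], target)  # perfect match
```

Once the new front end reproduces the old features, the untouched backend processes the new modality directly.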

Recurrent Neural Network Control of a Hybrid Dynamic Transfemoral Prosthesis with EdgeDRNN Accelerator

no code implementations • 8 Feb 2020 • Chang Gao, Rachel Gehlhar, Aaron D. Ames, Shih-Chii Liu, Tobi Delbruck

Lower-leg prostheses could improve the quality of life of amputees by increasing comfort and reducing the energy needed to locomote, but current control methods are limited in modulating behaviors based upon the human's experience.

EdgeDRNN: Enabling Low-latency Recurrent Neural Network Edge Inference

no code implementations • 22 Dec 2019 • Chang Gao, Antonio Rios-Navarro, Xi Chen, Tobi Delbruck, Shih-Chii Liu

This paper presents a Gated Recurrent Unit (GRU) based recurrent neural network (RNN) accelerator called EdgeDRNN designed for portable edge computing.


Closing the Accuracy Gap in an Event-Based Visual Recognition Task

no code implementations • 6 May 2019 • Bodo Rückauer, Nicolas Känzig, Shih-Chii Liu, Tobi Delbruck, Yulia Sandamirskaya

Mobile and embedded applications require neural-network-based pattern recognition systems to perform well under a tight computational budget.

Reducing state updates via Gaussian-gated LSTMs

no code implementations • 22 Jan 2019 • Matthew Thornton, Jithendar Anumula, Shih-Chii Liu

Finally, by employing a temporal curriculum learning schedule for the g-LSTM, we can reduce the convergence time of the equivalent LSTM network on long sequences.

Gaussian-gated LSTM: Improved convergence by reducing state updates

no code implementations • 27 Sep 2018 • Matthew Thornton, Jithendar Anumula, Shih-Chii Liu

Finally, by employing a temporal curriculum learning schedule for the g-LSTM, we can reduce the convergence time of the equivalent LSTM network on long sequences.

Overcoming the vanishing gradient problem in plain recurrent networks

no code implementations • ICLR 2018 • Yuhuang Hu, Adrian Huber, Jithendar Anumula, Shih-Chii Liu

Plain recurrent networks greatly suffer from the vanishing gradient problem, while Gated Neural Networks (GNNs) such as Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU) deliver promising results in many sequence learning tasks through sophisticated network designs.

Permuted-MNIST • Question Answering
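The vanishing-gradient effect in a plain recurrent network can be illustrated with a scalar recurrence: a gradient backpropagated through T steps is scaled by the recurrent weight T times, so it shrinks geometrically when |w| < 1 (and explodes when |w| > 1):

```python
def backprop_norm(w, steps):
    """Magnitude of a gradient backpropagated through `steps` identical
    linear recurrent steps: it scales like |w| ** steps."""
    g = 1.0
    for _ in range(steps):
        g *= w               # each step multiplies by the recurrent weight
    return abs(g)

shrunk = backprop_norm(0.9, 100)   # 0.9**100: the gradient nearly vanishes
grown  = backprop_norm(1.1, 100)   # 1.1**100: the gradient explodes
```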

DDD17: End-To-End DAVIS Driving Dataset

1 code implementation • 4 Nov 2017 • Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck

Event cameras, such as dynamic vision sensors (DVS), and dynamic and active-pixel vision sensors (DAVIS) can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events.

Autonomous Driving

Sensor Transformation Attention Networks

no code implementations • ICLR 2018 • Stefan Braun, Daniel Neil, Enea Ceolini, Jithendar Anumula, Shih-Chii Liu

Recent work on encoder-decoder models for sequence-to-sequence mapping has shown that integrating both temporal and spatial attention mechanisms into neural networks increases the performance of the system substantially.

NullHop: A Flexible Convolutional Neural Network Accelerator Based on Sparse Representations of Feature Maps

no code implementations • 5 Jun 2017 • Alessandro Aimar, Hesham Mostafa, Enrico Calabrese, Antonio Rios-Navarro, Ricardo Tapiador-Morales, Iulia-Alexandra Lungu, Moritz B. Milde, Federico Corradi, Alejandro Linares-Barranco, Shih-Chii Liu, Tobi Delbruck

By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the MAC units, and achieves a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm$^2$.
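The core mechanism behind that efficiency can be sketched as zero-skipping multiply-accumulate: only nonzero activations in the sparse feature maps trigger MAC operations (a software simplification of the actual hardware):

```python
def sparse_dot(activations, weights):
    """Zero-skipping multiply-accumulate: a MAC is issued only for nonzero
    activations, so sparse feature maps need far fewer operations."""
    acc, macs = 0.0, 0
    for a, w in zip(activations, weights):
        if a != 0.0:          # zero activations are skipped entirely
            acc += a * w
            macs += 1
    return acc, macs

acts = [0.0, 2.0, 0.0, 0.0, 1.0, 0.0]    # sparse feature-map row
wts  = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
result, issued = sparse_dot(acts, wts)   # 2 MACs instead of 6
```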

Delta Networks for Optimized Recurrent Network Computation

no code implementations • ICML 2017 • Daniel Neil, Jun Haeng Lee, Tobi Delbruck, Shih-Chii Liu

Similarly, on the large Wall Street Journal speech recognition benchmark even existing networks can be greatly accelerated as delta networks, and a 5.7x improvement with negligible loss of accuracy can be obtained through training.

Speech Recognition
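The delta-network idea, in simplified form: downstream computation is triggered only when an input (or state) changes by more than a threshold since its last transmitted value, so slowly varying sequences cost little:

```python
def delta_updates(xs, threshold=0.1):
    """Delta-network style encoding of a signal: emit an update only when the
    value changes by more than `threshold` since the last emitted value."""
    last, updates = None, []
    for t, x in enumerate(xs):
        if last is None or abs(x - last) > threshold:
            updates.append((t, x))  # this step triggers downstream MACs
            last = x                # skipped steps reuse the previous state
    return updates

ups = delta_updates([0.0, 0.02, 0.5, 0.52, 0.51, 1.0])
```

Here only 3 of the 6 time steps produce updates; the rest are skipped.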

Recurrent Neural Networks With Limited Numerical Precision

1 code implementation • 21 Nov 2016 • Joachim Ott, Zhouhan Lin, Ying Zhang, Shih-Chii Liu, Yoshua Bengio

Recurrent Neural Networks (RNNs) produce state-of-the-art performance on many machine learning tasks, but their demands on resources in terms of memory and computational power are often high.


Recurrent Neural Networks With Limited Numerical Precision

1 code implementation • 24 Aug 2016 • Joachim Ott, Zhouhan Lin, Ying Zhang, Shih-Chii Liu, Yoshua Bengio

We present results from the use of different stochastic and deterministic reduced precision training methods applied to three major RNN types which are then tested on several datasets.
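One of the stochastic reduced-precision methods, stochastic rounding, can be sketched as follows (the scale factor is an illustrative choice, not one from the paper):

```python
import random

def stochastic_round(x, scale=2 ** 8):
    """Round x onto a fixed-point grid, choosing between the two neighbouring
    grid points with probability proportional to proximity; this makes the
    rounding unbiased in expectation."""
    y = x * scale
    lo = int(y // 1)             # lower grid point
    frac = y - lo                # distance above it, in grid units
    if random.random() < frac:   # round up with probability `frac`
        lo += 1
    return lo / scale

random.seed(0)
q = stochastic_round(0.3)        # lands on one of the two neighbouring points
```

Because the expected value of the rounded weight equals the full-precision weight, gradient noise averages out over training steps.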


Precise neural network computation with imprecise analog devices

no code implementations • 23 Jun 2016 • Jonathan Binas, Daniel Neil, Giacomo Indiveri, Shih-Chii Liu, Michael Pfeiffer

The operations used for neural network computation map favorably onto simple analog circuits, which outshine their digital counterparts in terms of compactness and efficiency.

Memory and information processing in neuromorphic systems

no code implementations • 10 Jun 2015 • Giacomo Indiveri, Shih-Chii Liu

We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
