no code implementations • 2 Nov 2022 • Joachim Ott, Shih-Chii Liu
This work proposes a model for continual learning on tasks involving temporal sequences, specifically, human motions.
no code implementations • 6 Jun 2022 • Kwantae Kim, Shih-Chii Liu
Edge audio devices can reduce data bandwidth requirements by pre-processing input speech on the device before transmission to the cloud.
no code implementations • 4 Apr 2022 • Simon Narduzzi, Siavash A. Bigdeli, Shih-Chii Liu, L. Andrea Dunbar
Reducing energy consumption is a critical concern for neural network models running on edge devices.
no code implementations • 29 Mar 2022 • Yuhuang Hu, Shih-Chii Liu
This work proposes a novel parameter-efficient kernel modulation (KM) method that adapts all parameters of a base network instead of a subset of layers.
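The abstract does not spell out the modulation granularity, but the core idea can be sketched: freeze the pretrained kernels and learn a small set of multiplicative modulators that rescale all of them. A minimal PyTorch sketch, where the per-filter granularity and the `ModulatedConv2d` wrapper are both illustrative assumptions, not the paper's exact scheme:

```python
import torch
import torch.nn as nn

class ModulatedConv2d(nn.Module):
    """Hedged sketch of kernel modulation: the pretrained kernel is frozen,
    and a small learnable modulator (one scalar per output filter here)
    rescales it, so every base parameter is adapted via few new ones."""
    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.conv = conv
        for p in self.conv.parameters():   # freeze the base weights
            p.requires_grad = False
        # one learnable multiplier per output filter (assumed granularity)
        self.modulator = nn.Parameter(torch.ones(conv.out_channels, 1, 1, 1))

    def forward(self, x):
        w = self.conv.weight * self.modulator      # modulated kernel
        return nn.functional.conv2d(
            x, w, self.conv.bias, self.conv.stride,
            self.conv.padding, self.conv.dilation, self.conv.groups)
```

Initializing the modulators to ones makes the wrapped layer start out identical to the base network, so adaptation begins from the pretrained solution.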
no code implementations • 14 Feb 2022 • Ilya Kiselev, Chang Gao, Shih-Chii Liu
The bandpass filter gain of a channel is adapted dynamically to the input amplitude so that the average output spike rate stays within a defined range.
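A minimal sketch of such a per-channel automatic gain control loop; the rate targets, step size, and `update_gain` helper are illustrative assumptions, not the chip's actual control law:

```python
# Each channel boosts its bandpass gain when the running spike rate falls
# below the target range and attenuates when the rate exceeds it, keeping
# the average output spike rate within the defined band.
def update_gain(gain, spike_rate, rate_lo=20.0, rate_hi=100.0,
                step=1.1, gain_min=1.0, gain_max=1024.0):
    if spike_rate < rate_lo:
        gain = min(gain * step, gain_max)   # too quiet: boost the channel
    elif spike_rate > rate_hi:
        gain = max(gain / step, gain_min)   # too loud: attenuate
    return gain
```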
no code implementations • 10 Feb 2022 • Zuowen Wang, Yuhuang Hu, Shih-Chii Liu
The input to the ViT consists of events that are accumulated into time bins and spatially separated into non-overlapping sub-regions called patches.
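A rough sketch of that tokenization, assuming `(t, x, y, polarity)` event tuples and ignoring polarity for simplicity; the sensor size, bin count, patch size, and `events_to_patches` helper are illustrative choices:

```python
import numpy as np

def events_to_patches(events, H=240, W=304, n_bins=5, patch=16):
    """Accumulate (t, x, y, polarity) events into time-binned count frames,
    then cut the frame stack into non-overlapping patch tokens."""
    t, x, y = events[:, 0], events[:, 1].astype(int), events[:, 2].astype(int)
    span = t.max() - t.min() + 1e-9
    bins = np.minimum(((t - t.min()) / span * n_bins).astype(int), n_bins - 1)
    frames = np.zeros((n_bins, H, W), dtype=np.float32)
    np.add.at(frames, (bins, y, x), 1.0)         # event counts per bin/pixel
    Hc, Wc = (H // patch) * patch, (W // patch) * patch
    f = frames[:, :Hc, :Wc].reshape(n_bins, Hc // patch, patch,
                                    Wc // patch, patch)
    # one token per spatial patch; features = bins * patch * patch counts
    return f.transpose(1, 3, 0, 2, 4).reshape(-1, n_bins * patch * patch)
```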
no code implementations • 7 Feb 2022 • Shu Wang, Yuhuang Hu, Shih-Chii Liu
This work proposes a self-supervised method called Temporal Network Grafting Algorithm (T-NGA), which grafts a recurrent network pretrained on spectrogram features so that the network works with the cochlea event features.
no code implementations • 4 Aug 2021 • Chang Gao, Tobi Delbruck, Shih-Chii Liu
The pruned networks running on Spartus hardware achieve weight sparsity levels of up to 96% and 94% with negligible accuracy loss on the TIMIT and the Librispeech datasets.
no code implementations • 23 Jun 2021 • Shih-Chii Liu, John Paul Strachan, Arindam Basu
Emerging dense non-volatile memory technologies can help to provide on-chip memory, and analog circuits are well suited to implementing the needed matrix-vector multiplication operations coupled with in-memory computing approaches.
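The pairing works because a non-volatile memory crossbar computes a matrix-vector product in a single analog step: weights are stored as conductances G, inputs are applied as row voltages V, and Kirchhoff's current law sums the products on each column. An idealized numerical model of this, where the noise level and the `crossbar_mvm` helper are illustrative:

```python
import numpy as np

def crossbar_mvm(G, V, read_noise=0.01, rng=np.random.default_rng(0)):
    """Toy model of an analog crossbar: column currents I = G^T V realize
    the multiply-accumulate of a neural network layer in one step.
    A multiplicative perturbation stands in for device variation."""
    G_eff = G * (1 + read_noise * rng.standard_normal(G.shape))
    return G_eff.T @ V   # column currents = analog dot products
```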
3 code implementations • 13 Jun 2020 • Yuhuang Hu, Shih-Chii Liu, Tobi Delbruck
The first experiment is object recognition with the N-Caltech 101 dataset.
1 code implementation • 18 May 2020 • Yuhuang Hu, Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck
The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames.
no code implementations • 29 Mar 2020 • Tobi Delbruck, Shih-Chii Liu
The energy consumed by running large deep neural networks (DNNs) on hardware accelerators is dominated by the need for large amounts of fast memory to store both states and weights.
no code implementations • ECCV 2020 • Yuhuang Hu, Tobi Delbruck, Shih-Chii Liu
This paper proposes a Network Grafting Algorithm (NGA), where a new front end network driven by unconventional visual inputs replaces the front end network of a pretrained deep network that processes intensity frames.
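A heavily simplified training step under the stated idea: the new front end learns to reproduce the intermediate features that the frozen pretrained front end produces on paired inputs. The single MSE feature-matching loss and the `graft_step` helper are simplifying assumptions; the paper may use additional losses:

```python
import torch
import torch.nn as nn

def graft_step(new_frontend, old_frontend, events, frames, opt):
    """One sketch of a grafting step: match the new front end's features
    (driven by the unconventional input) to the frozen pretrained front
    end's features on the paired intensity frames."""
    with torch.no_grad():
        target = old_frontend(frames)    # features from intensity frames
    pred = new_frontend(events)          # features from the new modality
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```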
no code implementations • 8 Feb 2020 • Chang Gao, Rachel Gehlhar, Aaron D. Ames, Shih-Chii Liu, Tobi Delbruck
Lower leg prostheses could improve the quality of life of amputees by increasing comfort and reducing the energy required for locomotion, but current control methods are limited in their ability to modulate behavior based on the wearer's experience.
no code implementations • 22 Dec 2019 • Chang Gao, Antonio Rios-Navarro, Xi Chen, Tobi Delbruck, Shih-Chii Liu
This paper presents a Gated Recurrent Unit (GRU) based recurrent neural network (RNN) accelerator called EdgeDRNN designed for portable edge computing.
1 code implementation • 29 Sep 2019 • Yi Luo, Enea Ceolini, Cong Han, Shih-Chii Liu, Nima Mesgarani
Beamforming has been extensively investigated for multi-channel audio processing tasks.
no code implementations • 6 May 2019 • Bodo Rückauer, Nicolas Känzig, Shih-Chii Liu, Tobi Delbruck, Yulia Sandamirskaya
Mobile and embedded applications require neural network-based pattern recognition systems to perform well under a tight computational budget.
no code implementations • 22 Jan 2019 • Matthew Thornton, Jithendar Anumula, Shih-Chii Liu
Finally, by employing a temporal curriculum learning schedule for the g-LSTM, we can reduce the convergence time of the equivalent LSTM network on long sequences.
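One plausible reading of such a temporal curriculum is a schedule that grows the training sequence length over epochs, so the network masters short-range dependencies before facing full-length sequences. A sketch under that assumption; the schedule shape and `curriculum_length` helper are not the paper's exact recipe:

```python
def curriculum_length(epoch, start_len=100, full_len=1000, growth=1.3):
    """Sequence length used at a given epoch under the assumed schedule."""
    return min(full_len, int(start_len * growth ** epoch))

# lengths grow 100, 130, 169, ... until reaching the full 1000 steps
schedule = [curriculum_length(e) for e in range(12)]
```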
no code implementations • ICLR 2018 • Yuhuang Hu, Adrian Huber, Jithendar Anumula, Shih-Chii Liu
Plain recurrent networks suffer greatly from the vanishing gradient problem, while Gated Neural Networks (GNNs) such as the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU) deliver promising results in many sequence learning tasks through sophisticated network designs.
1 code implementation • 4 Nov 2017 • Jonathan Binas, Daniel Neil, Shih-Chii Liu, Tobi Delbruck
Event cameras such as the dynamic vision sensor (DVS) and the dynamic and active-pixel vision sensor (DAVIS) can supplement other autonomous driving sensors by providing a concurrent stream of standard active pixel sensor (APS) images and DVS temporal contrast events.
no code implementations • ICLR 2018 • Stefan Braun, Daniel Neil, Enea Ceolini, Jithendar Anumula, Shih-Chii Liu
Recent work on encoder-decoder models for sequence-to-sequence mapping has shown that integrating both temporal and spatial attention mechanisms into neural networks increases the performance of the system substantially.
no code implementations • 5 Jun 2017 • Alessandro Aimar, Hesham Mostafa, Enrico Calabrese, Antonio Rios-Navarro, Ricardo Tapiador-Morales, Iulia-Alexandra Lungu, Moritz B. Milde, Federico Corradi, Alejandro Linares-Barranco, Shih-Chii Liu, Tobi Delbruck
By exploiting sparsity, NullHop achieves an efficiency of 368%, maintains over 98% utilization of the MAC units, and achieves a power efficiency of over 3 TOp/s/W in a core area of 6.3 mm$^2$.
no code implementations • ICML 2017 • Daniel Neil, Jun Haeng Lee, Tobi Delbruck, Shih-Chii Liu
Similarly, on the large Wall Street Journal speech recognition benchmark, even existing networks can be greatly accelerated as delta networks, and a 5.7x improvement with negligible loss of accuracy can be obtained through training.
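Delta networks exploit temporal redundancy: a matrix-vector product is updated only for input components whose change since their last significant update exceeds a threshold. A minimal NumPy sketch of this principle:

```python
import numpy as np

def delta_matvec(W, x, x_prev, y_prev, theta=0.1):
    """Only components whose change exceeds theta trigger
    multiply-accumulates; the rest reuse the stored partial sum."""
    dx = x - x_prev
    mask = np.abs(dx) > theta
    y = y_prev + W[:, mask] @ dx[mask]   # sparse column update
    x_ref = x_prev.copy()
    x_ref[mask] = x[mask]                # update reference only where fired
    return y, x_ref
```

The speedup comes from skipping entire weight-matrix columns whenever the corresponding activations barely change between timesteps, which is frequent in speech and video inputs.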
1 code implementation • 21 Nov 2016 • Joachim Ott, Zhouhan Lin, Ying Zhang, Shih-Chii Liu, Yoshua Bengio
Recurrent Neural Networks (RNNs) produce state-of-the-art performance on many machine learning tasks, but their demands on memory and computational power are often high.
4 code implementations • NeurIPS 2016 • Daniel Neil, Michael Pfeiffer, Shih-Chii Liu
In this work, we introduce the Phased LSTM model, which extends the LSTM unit by adding a new time gate.
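The time gate follows an oscillation with period τ and phase shift s, opening for a fraction r_on of each cycle and leaking slightly (slope α) when closed; cell and hidden states are updated only in proportion to the gate. The gate itself, following the formulation in the paper:

```python
def time_gate(t, tau, s, r_on=0.05, alpha=1e-3):
    """Phased LSTM time gate k(t): phi is the position within the
    oscillation cycle; the gate ramps open, ramps closed, then leaks."""
    phi = ((t - s) % tau) / tau
    if phi < 0.5 * r_on:
        return 2.0 * phi / r_on          # gate opening
    elif phi < r_on:
        return 2.0 - 2.0 * phi / r_on    # gate closing
    else:
        return alpha * phi               # closed phase: small leak
```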
1 code implementation • 24 Aug 2016 • Joachim Ott, Zhouhan Lin, Ying Zhang, Shih-Chii Liu, Yoshua Bengio
We present results from the use of different stochastic and deterministic reduced precision training methods applied to three major RNN types which are then tested on several datasets.
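Stochastic rounding is one common building block of such reduced-precision training: values are rounded up or down with probability proportional to proximity, keeping updates unbiased in expectation. A sketch of that primitive as an illustrative choice, not necessarily the paper's exact quantizer:

```python
import numpy as np

def stochastic_round(x, bits=4, rng=np.random.default_rng(0)):
    """Round an array to a 2^-(bits-1) grid: the lower grid point is
    chosen with probability equal to the distance to the upper one,
    so the rounding is unbiased in expectation."""
    scale = 2.0 ** (bits - 1)
    y = x * scale
    floor = np.floor(y)
    prob = y - floor                    # distance past the lower grid point
    return (floor + (rng.random(x.shape) < prob)) / scale
```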
no code implementations • 23 Jun 2016 • Jonathan Binas, Daniel Neil, Giacomo Indiveri, Shih-Chii Liu, Michael Pfeiffer
The operations used for neural network computation map favorably onto simple analog circuits, which outshine their digital counterparts in terms of compactness and efficiency.
no code implementations • 22 Jun 2016 • Stefan Braun, Daniel Neil, Shih-Chii Liu
The performance of automatic speech recognition systems in noisy environments still leaves room for improvement.
no code implementations • 10 Jun 2015 • Giacomo Indiveri, Shih-Chii Liu
We describe the advantages of the different approaches being pursued and present the challenges that need to be addressed for building artificial neural processing systems that can display the richness of behaviors seen in biological systems.
no code implementations • 20 Nov 2014 • Shaista Hussain, Shih-Chii Liu, Arindam Basu
This work also presents a branch-specific spike-based version of this structural plasticity rule.
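Without the paper's details, the flavor of a correlation-driven structural plasticity rule can still be sketched: on each dendritic branch, the least useful synapse is pruned and replaced by a new candidate connection. Everything below (the `structural_update` helper, the precomputed fitness scores in `corr`) is an illustrative assumption:

```python
import numpy as np

def structural_update(conn, corr, rng=np.random.default_rng(0)):
    """Hedged sketch of structural plasticity: conn[b] lists the input
    indices wired to branch b; corr[i] is an assumed fitness score for
    input i. The worst synapse on each branch is swapped for a random
    candidate, letting connectivity itself evolve during learning."""
    n_inputs = corr.shape[0]
    for b, syns in enumerate(conn):
        worst = min(range(len(syns)), key=lambda k: corr[syns[k]])
        conn[b][worst] = int(rng.integers(n_inputs))  # swap in a candidate
    return conn
```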