no code implementations • 1 Dec 2022 • Franyell Silfa, Jose Maria Arnau, Antonio González
In this regard, BNNs are particularly well suited to edge devices since they use a single bit to store each input and weight, keeping their storage requirements low.
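As a rough illustration of those storage savings (a sketch, not the paper's implementation), binarized weights can be bit-packed so that 8 fit in one byte, a 32x reduction over float32:

```python
import numpy as np

def packed_size_bytes(num_weights: int) -> int:
    # Hypothetical binarization: threshold random values to {0, 1} bits.
    weights = np.random.rand(num_weights) > 0.5
    # np.packbits stores 8 binary weights per byte.
    return np.packbits(weights).nbytes

n = 1024
float32_bytes = n * 4              # 4096 bytes at full precision
binary_bytes = packed_size_bytes(n)  # 128 bytes when bit-packed
print(float32_bytes // binary_bytes)  # 32
```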
no code implementations • 14 Feb 2022 • Franyell Silfa, Jose-Maria Arnau, Antonio González
In this paper, we observe that the output of a neuron exhibits small changes in consecutive invocations. We exploit this property to build a neuron-level fuzzy memoization scheme, which dynamically caches each neuron's output and reuses it whenever the current output is predicted to be similar to a previously computed result, thereby avoiding the output computation.
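A toy software sketch of this idea (the paper proposes a hardware scheme; the threshold and the cheap predictor below are illustrative assumptions): a low-cost estimate decides whether the cached output can be reused instead of recomputing the neuron.

```python
import math

class FuzzyMemoNeuron:
    """Toy neuron-level fuzzy memoization: reuse the cached output when a
    cheap low-precision estimate suggests the new output would be close
    to the previously computed one."""
    def __init__(self, weights, threshold=0.05):
        self.weights = weights
        self.threshold = threshold      # hypothetical similarity threshold
        self.cached_out = None
        self.cached_estimate = None

    def _cheap_estimate(self, inputs):
        # Stand-in for a low-cost predictor: a coarsely quantized dot product.
        return sum(round(w, 1) * round(x, 1)
                   for w, x in zip(self.weights, inputs))

    def forward(self, inputs):
        est = self._cheap_estimate(inputs)
        if (self.cached_out is not None
                and abs(est - self.cached_estimate) < self.threshold):
            return self.cached_out      # reuse: full computation skipped
        out = math.tanh(sum(w * x for w, x in zip(self.weights, inputs)))
        self.cached_out, self.cached_estimate = out, est
        return out

neuron = FuzzyMemoNeuron([0.5, -0.3])
first = neuron.forward([1.0, 2.0])
second = neuron.forward([1.001, 2.001])  # near-identical input: cached output reused
```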
Automatic Speech Recognition (ASR) +2
no code implementations • 22 Sep 2020 • Franyell Silfa, Jose Maria Arnau, Antonio Gonzalez
However, RNN batching requires a large amount of padding, since the input sequences in a batch may differ greatly in length.
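To see why this matters, a small sketch (the sequence lengths are made up for illustration) of the fraction of computation a padded batch wastes:

```python
def padding_overhead(lengths):
    """Fraction of batched elements that are padding rather than real data."""
    max_len = max(lengths)
    total = max_len * len(lengths)   # elements actually processed
    useful = sum(lengths)            # elements carrying real data
    return (total - useful) / total

# One long sequence forces every other sequence up to its length.
print(padding_overhead([100, 20, 35, 50]))  # 0.4875 -> ~49% of work is padding
```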
no code implementations • 7 Nov 2019 • Franyell Silfa, Jose-Maria Arnau, Antonio González
Based on this observation, we implement a novel hardware scheme that tracks the evolution of the elements in the LSTM cell state and dynamically selects the appropriate precision in each time step.
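A toy software analogue of that selection logic (the paper implements it in hardware; the bit widths and threshold here are hypothetical): when the cell state has barely changed since the previous time step, a lower precision suffices.

```python
def select_precision(prev_state, new_state, threshold=0.1):
    """Pick a bit width for this time step from how much the cell state moved."""
    max_delta = max(abs(a - b) for a, b in zip(prev_state, new_state))
    return 8 if max_delta < threshold else 16  # bits (illustrative choices)

def quantize(state, bits):
    # Symmetric fixed-point quantization for values roughly in [-1, 1].
    scale = (1 << (bits - 1)) - 1
    return [round(x * scale) / scale for x in state]

prev = [0.50, -0.20, 0.05]
new  = [0.52, -0.19, 0.06]       # small evolution -> low precision is enough
bits = select_precision(prev, new)
approx = quantize(new, bits)
print(bits)  # 8
```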
no code implementations • 20 Nov 2017 • Franyell Silfa, Gem Dot, Jose-Maria Arnau, Antonio Gonzalez
The main goal of E-PUR is to support large recurrent neural networks for low-power mobile devices.
Automatic Speech Recognition (ASR) +2