In this research, we propose a new low-precision framework, TENT, to leverage the benefits of a tapered fixed-point numerical format in TinyML models.
We also illustrate that the system offers a 3.46X reduction in latency and a 77.02X reduction in power consumption when compared to a custom CMOS digital design implemented at the same technology node.
The SARS-CoV-2 infectious outbreak has rapidly spread across the globe and precipitated varying policies to effectuate physical distancing to ameliorate its impact.
In the $128\times128$ network, it is observed that the number of input patterns the multistate synapse can classify is $\simeq$2.1x that of a simple binary synapse model, at a mean accuracy of $\geq$75%.
Additionally, the framework is amenable for different quantization approaches and supports mixed-precision floating point and fixed-point numerical formats.
Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit).
Echo state networks are computationally lightweight reservoir models inspired by the random projections observed in cortical circuitry.
Our results indicate that posits are a natural fit for DNN inference, outperforming conventional formats at $\leq$8-bit precision, and can be realized with competitive resource requirements relative to those of floating point.
We propose a precision-adaptable FPGA soft core for exact multiply-and-accumulate, enabling a uniform comparison across three numerical formats: fixed-point, floating-point, and posit.
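The key idea behind an exact multiply-and-accumulate is that products are summed in a wide accumulator (in the posit world, a "quire") so that rounding happens only once, at the end. The following is a minimal illustrative sketch for a fixed-point input format; the function names, bit widths, and rounding choices are assumptions for demonstration, not the FPGA soft core itself.

```python
# Illustrative sketch: exact multiply-and-accumulate (MAC) using a wide
# integer accumulator, so that rounding occurs only once at the end.
# Names and parameter choices (e.g. frac_bits=8) are illustrative assumptions.

def to_fixed(x, frac_bits=8):
    """Quantize a real value to a signed fixed-point integer (round to nearest)."""
    return int(round(x * (1 << frac_bits)))

def exact_mac(a_vals, b_vals, frac_bits=8):
    """Accumulate fixed-point products exactly, then rescale once.

    Python integers are arbitrary precision, so the accumulator never
    rounds; in hardware this corresponds to a sufficiently wide register.
    """
    acc = 0
    for a, b in zip(a_vals, b_vals):
        acc += to_fixed(a, frac_bits) * to_fixed(b, frac_bits)
    # Single rounding/rescaling step at the end, not per product.
    return acc / (1 << (2 * frac_bits))

dot = exact_mac([0.5, 0.25, -0.125], [1.0, 2.0, 4.0])  # exactly 0.5
```

Deferring the rescale to a single final step is what makes the comparison across formats "uniform": each format only changes the quantizer, not the accumulation.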
Conventional reduced-precision numerical formats, such as fixed-point and floating-point, cannot accurately represent deep neural network parameters, which exhibit a nonlinear distribution and a small dynamic range.
The spatial pooler architecture is synthesized on a Xilinx ZYNQ-7, achieving 91.16% classification accuracy on MNIST and 90% accuracy on EUNF in the presence of noise.
Neuro-inspired recurrent neural network algorithms, such as echo state networks, are computationally lightweight and thereby map well onto untethered devices.
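The computational lightness of an echo state network comes from its structure: the input and recurrent weights are fixed random projections, and only a readout is ever trained. A minimal leaky-integrator reservoir update can be sketched as follows; the reservoir size, leak rate, and weight scales here are illustrative assumptions, not values from any specific paper, and the (trained) readout is omitted.

```python
# Minimal echo state network reservoir update (pure Python, readout omitted).
# Sizes, leak rate, and random weight scales are illustrative assumptions.
import math
import random

random.seed(0)
N_IN, N_RES = 3, 20
# Fixed random projections: never trained, only the readout would be.
W_in = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_RES)]
W = [[random.uniform(-0.1, 0.1) for _ in range(N_RES)] for _ in range(N_RES)]

def esn_step(x, u, leak=0.3):
    """One leaky update: x' = (1 - a) * x + a * tanh(W x + W_in u)."""
    new_x = []
    for i in range(N_RES):
        pre = sum(W[i][j] * x[j] for j in range(N_RES))
        pre += sum(W_in[i][k] * u[k] for k in range(N_IN))
        new_x.append((1 - leak) * x[i] + leak * math.tanh(pre))
    return new_x

state = [0.0] * N_RES
for t in range(5):
    state = esn_step(state, [math.sin(t), math.cos(t), 1.0])
```

Because the update is a fixed matrix-vector product followed by an elementwise nonlinearity, the per-step cost is small and deterministic, which is what makes the model attractive for untethered devices.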
Performing the inference step of deep learning in resource-constrained environments, such as embedded devices, is challenging.
We create, develop, and implement a family of predictably optimal, robust, and stable ensembles of Echo State Networks by regularizing the training and perturbing the input.
Analyzing spatio-temporal data like video is a challenging task that requires processing visual and temporal information effectively.
Hierarchical temporal memory (HTM) is an emerging machine learning algorithm, with the potential to provide a means to perform predictions on spatiotemporal data.