no code implementations • 15 Jan 2022 • Igor Fedorov, Ramon Matas, Hokchhay Tann, Chuteng Zhou, Matthew Mattina, Paul Whatmough
Deploying TinyML models on low-cost IoT hardware is very challenging due to limited device memory capacity.
1 code implementation • 8 Sep 2019 • Hokchhay Tann, Heng Zhao, Sherief Reda
To attain accurate and efficient FCN models, we propose a three-step SW/HW co-design methodology consisting of FCN architectural exploration, precision quantization, and hardware acceleration.
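The precision-quantization step can be illustrated with a minimal sketch, assuming symmetric uniform fixed-point quantization; the `quantize` helper and sample weights below are illustrative, not the paper's actual co-design flow.

```python
import numpy as np

def quantize(weights, bits):
    """Uniformly quantize to a signed fixed-point grid of the given bit width,
    then dequantize back to floats so the rounding error can be inspected."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(weights)) / qmax          # one scale per tensor
    q = np.clip(np.round(weights / scale), -qmax, qmax)
    return q * scale, scale                         # dequantized values, scale

w = np.array([0.8, -0.3, 0.05, -0.62])              # toy FCN weights
w8, s = quantize(w, 8)
print(np.max(np.abs(w - w8)))                       # error bounded by s / 2
```

Per-tensor quantization like this trades a small, bounded rounding error (at most half a quantization step) for much cheaper storage and arithmetic on the accelerator.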
no code implementations • 23 Jan 2018 • Hokchhay Tann, Soheil Hashemi, Sherief Reda
In addition, DNNs are typically deployed in ensembles to boost accuracy, which further exacerbates the system requirements.
no code implementations • 11 May 2017 • Hokchhay Tann, Soheil Hashemi, Iris Bahar, Sherief Reda
In addition, we propose a hardware accelerator design that achieves low-power, low-latency inference with negligible degradation in accuracy.
no code implementations • 12 Dec 2016 • Soheil Hashemi, Nicholas Anthony, Hokchhay Tann, R. Iris Bahar, Sherief Reda
While a large number of dedicated hardware designs using different precisions have recently been proposed, there exists no comprehensive study of different bit precisions and arithmetic for both inputs and network parameters.
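The kind of study described can be sketched by sweeping a shared bit width over both inputs and weights and measuring the resulting arithmetic error; this uses simple uniform fixed-point quantization as an assumed stand-in for the paper's evaluation protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)    # layer inputs (activations)
w = rng.standard_normal(256)    # layer weights (network parameters)
exact = x @ w                   # full-precision dot product as reference

def fixed_point(v, bits):
    """Round a tensor onto a signed fixed-point grid and dequantize."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(v)) / qmax
    return np.round(v / scale).clip(-qmax, qmax) * scale

# Quantize inputs AND weights, as the study considers both.
for bits in (4, 8, 16, 32):
    approx = fixed_point(x, bits) @ fixed_point(w, bits)
    print(f"{bits:2d}-bit inputs+weights: |error| = {abs(exact - approx):.6f}")
```

The error shrinks rapidly with bit width, which is exactly the accuracy-versus-precision trade-off such a study quantifies across whole networks rather than a single dot product.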
no code implementations • 19 Jul 2016 • Hokchhay Tann, Soheil Hashemi, R. Iris Bahar, Sherief Reda
We present a novel dynamic configuration technique for deep neural networks that permits step-wise energy-accuracy trade-offs during runtime.
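The general idea of a runtime energy-accuracy knob can be sketched as a layer whose active width is chosen per inference: fewer active units means fewer multiply-accumulates (a proxy for energy) at some accuracy cost. This is a hypothetical illustration of step-wise configurability, not the paper's specific technique.

```python
import numpy as np

class ConfigurableLayer:
    """A linear-ReLU-linear block whose hidden width is set at runtime.

    Evaluating only the first `width` hidden units gives a step-wise
    energy-accuracy trade-off: each step down skips a slice of the
    multiply-accumulate work. (Hypothetical sketch, not the paper's scheme.)
    """
    def __init__(self, w_in, w_out):
        self.w_in, self.w_out = w_in, w_out   # shapes (hidden, d), (c, hidden)

    def forward(self, x, width):
        h = np.maximum(self.w_in[:width] @ x, 0)   # only first `width` units
        return self.w_out[:, :width] @ h

rng = np.random.default_rng(1)
layer = ConfigurableLayer(rng.standard_normal((64, 16)),
                          rng.standard_normal((10, 64)))
x = rng.standard_normal(16)
full = layer.forward(x, 64)     # high-accuracy, high-energy configuration
cheap = layer.forward(x, 16)    # low-energy approximation, same weights
print(np.linalg.norm(full - cheap))
```

In practice such a network would be trained so that every configured width remains usable, letting the deployed system step between operating points without reloading weights.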