Search Results for author: Chuteng Zhou

Found 7 papers, 2 papers with code

UDC: Unified DNAS for Compressible TinyML Models

no code implementations • 15 Jan 2022 • Igor Fedorov, Ramon Matas, Hokchhay Tann, Chuteng Zhou, Matthew Mattina, Paul Whatmough

Emerging Internet-of-things (IoT) applications are driving deployment of neural networks (NNs) on heavily constrained low-cost hardware (HW) platforms, where accuracy is typically limited by memory capacity.

Model Compression · Neural Architecture Search +2
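
As a loose illustration of differentiable NAS with a compression-aware objective, here is a minimal PyTorch sketch: a Gumbel-softmax choice over candidate channel widths plus an expected-memory penalty. The layer, widths, and penalty weight are hypothetical, not the actual UDC search space or objective.

```python
# Minimal DNAS sketch: differentiable choice over channel widths,
# penalized by expected weight memory. Illustrative only -- not UDC.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SearchableConv(nn.Module):
    """Conv layer whose output width is chosen differentiably."""
    def __init__(self, in_ch, widths=(8, 16, 32)):
        super().__init__()
        self.widths = widths
        self.alpha = nn.Parameter(torch.zeros(len(widths)))  # arch logits
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, w, 3, padding=1) for w in widths)

    def forward(self, x):
        # Soft one-hot sample over the width choices.
        gate = F.gumbel_softmax(self.alpha, tau=1.0, hard=False)
        outs = [conv(x) for conv in self.convs]
        # Zero-pad narrower outputs so the candidates can be mixed.
        max_w = max(self.widths)
        outs = [F.pad(o, (0, 0, 0, 0, 0, max_w - o.shape[1])) for o in outs]
        return sum(g * o for g, o in zip(gate, outs))

    def expected_memory(self):
        # Expected parameter count under current architecture weights.
        probs = F.softmax(self.alpha, dim=0)
        sizes = torch.tensor([c.weight.numel() for c in self.convs],
                             dtype=probs.dtype)
        return (probs * sizes).sum()

layer = SearchableConv(3)
x = torch.randn(2, 3, 16, 16)
task_loss = layer(x).pow(2).mean()                 # stand-in for a real loss
loss = task_loss + 1e-6 * layer.expected_memory()  # memory-aware objective
loss.backward()
```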

AnalogNets: ML-HW Co-Design of Noise-robust TinyML Models and Always-On Analog Compute-in-Memory Accelerator

no code implementations • 10 Nov 2021 • Chuteng Zhou, Fernando Garcia Redondo, Julian Büchel, Irem Boybat, Xavier Timoneda Comas, S. R. Nandakumar, Shidhartha Das, Abu Sebastian, Manuel Le Gallo, Paul N. Whatmough

We also describe AON-CiM, a programmable, minimal-area phase-change memory (PCM) analog CiM accelerator, with a novel layer-serial approach to remove the cost of complex interconnects associated with a fully-pipelined design.

Keyword Spotting
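
As a rough picture of what noise-robust analog inference contends with, below is a small NumPy sketch that injects Gaussian "programming" noise into weights before each analog matmul and reuses one tile layer by layer. The noise model, its scale, and the function names are illustrative assumptions, not the PCM statistics or hardware model from the paper.

```python
# Sketch: simulate analog compute-in-memory inference by perturbing
# weights with Gaussian programming noise at load time. Assumed model.
import numpy as np

rng = np.random.default_rng(0)

def program_weights(w, noise_std=0.05):
    """Return a noisy copy of w, mimicking conductance programming error."""
    return w + rng.normal(0.0, noise_std * np.abs(w).max(), size=w.shape)

def analog_matmul(x, w_programmed):
    """One CiM tile: the multiply-accumulate runs on noisy weights."""
    return x @ w_programmed

# Layer-serial execution: one physical tile is reused layer by layer,
# avoiding the interconnect cost of a fully-pipelined multi-tile design.
layers = [rng.standard_normal((16, 16)) for _ in range(3)]
x = rng.standard_normal((1, 16))
for w in layers:
    x = np.maximum(analog_matmul(x, program_weights(w)), 0.0)  # ReLU
print(x.shape)
```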

Active multi-fidelity Bayesian online changepoint detection

1 code implementation • 26 Mar 2021 • Gregory W. Gundersen, Diana Cai, Chuteng Zhou, Barbara E. Engelhardt, Ryan P. Adams

We propose a multi-fidelity approach that makes cost-sensitive decisions about which data fidelity to collect based on maximizing information gain with respect to changepoints.

Edge-computing · Time Series
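
A minimal sketch of the idea, assuming a Gaussian observation model with known per-fidelity noise: run standard Bayesian online changepoint detection (Adams-MacKay style) and pick, at each step, the fidelity whose simulated next observation shrinks the run-length posterior's entropy the most per unit cost. The model, the choose_fidelity heuristic, and all constants are illustrative, not the paper's algorithm.

```python
# Sketch: cost-sensitive fidelity selection on top of Gaussian BOCD.
import numpy as np
from scipy.stats import norm

HAZARD = 1 / 100.0  # constant changepoint hazard rate

def bocd_step(r, mu, var, x, obs_var):
    """One BOCD update for a Gaussian model with known noise obs_var."""
    pred = norm.pdf(x, loc=mu, scale=np.sqrt(var + obs_var))
    growth = r * pred * (1 - HAZARD)   # run continues
    cp = (r * pred * HAZARD).sum()     # changepoint occurs
    r_new = np.append(cp, growth)
    r_new /= r_new.sum()
    # Conjugate Gaussian update of per-run-length mean/variance.
    var_post = 1.0 / (1.0 / var + 1.0 / obs_var)
    mu_post = var_post * (mu / var + x / obs_var)
    return r_new, np.append(0.0, mu_post), np.append(1.0, var_post)

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def choose_fidelity(r, mu, var, fidelities, n_sim=64, seed=0):
    """Pick (obs_var, cost) maximizing expected entropy drop per cost."""
    rng = np.random.default_rng(seed)
    scores = []
    for obs_var, cost in fidelities:
        drop = 0.0
        for _ in range(n_sim):  # Monte Carlo over imagined next samples
            k = rng.choice(len(r), p=r)
            x = rng.normal(mu[k], np.sqrt(var[k] + obs_var))
            r2, _, _ = bocd_step(r, mu, var, x, obs_var)
            drop += entropy(r) - entropy(r2)
        scores.append(drop / (n_sim * cost))
    return fidelities[int(np.argmax(scores))]

# Cheap-but-noisy vs. expensive-but-clean observation channels.
r, mu, var = np.array([1.0]), np.array([0.0]), np.array([1.0])
print(choose_fidelity(r, mu, var, [(1.0, 1.0), (0.1, 5.0)]))
```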

Information contraction in noisy binary neural networks and its implications

no code implementations • 28 Jan 2021 • Chuteng Zhou, Quntao Zhuang, Matthew Mattina, Paul N. Whatmough

Our strong data processing inequality (SDPI) can be applied to various information processing systems, including neural networks and cellular automata.

Image Classification · Object Detection
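
For context, here is the canonical strong data processing inequality for a binary symmetric channel, a standard textbook special case rather than the paper's exact bound:

```latex
% Canonical SDPI for the binary symmetric channel BSC(\epsilon):
% for any Markov chain X -> Y -> Z where Z is Y flipped with
% probability \epsilon, mutual information contracts as
I(X;Z) \;\le\; (1-2\epsilon)^2 \, I(X;Y).
% Stacking L such noisy binary stages therefore contracts
% information by at least the factor (1-2\epsilon)^{2L}.
```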

Noisy Machines: Understanding Noisy Neural Networks and Enhancing Robustness to Analog Hardware Errors Using Distillation

no code implementations • 14 Jan 2020 • Chuteng Zhou, Prad Kadambi, Matthew Mattina, Paul N. Whatmough

Hence, for successful deployment on analog accelerators, it is essential to train deep neural networks to be robust to random continuous noise in the network weights, a relatively new challenge in machine learning.

Knowledge Distillation
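
The title points to knowledge distillation as the robustness mechanism; below is a minimal PyTorch sketch under that reading: a student whose linear layers see fresh Gaussian weight noise on every forward pass is distilled from a clean teacher. The noise scale, temperature, and loss weighting are illustrative assumptions, not the paper's settings.

```python
# Sketch: noisy-weight student distilled from a clean teacher.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Linear layer with fresh additive Gaussian weight noise per call."""
    def __init__(self, d_in, d_out, noise_std=0.05):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
        self.noise_std = noise_std

    def forward(self, x):
        w = self.lin.weight
        noise = self.noise_std * w.abs().max().detach() * torch.randn_like(w)
        return F.linear(x, w + noise, self.lin.bias)  # out-of-place noise

teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(NoisyLinear(32, 64), nn.ReLU(), NoisyLinear(64, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x, y, T = torch.randn(8, 32), torch.randint(0, 10, (8,)), 4.0
logits_s = student(x)
with torch.no_grad():
    logits_t = teacher(x)  # clean teacher targets
kd = F.kl_div(F.log_softmax(logits_s / T, dim=1),
              F.softmax(logits_t / T, dim=1),
              reduction="batchmean") * T * T
loss = F.cross_entropy(logits_s, y) + kd
opt.zero_grad(); loss.backward(); opt.step()
```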

FixyNN: Efficient Hardware for Mobile Computer Vision via Transfer Learning

1 code implementation • 27 Feb 2019 • Paul N. Whatmough, Chuteng Zhou, Patrick Hansen, Shreyas Kolala Venkataramanaiah, Jae-sun Seo, Matthew Mattina

Over a suite of six datasets, we trained models via transfer learning with an accuracy loss of <1%, resulting in up to 11.2 TOPS/W, nearly 2× more efficient than a conventional programmable CNN accelerator of the same area.

General Classification · Image Classification +1
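
A minimal sketch of the transfer-learning split the paper exploits: a frozen shared feature extractor (standing in for the fixed-weight hardware front-end) feeding a small trainable per-task head. The torchvision backbone and head shape are illustrative choices, not FixyNN's actual network.

```python
# Sketch: frozen shared front-end + trainable per-task head.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

backbone = mobilenet_v2(weights="DEFAULT").features  # pretrained features
for p in backbone.parameters():
    p.requires_grad = False  # "fixed" front-end: never updated

head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                     nn.Linear(1280, 10))  # per-task classifier

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, 10, (4,))

with torch.no_grad():          # front-end runs inference-only
    feats = backbone(x)
loss = nn.functional.cross_entropy(head(feats), y)
opt.zero_grad(); loss.backward(); opt.step()
```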

Energy Efficient Hardware for On-Device CNN Inference via Transfer Learning

no code implementations • 4 Dec 2018 • Paul Whatmough, Chuteng Zhou, Patrick Hansen, Matthew Mattina

On-device CNN inference for real-time computer vision applications can result in computational demands that far exceed the energy budgets of mobile devices.

Image Classification · Transfer Learning
