Search Results for author: Zhenhua Zhu

Found 5 papers, 0 papers with code

EPIM: Efficient Processing-In-Memory Accelerators based on Epitome

no code implementations · 12 Nov 2023 · Chenyu Wang, Zhen Dong, Daquan Zhou, Zhenhua Zhu, Yu Wang, Jiashi Feng, Kurt Keutzer

On the hardware side, we modify the datapath of current PIM accelerators to accommodate epitomes and implement a feature map reuse technique to reduce computation cost.

Model Compression, Neural Architecture Search, +1
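The abstract above describes deriving accelerator-friendly weights from a compact "epitome". As a hedged sketch of that idea (the actual EPIM indexing and datapath mapping are not specified here, so the crop-based scheme below is an assumption), many k×k filters can be read out as overlapping windows of one small shared parameter grid:

```python
import numpy as np

def filters_from_epitome(epitome, k):
    """Derive k x k filters as overlapping crops of a small shared
    'epitome' parameter grid. Illustrative assumption only: the paper's
    exact extraction scheme may differ."""
    H, W = epitome.shape
    return np.stack([epitome[i:i + k, j:j + k]
                     for i in range(H - k + 1)
                     for j in range(W - k + 1)])

# A 4x4 epitome yields (4 - 3 + 1)**2 = 4 overlapping 3x3 filters,
# so 4 filters share only 16 stored parameters instead of 36.
E = np.arange(16.0).reshape(4, 4)
banks = filters_from_epitome(E, 3)
```

Because every filter is a view into the same small grid, neighbouring filters share most of their entries, which is what makes feature-map reuse attractive on the hardware side.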

Enabling Lower-Power Charge-Domain Nonvolatile In-Memory Computing with Ferroelectric FETs

no code implementations · 2 Feb 2021 · Guodong Yin, Yi Cai, Juejian Wu, Zhengyang Duan, Zhenhua Zhu, Yongpan Liu, Yu Wang, Huazhong Yang, Xueqing Li

Compute-in-memory (CiM) is a promising approach to alleviating the memory wall problem for domain-specific applications.

Emerging Technologies

Explore the Potential of CNN Low Bit Training

no code implementations · 1 Jan 2021 · Kai Zhong, Xuefei Ning, Tianchen Zhao, Zhenhua Zhu, Shulin Zeng, Guohao Dai, Yu Wang, Huazhong Yang

Through this dynamic precision framework, we can reduce the bit-width of convolution, which is the most computationally expensive operation, while keeping the training process close to full-precision floating-point training.

Quantization
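The core operation behind reducing the convolution bit-width is quantizing the tensors it consumes. A minimal sketch, assuming symmetric uniform "fake" quantization (the paper's exact dynamic-precision scheme is not given here):

```python
import numpy as np

def quantize_uniform(x, bits):
    """Symmetric uniform 'fake' quantization to the given bit-width.

    Sketch only: rounds onto 2**(bits - 1) - 1 integer levels per sign
    and maps back to floats, as commonly done when simulating low-bit
    training; the paper's actual scheme may differ.
    """
    qmax = 2 ** (bits - 1) - 1
    amax = np.abs(x).max()
    scale = amax / qmax if amax > 0 else 1.0
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

# Toy "dynamic precision": quantize the convolution weights to 4 bits
# while the surrounding computation stays in full precision.
rng = np.random.default_rng(0)
w = rng.standard_normal((3, 3, 16, 16))
w4 = quantize_uniform(w, 4)
```

A dynamic framework in this spirit would pick `bits` per layer or per training phase, spending low precision where the convolution dominates cost.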

Exploring the Potential of Low-bit Training of Convolutional Neural Networks

no code implementations · 4 Jun 2020 · Kai Zhong, Xuefei Ning, Guohao Dai, Zhenhua Zhu, Tianchen Zhao, Shulin Zeng, Yu Wang, Huazhong Yang

For training a variety of models on CIFAR-10, using 1-bit mantissa and 2-bit exponent is adequate to keep the accuracy loss within $1\%$.

Quantization
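A 1-bit mantissa and 2-bit exponent defines a tiny floating-point format. The rounding below is a hedged sketch of what storing a value in such a format means; the exponent range (`e_min`) and saturation behaviour are assumptions, as the paper's exact format is not reproduced here:

```python
import math

def quantize_lbf(x, exp_bits=2, man_bits=1, e_min=-2):
    """Round x to a tiny float format with `man_bits` mantissa bits and
    `exp_bits` exponent bits. Illustrative only: the exponent bias and
    overflow handling are assumptions, not the paper's specification."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    m, e = math.frexp(abs(x))                  # abs(x) = m * 2**e, m in [0.5, 1)
    step = 0.5 / (2 ** man_bits)               # mantissa grid spacing
    m_q = round(m / step) * step               # round mantissa onto the grid
    if m_q >= 1.0:                             # mantissa rounding overflowed
        m_q, e = 0.5, e + 1
    e = max(e_min, min(e, e_min + 2 ** exp_bits - 1))  # clamp exponent range
    return sign * m_q * 2.0 ** e
```

With 1 mantissa bit the significand only takes two values per binade, so e.g. `quantize_lbf(0.8)` snaps to `0.75`; the accuracy result above says this coarseness costs under 1% on CIFAR-10.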

FTT-NAS: Discovering Fault-Tolerant Convolutional Neural Architecture

no code implementations · 20 Mar 2020 · Xuefei Ning, Guangjun Ge, Wenshuo Li, Zhenhua Zhu, Yin Zheng, Xiaoming Chen, Zhen Gao, Yu Wang, Huazhong Yang

By inspecting the discovered architectures, we find that the operation primitives, the weight quantization range, the model capacity, and the connection pattern all influence the fault resilience of NN models.

Neural Architecture Search, Quantization
