Search Results for author: Grace Li Zhang

Found 10 papers, 1 paper with code

Logic Design of Neural Networks for High-Throughput and Low-Power Applications

no code implementations · 19 Sep 2023 · Kangwei Xu, Grace Li Zhang, Ulf Schlichtmann, Bing Li

However, under a given area constraint, the number of MAC units on such platforms is limited, so the MAC units must be reused to perform the MAC operations of a neural network.
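The reuse described in this snippet can be illustrated with a toy model (not from the paper): a fixed pool of MAC units is time-multiplexed across clock cycles to compute a dot product larger than the pool. The function name and cycle accounting here are purely illustrative.

```python
# Toy sketch (not the paper's design): time-multiplexing a limited
# pool of MAC units to compute a dot product wider than the pool.
def dot_with_limited_macs(weights, activations, num_mac_units):
    """Accumulate w*a products in chunks of `num_mac_units`,
    modeling MAC-unit reuse across clock cycles."""
    assert len(weights) == len(activations)
    total, cycles = 0, 0
    for start in range(0, len(weights), num_mac_units):
        chunk_w = weights[start:start + num_mac_units]
        chunk_a = activations[start:start + num_mac_units]
        total += sum(w * a for w, a in zip(chunk_w, chunk_a))
        cycles += 1  # one pass of the shared MAC array
    return total, cycles
```

With 8 inputs and only 4 MAC units, the array is reused twice: the dot product takes 2 passes instead of 1, which is the throughput cost the snippet alludes to.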

Computational and Storage Efficient Quadratic Neurons for Deep Neural Networks

no code implementations · 10 Jun 2023 · Chuangtao Chen, Grace Li Zhang, Xunzhao Yin, Cheng Zhuo, Ulf Schlichtmann, Bing Li

Deep neural networks (DNNs) have been widely deployed across diverse domains such as computer vision and natural language processing.

Image Classification · Semantic Segmentation
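For context on this entry: a quadratic neuron, in its generic form, computes y = σ(xᵀWx + wᵀx + b), which costs O(n²) multiplications per neuron; the paper targets cheaper variants. The sketch below shows only the naive generic form, to make the cost the title refers to concrete. It is not the paper's proposed neuron.

```python
import math

# Illustrative only: the generic quadratic-neuron form
# y = sigma(x^T W x + w^T x + b). The naive double loop makes the
# O(n^2) multiplication cost explicit; the paper proposes more
# computation- and storage-efficient variants.
def quadratic_neuron(x, W, w, b):
    n = len(x)
    quad = sum(x[i] * W[i][j] * x[j] for i in range(n) for j in range(n))
    lin = sum(w[i] * x[i] for i in range(n))
    return 1.0 / (1.0 + math.exp(-(quad + lin + b)))  # sigmoid activation
```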

PowerPruning: Selecting Weights and Activations for Power-Efficient Neural Network Acceleration

no code implementations · 24 Mar 2023 · Richard Petri, Grace Li Zhang, Yiran Chen, Ulf Schlichtmann, Bing Li

To address this challenge, we propose PowerPruning, a novel method to reduce power consumption in digital neural network accelerators by selecting weights that lead to less power consumption in MAC operations.

Efficient Neural Network
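The snippet says PowerPruning keeps weights that incur less MAC power. As a hypothetical illustration only (the paper's actual cost model is not given here), one could rank weights by a proxy power cost, such as the number of 1-bits in their fixed-point encoding, since fewer set bits roughly means fewer toggling partial products in a MAC, and prune the costliest ones. Function name and proxy are assumptions.

```python
# Hypothetical illustration (NOT the paper's cost model): rank weights
# by a proxy MAC power cost -- the popcount of an 8-bit fixed-point
# magnitude -- and keep only the cheapest fraction.
def prune_by_power(weights, keep_ratio=0.5, bits=8):
    def power_cost(w):
        q = int(round(abs(w) * (2 ** (bits - 1) - 1)))  # quantize magnitude
        return bin(q).count("1")                        # 1-bit count proxy
    ranked = sorted(weights, key=power_cost)            # cheapest first
    return ranked[: int(len(weights) * keep_ratio)]
```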

Class-based Quantization for Neural Networks

no code implementations · 27 Nov 2022 · Wenhao Sun, Grace Li Zhang, Huaxi Gu, Bing Li, Ulf Schlichtmann

In the proposed method, the importance score of each filter or neuron with respect to the number of classes in the dataset is first evaluated.

Quantization
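One hedged way to read the snippet's "importance score of each filter with respect to the classes": a filter whose output varies strongly across classes is discriminative and should keep more quantization bits. The scoring and bit split below are my illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch of class-aware bit allocation: score each filter
# by the variance of its mean activation across classes (more variance
# = more class-discriminative), then give high-scoring filters more
# quantization bits. Scoring rule and median split are assumptions.
def class_based_bit_allocation(per_class_means, high_bits=8, low_bits=4):
    """per_class_means: one list per filter, holding that filter's
    mean activation for each class."""
    def importance(means):
        mu = sum(means) / len(means)
        return sum((m - mu) ** 2 for m in means) / len(means)  # variance
    scores = [importance(m) for m in per_class_means]
    threshold = sorted(scores)[len(scores) // 2]  # median split
    return [high_bits if s >= threshold else low_bits for s in scores]
```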

SteppingNet: A Stepping Neural Network with Incremental Accuracy Enhancement

no code implementations · 27 Nov 2022 · Wenhao Sun, Grace Li Zhang, Xunzhao Yin, Cheng Zhuo, Huaxi Gu, Bing Li, Ulf Schlichtmann

On such platforms, neural networks need to deliver acceptable results quickly, and it should be possible to enhance the accuracy of those results dynamically according to the computational resources available in the computing system.

Autonomous Vehicles
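The "incremental accuracy" idea in this snippet resembles anytime inference: run progressively larger sub-networks until a cycle budget runs out, keeping the latest (presumably better) prediction. The sketch below illustrates that general pattern under my own assumptions; it is not SteppingNet's actual architecture.

```python
# Hedged sketch of anytime inference in the spirit of the snippet
# (not SteppingNet itself): evaluate sub-network stages of increasing
# cost/accuracy until the budget is exhausted.
def anytime_predict(subnet_stages, x, cycle_budget):
    """subnet_stages: list of (cost, predict_fn), ordered by
    increasing cost and (ideally) increasing accuracy."""
    prediction, spent = None, 0
    for cost, predict in subnet_stages:
        if spent + cost > cycle_budget:
            break  # next stage would exceed the budget
        prediction = predict(x)
        spent += cost
    return prediction, spent
```

With a tight budget only the coarse stage runs; with more resources the finer stage overwrites it, mirroring the dynamic accuracy enhancement described above.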

CorrectNet: Robustness Enhancement of Analog In-Memory Computing for Neural Networks by Error Suppression and Compensation

no code implementations · 27 Nov 2022 · Amro Eldebiky, Grace Li Zhang, Georg Boecherer, Bing Li, Ulf Schlichtmann

These acceleration platforms rely on analog properties of the devices and thus suffer from process variations and noise.
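A common way to study the process variations and noise the snippet mentions is to model each analog cell's stored weight as perturbed by multiplicative Gaussian noise and observe the output error. The sketch below is that generic evaluation setup, not CorrectNet's specific error model or compensation scheme.

```python
import random

# Illustrative only: modeling process variation in an analog crossbar
# as multiplicative Gaussian noise on stored weights, a common setup
# for evaluating in-memory-computing robustness (the paper's error
# model and compensation may differ).
def noisy_matvec(W, x, sigma=0.1, rng=None):
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    out = []
    for row in W:
        acc = 0.0
        for w, xi in zip(row, x):
            acc += w * (1.0 + rng.gauss(0.0, sigma)) * xi  # perturbed cell
        out.append(acc)
    return out
```

Setting `sigma=0.0` recovers the exact matrix-vector product, so the same function serves as both the ideal and the variation-affected baseline.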
