no code implementations • 25 Feb 2024 • Bo Liu, Grace Li Zhang, Xunzhao Yin, Ulf Schlichtmann, Bing Li
In this new design, the multipliers are replaced by simple logic gates to project the results onto a wide bit representation.
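One way to picture this is that each weight bit simply gates (ANDs) the activation and routes it to its shifted bit position, so the partial products land directly in a wide bit representation with no multiplier array. The sketch below is illustrative only, with hypothetical names and bit widths; the actual circuit in the paper may differ.

```python
def wide_projection(act: int, weight: int, bits: int = 4):
    """Project a multiplication onto a wide bit representation.

    Each weight bit gates the activation with a logical AND and places it
    at its shifted position -- shifting is just wiring, not arithmetic.
    (Hypothetical sketch; not the paper's exact design.)
    """
    partials = []
    for i in range(bits):
        if (weight >> i) & 1:          # one AND gate per weight bit
            partials.append(act << i)  # shift = wiring to a wider word
        else:
            partials.append(0)
    return partials

# Summing the wide representation recovers the exact product.
assert sum(wide_projection(13, 11)) == 13 * 11
```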
1 code implementation • 10 Dec 2023 • Mengnan Jiang, Jingcun Wang, Amro Eldebiky, Xunzhao Yin, Cheng Zhuo, Ing-Chao Lin, Grace Li Zhang
Filters that are important for only a few classes are removed.
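A minimal sketch of this idea, assuming a per-class importance matrix (e.g. mean activation of a filter on samples of each class) and a simple threshold rule; the metric and criterion here are placeholders, not the paper's exact method:

```python
import numpy as np

def class_aware_prune(scores: np.ndarray, thresh: float, min_classes: int):
    """scores[f, c]: importance of filter f for class c (hypothetical metric).

    Keep a filter only if it is important (score above thresh) for at
    least min_classes classes; filters that matter to only a few classes
    are pruned. Returns the indices of the filters that are kept.
    """
    important = scores > thresh                  # boolean, filters x classes
    keep = important.sum(axis=1) >= min_classes  # important for enough classes
    return np.flatnonzero(keep)

scores = np.array([[0.9, 0.8, 0.7],   # broadly important -> keep
                   [0.9, 0.1, 0.0],   # matters to one class -> remove
                   [0.6, 0.7, 0.9]])  # broadly important -> keep
print(class_aware_prune(scores, thresh=0.5, min_classes=2))  # -> [0 2]
```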
no code implementations • 3 Dec 2023 • Ruidi Qiu, Amro Eldebiky, Grace Li Zhang, Xunzhao Yin, Cheng Zhuo, Ulf Schlichtmann, Bing Li
In conventional ONNs, light amplitudes are modulated at the input and detected at the output.
no code implementations • 23 Sep 2023 • Jingcun Wang, Bing Li, Grace Li Zhang
Deep neural networks (DNNs) have been successfully applied in various fields.
no code implementations • 19 Sep 2023 • Kangwei Xu, Grace Li Zhang, Ulf Schlichtmann, Bing Li
However, under a given area constraint, the number of MAC units in such platforms is limited, so they have to be reused to perform the MAC operations of a neural network.
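The cost of this reuse can be estimated with a back-of-the-envelope schedule: time-multiplexing a layer's MAC operations over the available units takes at least ceil(operations / units) cycles. The numbers below are hypothetical:

```python
import math

def mac_schedule(layer_macs, num_units):
    """Minimum cycles per layer when the available MAC units are
    time-multiplexed (reused) across a layer's MAC operations."""
    return [math.ceil(m / num_units) for m in layer_macs]

# Hypothetical network: three layers under an area budget of 64 MAC units.
layers = [1_000_000, 250_000, 10_000]
print(mac_schedule(layers, num_units=64))  # -> [15625, 3907, 157]
```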
no code implementations • 10 Jun 2023 • Chuangtao Chen, Grace Li Zhang, Xunzhao Yin, Cheng Zhuo, Ulf Schlichtmann, Bing Li
Deep neural networks (DNNs) have been widely deployed across diverse domains such as computer vision and natural language processing.
no code implementations • 24 Mar 2023 • Richard Petri, Grace Li Zhang, Yiran Chen, Ulf Schlichtmann, Bing Li
To address this challenge, we propose PowerPruning, a novel method to reduce power consumption in digital neural network accelerators by selecting weights that lead to less power consumption in MAC operations.
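As a rough illustration of selecting low-power weights, the sketch below uses the number of 1-bits in a weight's binary encoding as a stand-in proxy for MAC switching activity and snaps each weight to the nearest low-power value. Both the proxy and the candidate set are assumptions for illustration, not PowerPruning's actual cost model:

```python
def popcount(x: int) -> int:
    """Number of set bits in x's binary encoding."""
    return bin(x).count("1")

def low_power_quantize(weights, bits=8, max_ones=2):
    """Map each integer weight to the nearest value whose encoding has at
    most max_ones set bits, assuming (hypothetically) that MAC power grows
    with the number of 1-bits in a weight."""
    candidates = [v for v in range(2 ** bits) if popcount(v) <= max_ones]
    return [min(candidates, key=lambda c: abs(c - w)) for w in weights]

print(low_power_quantize([7, 100, 255]))  # e.g. 7 (0b111) -> 6 (0b110)
```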
no code implementations • 27 Nov 2022 • Wenhao Sun, Grace Li Zhang, Huaxi Gu, Bing Li, Ulf Schlichtmann
In the proposed method, the importance score of each filter or neuron with respect to the classes in the dataset is first evaluated.
no code implementations • 27 Nov 2022 • Wenhao Sun, Grace Li Zhang, Xunzhao Yin, Cheng Zhuo, Huaxi Gu, Bing Li, Ulf Schlichtmann
In such platforms, neural networks need to provide acceptable results quickly, and it should be possible to enhance the accuracy of the results dynamically according to the computational resources available in the computing system.
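One common pattern for this kind of anytime behavior is to run as many refinement stages as the current compute budget allows: a coarse result is available early, and accuracy improves when more resources can be spent. The stages and costs below are hypothetical and only illustrate the scheduling idea, not the paper's architecture:

```python
def anytime_infer(x, stages, budget):
    """Run refinement stages until the compute budget is exhausted.

    stages: list of (cost, refine) pairs; refine maps (pred, x) -> pred.
    Returns the best prediction affordable under the budget.
    """
    pred = 0.0
    for cost, refine in stages:
        if budget < cost:
            break          # stop early when resources run out
        budget -= cost
        pred = refine(pred, x)
    return pred

# Hypothetical stages: each halves the remaining error toward the target.
stages = [(1, lambda p, x: p + 0.5 * (x - p))] * 4
print(anytime_infer(10.0, stages, budget=2))  # coarse result: 7.5
print(anytime_infer(10.0, stages, budget=4))  # refined result: 9.375
```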
no code implementations • 27 Nov 2022 • Amro Eldebiky, Grace Li Zhang, Georg Boecherer, Bing Li, Ulf Schlichtmann
These acceleration platforms rely on analog properties of the devices and thus suffer from process variations and noise.