Search Results for author: Ulf Schlichtmann

Found 13 papers, 1 paper with code

Logic Design of Neural Networks for High-Throughput and Low-Power Applications

no code implementations • 19 Sep 2023 • Kangwei Xu, Grace Li Zhang, Ulf Schlichtmann, Bing Li

Under a given area constraint, however, the number of MAC units on such platforms is limited, so the MAC units have to be reused to perform the MAC operations of a neural network.
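
To make the reuse concrete, here is a minimal sketch (not taken from the paper) of time-multiplexing a small pool of MAC units over a layer's multiply-accumulate operations; the unit count `NUM_MACS` and the layer shapes are illustrative assumptions.

```python
import numpy as np

NUM_MACS = 8  # illustrative area budget: only 8 physical MAC units

def layer_with_mac_reuse(x, W):
    """Compute y = W @ x by time-multiplexing NUM_MACS MAC units.

    Each 'cycle' issues at most NUM_MACS multiply-accumulates, so the
    cycle count grows as the work exceeds the available units.
    """
    out_dim, in_dim = W.shape
    y = np.zeros(out_dim)
    ops = [(i, j) for i in range(out_dim) for j in range(in_dim)]
    cycles = 0
    for start in range(0, len(ops), NUM_MACS):
        for i, j in ops[start:start + NUM_MACS]:  # one batch per cycle
            y[i] += W[i, j] * x[j]
        cycles += 1
    return y, cycles

x = np.random.randn(16)
W = np.random.randn(4, 16)
y, cycles = layer_with_mac_reuse(x, W)
print(cycles)  # 64 MACs / 8 units = 8 cycles
```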

Computational and Storage Efficient Quadratic Neurons for Deep Neural Networks

no code implementations • 10 Jun 2023 • Chuangtao Chen, Grace Li Zhang, Xunzhao Yin, Cheng Zhuo, Ulf Schlichtmann, Bing Li

Deep neural networks (DNNs) have been widely deployed across diverse domains such as computer vision and natural language processing.

Image Classification • Semantic Segmentation
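
The snippet above is only the abstract's opening sentence. As a rough illustration of what a quadratic neuron computes, the sketch below uses a generic low-rank formulation, y = (wa·x)(wb·x) + wc·x + b, which keeps storage linear in the input size; this is a common construction, not necessarily the formulation proposed in the paper.

```python
import numpy as np

def quadratic_neuron(x, wa, wb, wc, b):
    """A generic quadratic neuron: two linear projections multiplied
    together capture pairwise input interactions, plus an ordinary
    linear term. Storage is O(n) instead of the O(n^2) that a full
    quadratic form x^T A x would need."""
    return np.dot(wa, x) * np.dot(wb, x) + np.dot(wc, x) + b

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
wa, wb, wc = (rng.standard_normal(8) for _ in range(3))
print(quadratic_neuron(x, wa, wb, wc, 0.1))
```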

Fused Depthwise Tiling for Memory Optimization in TinyML Deep Neural Network Inference

1 code implementation • 31 Mar 2023 • Rafael Stahl, Daniel Mueller-Gritschneder, Ulf Schlichtmann

It significantly improves TinyML memory optimization, reducing the memory footprint of models where this was previously not possible and providing alternative design points for models that show high runtime overhead with existing methods.

Gesture Recognition • Scheduling
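
A minimal sketch of the underlying tiling idea: because a depthwise convolution touches each channel independently, it can be computed tile by tile so that only a small slice of the input is live at once. The channel-wise tiling and shapes below are illustrative and much simpler than the paper's fused tiling scheme.

```python
import numpy as np

def depthwise_conv_tiled(x, k, tile_channels=4):
    """Depthwise 2D convolution ('valid' padding) computed channel-tile
    by channel-tile. Since depthwise conv processes each channel
    independently, only `tile_channels` input channels and their
    outputs need to be resident at once, lowering peak memory."""
    C, H, W = x.shape
    kh, kw = k.shape[1:]
    out = np.empty((C, H - kh + 1, W - kw + 1))
    for c0 in range(0, C, tile_channels):
        tile = x[c0:c0 + tile_channels]          # load one channel tile
        for c, ch in enumerate(tile):            # per-channel filter
            f = k[c0 + c]
            for i in range(out.shape[1]):
                for j in range(out.shape[2]):
                    out[c0 + c, i, j] = np.sum(ch[i:i+kh, j:j+kw] * f)
    return out

x = np.random.randn(8, 10, 10)
k = np.random.randn(8, 3, 3)
print(depthwise_conv_tiled(x, k).shape)  # (8, 8, 8)
```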

PowerPruning: Selecting Weights and Activations for Power-Efficient Neural Network Acceleration

no code implementations • 24 Mar 2023 • Richard Petri, Grace Li Zhang, Yiran Chen, Ulf Schlichtmann, Bing Li

To address this challenge, we propose PowerPruning, a novel method that reduces the power consumption of digital neural network accelerators by selecting weights whose MAC operations consume less power.

Efficient Neural Network
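
As a toy illustration of power-aware weight selection (a sketch with a made-up power proxy, not the paper's measured power model): estimate a per-weight MAC power cost and zero out the most expensive weights.

```python
import numpy as np

def mac_power_proxy(w):
    """Hypothetical power proxy: in many digital multipliers, switching
    activity grows with the number of non-zero bits in the operand, so
    we use the popcount of an 8-bit quantized weight as a stand-in."""
    q = np.clip(np.round(np.abs(w) * 127), 0, 127).astype(np.uint8)
    return np.unpackbits(q[..., None], axis=-1).sum(axis=-1)

def power_prune(W, keep_ratio=0.7):
    """Zero out the weights with the highest estimated MAC power,
    keeping roughly `keep_ratio` of them (retraining omitted)."""
    cost = mac_power_proxy(W)
    threshold = np.quantile(cost, keep_ratio)
    return np.where(cost <= threshold, W, 0.0)

W = np.random.uniform(-1, 1, (64, 64))
print((power_prune(W) != 0).mean())  # fraction of weights kept
```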

Biologically Plausible Learning on Neuromorphic Hardware Architectures

no code implementations • 29 Dec 2022 • Christopher Wolters, Brady Taylor, Edward Hanson, Xiaoxuan Yang, Ulf Schlichtmann, Yiran Chen

Using the benchmarking framework DNN+NeuroSim, we investigate the impact of hardware nonidealities and quantization on algorithm performance, as well as how network topologies and algorithm-level design choices affect the latency, energy, and area consumption of a chip.

Benchmarking • Quantization
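
The sketch below imitates, in a few lines, the kind of experiment such a framework enables: quantize a classifier's weights, inject Gaussian device noise, and observe the accuracy impact. The noise model and the toy linear classifier are illustrative stand-ins for DNN+NeuroSim's far more detailed simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize(W, bits):
    """Uniform symmetric quantization to the given bit width."""
    scale = np.abs(W).max() / (2 ** (bits - 1) - 1)
    return np.round(W / scale) * scale

def accuracy_under_nonidealities(W, X, y, bits, sigma):
    """Accuracy of a linear classifier whose weights are quantized and
    perturbed by additive Gaussian device noise (a toy model)."""
    Wq = quantize(W, bits) + rng.normal(0.0, sigma, W.shape)
    return float((np.argmax(X @ Wq.T, axis=1) == y).mean())

# Toy data: four Gaussian clusters, classified by a nearest-mean-style
# linear layer whose rows are the class means.
means = 3.0 * rng.standard_normal((4, 16))
y = rng.integers(0, 4, 1000)
X = means[y] + rng.standard_normal((1000, 16))
for bits in (8, 4, 2):
    for sigma in (0.0, 0.3):
        acc = accuracy_under_nonidealities(means, X, y, bits, sigma)
        print(f"{bits}-bit, sigma={sigma}: acc={acc:.3f}")
```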

Class-based Quantization for Neural Networks

no code implementations • 27 Nov 2022 • Wenhao Sun, Grace Li Zhang, Huaxi Gu, Bing Li, Ulf Schlichtmann

In the proposed method, the importance score of each filter or neuron with respect to the number of classes in the dataset is first evaluated.

Quantization
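
A minimal sketch of the scoring-and-assignment step the abstract describes: compute a per-filter importance score across classes, then give high-scoring filters more bits. Both the score (variation of mean activation across classes) and the bit-width mapping are assumptions, not the paper's exact criterion.

```python
import numpy as np

def class_importance(acts, labels, num_classes):
    """Score each filter by how strongly its mean activation varies
    across classes: filters that respond differently per class are
    deemed important (an assumed criterion)."""
    per_class = np.stack([np.abs(acts[labels == c]).mean(axis=0)
                          for c in range(num_classes)])
    return per_class.std(axis=0)  # one score per filter

def assign_bits(scores, low=4, high=8, frac_high=0.25):
    """Give the top `frac_high` most important filters more bits."""
    cut = np.quantile(scores, 1 - frac_high)
    return np.where(scores >= cut, high, low)

rng = np.random.default_rng(0)
acts = rng.standard_normal((512, 32))   # (samples, filters)
labels = rng.integers(0, 10, 512)
bits = assign_bits(class_importance(acts, labels, 10))
print(np.bincount(bits))  # counts of 4-bit and 8-bit filters
```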

SteppingNet: A Stepping Neural Network with Incremental Accuracy Enhancement

no code implementations • 27 Nov 2022 • Wenhao Sun, Grace Li Zhang, Xunzhao Yin, Cheng Zhuo, Huaxi Gu, Bing Li, Ulf Schlichtmann

In such platforms, neural networks need to provide acceptable results quickly, and it should be possible to enhance the accuracy of those results dynamically according to the computational resources available in the computing system.

Autonomous Vehicles
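
A toy sketch of the anytime-inference pattern this implies: accumulate logits from progressively evaluated sub-models, so a usable prediction exists after the first step and is refined by each further step. This illustrates the general pattern, not SteppingNet's architecture.

```python
import numpy as np

class SteppedPredictor:
    """Anytime inference: each step adds one sub-model's logits to a
    running sum, so a result is available after step 1 and refined by
    every further step (a generic pattern, not the paper's design)."""
    def __init__(self, sub_models):
        self.sub_models = sub_models

    def predict(self, x, budget_steps):
        logits = 0
        for step, f in enumerate(self.sub_models[:budget_steps], 1):
            logits = logits + f(x)           # refine running logits
            yield step, int(np.argmax(logits))

rng = np.random.default_rng(2)
Ws = [rng.standard_normal((10, 16)) for _ in range(3)]
model = SteppedPredictor([lambda x, W=W: W @ x for W in Ws])
x = rng.standard_normal(16)
for step, pred in model.predict(x, budget_steps=3):
    print(f"after step {step}: class {pred}")
```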

CorrectNet: Robustness Enhancement of Analog In-Memory Computing for Neural Networks by Error Suppression and Compensation

no code implementations • 27 Nov 2022 • Amro Eldebiky, Grace Li Zhang, Georg Boecherer, Bing Li, Ulf Schlichtmann

Such analog in-memory computing platforms rely on the analog properties of the devices and thus suffer from process variations and noise.
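
To make the failure mode tangible, the sketch below perturbs a weight matrix with multiplicative lognormal variation (a common first-order model of conductance variation) and compensates with a simple per-column least-squares rescaling; both the variation model and the fix are generic assumptions, far simpler than the paper's error suppression and compensation.

```python
import numpy as np

rng = np.random.default_rng(3)

def with_variation(W, sigma=0.2):
    """Multiplicative lognormal perturbation of each weight."""
    return W * rng.lognormal(0.0, sigma, W.shape)

def calibrate(W_ideal, W_varied, X_cal):
    """Per-output-column rescaling so the varied array best matches
    the ideal one on calibration inputs (least squares per column)."""
    Y_ideal, Y_var = X_cal @ W_ideal, X_cal @ W_varied
    scale = (Y_ideal * Y_var).sum(0) / (Y_var ** 2).sum(0)
    return W_varied * scale  # broadcast over output columns

W = rng.standard_normal((16, 8))
Wv = with_variation(W)
X = rng.standard_normal((256, 16))
err_before = np.abs(X @ Wv - X @ W).mean()
err_after = np.abs(X @ calibrate(W, Wv, X) - X @ W).mean()
print(err_before, err_after)  # compensation shrinks the error
```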

Differentially Evolving Memory Ensembles: Pareto Optimization based on Computational Intelligence for Embedded Memories on a System Level

no code implementations • 20 Sep 2021 • Felix Last, Ceren Yeni, Ulf Schlichtmann

As the relative power, performance, and area (PPA) impact of embedded memories continues to grow, proper parameterization of each of the thousands of memories on a chip is essential.

Efficient Exploration
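
The core operation of such an exploration is the Pareto filter itself; a minimal sketch over made-up (power, delay, area) tuples follows, with the evolutionary search around it omitted.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated points, minimizing all objectives.
    A point is dominated if some other point is <= in every objective
    and strictly < in at least one."""
    pts = np.asarray(points)
    keep = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) &
                           np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

# Hypothetical (power, delay, area) results for memory configurations.
rng = np.random.default_rng(4)
configs = rng.uniform(1.0, 10.0, size=(50, 3))
front = pareto_front(configs)
print(len(front), "non-dominated of", len(configs))
```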

Predicting Memory Compiler Performance Outputs using Feed-Forward Neural Networks

no code implementations • 5 Mar 2020 • Felix Last, Max Haeberlein, Ulf Schlichtmann

A key task in the design flow of a chip is to find optimal memory compiler parametrizations that fulfill system requirements while also optimizing PPA.
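
A minimal sketch of such a surrogate model: a tiny feed-forward regressor mapping (scaled) compiler parameters to one PPA output. The synthetic target below stands in for real memory compiler data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic stand-in for compiler data: parameters -> access delay.
params = rng.uniform(0, 1, (512, 3))   # e.g. words, bits/word, mux (scaled)
delay = (params @ [0.5, 0.3, 0.2]) ** 2 + 0.05 * rng.standard_normal(512)

# One-hidden-layer feed-forward regressor, plain gradient descent on MSE.
W1, b1 = 0.5 * rng.standard_normal((3, 32)), np.zeros(32)
W2, b2 = 0.5 * rng.standard_normal(32), 0.0
lr = 0.05
for _ in range(2000):
    h = np.tanh(params @ W1 + b1)               # forward pass
    pred = h @ W2 + b2
    g = 2 * (pred - delay) / len(delay)         # dMSE/dpred
    gW2, gb2 = h.T @ g, g.sum()
    gh = np.outer(g, W2) * (1 - h ** 2)         # backprop through tanh
    W1 -= lr * params.T @ gh; b1 -= lr * gh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2

pred = np.tanh(params @ W1 + b1) @ W2 + b2
print("final MSE:", float(np.mean((pred - delay) ** 2)))
```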
