1 code implementation • 8 Apr 2024 • Jan Klhufek, Miroslav Safar, Vojtech Mrazek, Zdenek Vasicek, Lukas Sekanina
Energy efficiency and memory footprint of a convolutional neural network (CNN) implemented on a CNN inference accelerator depend on many factors, including the weight quantization strategy (i.e., data types and bit-widths) and the mapping (i.e., placement and scheduling of DNN elementary operations on hardware units of the accelerator).
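As a minimal illustration of the quantization part of such a strategy, the sketch below uniformly quantizes a weight tensor to a signed fixed-point format of a chosen bit-width. The function name and the symmetric per-tensor scheme are assumptions for illustration; the paper considers richer per-layer choices of data types and bit-widths.

```python
import numpy as np

def quantize_weights(w, bits):
    """Symmetric, per-tensor uniform quantization of a weight tensor
    to a signed fixed-point representation with the given bit-width.
    Hypothetical helper, not the paper's actual quantizer."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.max(np.abs(w)) / qmax    # map the largest weight to qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int32)
    return q, scale                     # dequantize with q * scale

w = np.array([0.5, -1.27, 0.03])
q, s = quantize_weights(w, 8)           # q holds 8-bit integer codes
```

Lower bit-widths shrink both the memory footprint and the energy of each multiply, at the cost of rounding error; the search over such trade-offs is what a quantization strategy decides per layer.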
no code implementations • 8 Apr 2024 • Michal Pinos, Lukas Sekanina, Vojtech Mrazek
Integrating the principles of approximate computing into the design of hardware-aware deep neural networks (DNNs) has led to DNN implementations showing good output quality and highly optimized hardware parameters, such as low latency or inference energy.
no code implementations • 16 Aug 2021 • Lukas Sekanina
The evolutionary approximation has been applied at all levels of design abstraction and in many different applications.
no code implementations • 28 Jan 2021 • Michal Pinos, Vojtech Mrazek, Lukas Sekanina
During the NAS process, a suitable CNN architecture is evolved together with approximate multipliers to deliver the best trade-offs between the accuracy, network size, and power consumption.
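One of the simplest members of the family of approximate multipliers explored by such searches is a truncated multiplier, sketched below. This is illustrative only; the multipliers evolved during the NAS have more complex, search-discovered structures.

```python
def truncated_mult(a, b, drop_bits=4):
    """8-bit unsigned truncated multiplier: an approximate multiplier
    that saves circuit area and power by discarding the drop_bits
    least significant bits of the exact product."""
    exact = a * b
    return (exact >> drop_bits) << drop_bits  # zero the low bits

truncated_mult(13, 11)      # 128 instead of the exact 143
truncated_mult(13, 11, 0)   # drop_bits=0 recovers the exact product
```

The NAS then trades this kind of controlled arithmetic error against network size and power consumption when choosing which multiplier each evolved architecture uses.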
no code implementations • 5 Mar 2020 • Milan Ceska, Jiri Matyas, Vojtech Mrazek, Lukas Sekanina, Zdenek Vasicek, Tomas Vojnar
We present a novel approach for designing complex approximate arithmetic circuits that trade correctness for power consumption and play an important role in many energy-aware applications.
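A classic, simple instance of this trade-off is the lower-part OR adder (LOA), which replaces the carry chain of the least significant bits with a cheap bitwise OR. It is a textbook example chosen for illustration, far simpler than the complex circuits the paper targets.

```python
def loa_add(a, b, approx_bits=4):
    """Lower-part OR Adder (LOA): the approx_bits least significant
    bits are combined with a bitwise OR instead of a full adder
    chain; the upper bits are added exactly. Dropping the lower
    carry chain shortens the critical path and saves power."""
    mask = (1 << approx_bits) - 1
    low = (a | b) & mask                                  # approximate lower part
    high = ((a >> approx_bits) + (b >> approx_bits)) << approx_bits
    return high | low

loa_add(13, 11)   # 15 instead of the exact 24 (low bits collide)
loa_add(16, 16)   # 32, exact here (no activity in the low bits)
```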
1 code implementation • 21 Feb 2020 • Filip Vaverka, Vojtech Mrazek, Zdenek Vasicek, Lukas Sekanina
In order to address this issue, we propose an efficient emulation method for approximate circuits utilized in a given DNN accelerator, which is emulated on a GPU.
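The general idea behind such emulation can be sketched as precomputing the approximate multiplier into a lookup table, so that every multiplication during inference becomes a vectorized table lookup that maps well onto GPU memory accesses. This is a sketch of the concept only, not the paper's actual GPU kernels.

```python
import numpy as np

def build_lut(approx_mult, bits=8):
    """Tabulate an approximate multiplier over all operand pairs:
    a 2^bits x 2^bits table (256 x 256 = 64 KB of products for
    8-bit operands). Inference then indexes the table instead of
    simulating the approximate circuit gate by gate."""
    n = 1 << bits
    a = np.arange(n).reshape(-1, 1)
    b = np.arange(n).reshape(1, -1)
    return approx_mult(a, b)

# Hypothetical approximate multiplier: zero the 4 low product bits.
lut = build_lut(lambda a, b: (a * b) & ~0b1111)
x = np.array([13, 200]); w = np.array([11, 3])
approx = lut[x, w]        # elementwise approximate products via lookup
```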
no code implementations • 15 Oct 2019 • Filip Badan, Lukas Sekanina
Automated design methods for convolutional neural networks (CNNs) have recently been developed to increase design productivity.
1 code implementation • 11 Jun 2019 • Vojtech Mrazek, Zdenek Vasicek, Lukas Sekanina, Muhammad Abdullah Hanif, Muhammad Shafique
A suitable approximate multiplier is then selected for each computing element from a library of approximate multipliers in such a way that (i) one approximate multiplier serves several layers, and (ii) the overall classification error and energy consumption are minimized.
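A greedy sketch of this kind of per-layer selection is given below, with a hypothetical library of (name, energy, relative error) entries and a simple error budget standing in for the paper's joint optimization of classification error and energy.

```python
def select_multipliers(layer_sens, library, error_budget):
    """For each layer, pick the lowest-energy multiplier from the
    library whose error, scaled by the layer's sensitivity, still
    fits the remaining budget. A hypothetical heuristic: the paper
    lets one multiplier serve several layers and evaluates the
    real classification error, not a per-layer proxy."""
    choices, remaining = [], error_budget
    for sens in layer_sens:
        # candidates this layer can afford, cheapest energy first
        ok = sorted((m for m in library if m[2] * sens <= remaining),
                    key=lambda m: m[1])
        name, energy, err = ok[0] if ok else min(library, key=lambda m: m[2])
        choices.append(name)
        remaining -= err * sens
    return choices

# (name, energy in pJ, relative error) -- illustrative numbers only
lib = [("exact", 10.0, 0.0), ("mul8_A", 6.0, 0.01), ("mul8_B", 3.0, 0.05)]
picks = select_multipliers([2.0, 1.0, 0.5], lib, error_budget=0.08)
```

Sensitive early layers end up with more accurate (costlier) multipliers, while tolerant layers absorb cheaper, less accurate ones.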
no code implementations • 11 Mar 2019 • Zdenek Vasicek, Vojtech Mrazek, Lukas Sekanina
We propose an application-tailored data-driven fully automated method for functional approximation of combinational circuits.
2 code implementations • 22 Feb 2019 • Vojtech Mrazek, Muhammad Abdullah Hanif, Zdenek Vasicek, Lukas Sekanina, Muhammad Shafique
Because these libraries contain tens to thousands of approximate implementations of a single arithmetic operation, it is intractable to find an optimal combination of approximate circuits from the library, even for an application consisting of just a few operations.
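The intractability is easy to quantify: with L library variants per operation and k operations in the application, there are L**k possible assignments. The numbers below are illustrative, not taken from the paper.

```python
# Even a modest library and a tiny application blow up combinatorially.
variants_per_op = 500     # libraries hold tens to thousands of variants
operations = 6            # an application with just a few operations
search_space = variants_per_op ** operations
print(f"{search_space:.2e}")  # prints 1.56e+16
```

Exhaustively evaluating ~10^16 configurations is out of reach, which motivates the heuristic selection methods these papers develop.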