Search Results for author: Henk Corporaal

Found 8 papers, 2 papers with code

BOMP-NAS: Bayesian Optimization Mixed Precision NAS

no code implementations · 27 Jan 2023 · David van Son, Floran de Putter, Sebastian Vogel, Henk Corporaal

Bayesian Optimization Mixed-Precision Neural Architecture Search (BOMP-NAS) is an approach to quantization-aware neural architecture search (QA-NAS) that leverages both Bayesian optimization (BO) and mixed-precision quantization (MP) to efficiently search for compact, high-performance deep neural networks.

Neural Architecture Search · Quantization
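A core building block behind mixed-precision quantization is quantizing each layer's weights at its own bit-width. The sketch below shows generic symmetric uniform quantization applied per-layer at different precisions; the function and layer names are illustrative, and the Bayesian-optimization search that BOMP-NAS runs over these bit-widths is not reproduced here.

```python
import numpy as np

def quantize(x, bits):
    """Symmetric uniform quantization of x to a signed `bits`-bit grid.
    This is a generic sketch, not BOMP-NAS's exact quantizer."""
    qmax = 2 ** (bits - 1) - 1          # largest representable level
    scale = np.max(np.abs(x)) / qmax    # map max magnitude onto qmax
    return np.round(x / scale) * scale  # snap to grid, rescale back

# Mixed precision: a different bit-width per layer (hypothetical layers).
layers = {"conv1": (np.linspace(-1, 1, 5), 8),
          "fc":    (np.linspace(-1, 1, 5), 4)}
for name, (w, bits) in layers.items():
    print(name, bits, "bits:", quantize(w, bits))
```

A NAS loop would treat the per-layer bit-widths as part of the search space and let the BO surrogate trade accuracy against model size.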

THOR -- A Neuromorphic Processor with 7.29G TSOP$^2$/mm$^2$Js Energy-Throughput Efficiency

no code implementations · 3 Dec 2022 · Mayank Senapati, Manil Dev Gomony, Sherif Eissa, Charlotte Frenkel, Henk Corporaal

Neuromorphic computing using biologically inspired Spiking Neural Networks (SNNs) is a promising solution to meet the Energy-Throughput (ET) efficiency needed for edge computing devices.


LEAPER: Fast and Accurate FPGA-based System Performance Prediction via Transfer Learning

no code implementations · 22 Aug 2022 · Gagandeep Singh, Dionysios Diamantopoulos, Juan Gómez-Luna, Sander Stuijk, Henk Corporaal, Onur Mutlu

The key idea of LEAPER is to transfer an ML-based performance and resource usage model trained for a low-end edge environment to a new, high-end cloud environment to provide fast and accurate predictions for accelerator implementation.

Design Synthesis · Transfer Learning

How to train accurate BNNs for embedded systems?

no code implementations · 24 Jun 2022 · Floran de Putter, Henk Corporaal

To reduce the accuracy gap between binary and full-precision networks, many repair methods have been proposed in recent years; this chapter classifies them and brings them together in a single overview.


DominoSearch: Find layer-wise fine-grained N:M sparse schemes from dense neural networks

1 code implementation · NeurIPS 2021 · Wei Sun, Aojun Zhou, Sander Stuijk, Rob Wijnhoven, Andrew Oakleigh Nelson, Hongsheng Li, Henk Corporaal

However, existing N:M algorithms only address the challenge of how to train N:M sparse neural networks in a uniform fashion (i.e., every layer has the same N:M sparsity) and suffer a significant accuracy drop at high sparsity (i.e., when sparsity > 80%).

Network Pruning
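The N:M scheme the abstract refers to keeps at most N non-zero weights in every group of M consecutive weights. Below is a minimal magnitude-based sketch of that constraint with a fixed 2:4 ratio; function and variable names are my own, and DominoSearch itself searches *layer-wise* N:M ratios rather than applying one fixed ratio everywhere.

```python
import numpy as np

def nm_prune(weights, n=2, m=4):
    """Keep the n largest-magnitude weights in each group of m,
    zeroing the rest (generic N:M sparsity, not DominoSearch itself)."""
    w = weights.reshape(-1, m)
    # indices of the (m - n) smallest-magnitude entries in each group
    drop = np.argsort(np.abs(w), axis=1)[:, : m - n]
    mask = np.ones_like(w, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return (w * mask).reshape(weights.shape)

w = np.arange(1.0, 9.0)   # two groups of four: [1..4], [5..8]
print(nm_prune(w))        # keeps the 2 largest per group of 4
```

Hardware such as NVIDIA's sparse tensor cores can exploit exactly this structured 2:4 pattern, which is why the N:M constraint is attractive compared to unstructured pruning.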

Quantization of Deep Neural Networks for Accumulator-constrained Processors

no code implementations · 24 Apr 2020 · Barry de Bruin, Zoran Zivkovic, Henk Corporaal

We demonstrate that 16-bit accumulators are able to obtain a classification accuracy within 1% of the floating-point baselines on the CIFAR-10 and ILSVRC2012 image classification benchmarks.

Classification · General Classification +2
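The accumulator constraint can be made concrete with a standard worst-case overflow bound: summing k products of a w-bit weight and an a-bit activation needs roughly w + a + ceil(log2(k)) accumulator bits. The helper below is a textbook sketch of that bound, not the paper's exact formulation.

```python
import math

def accumulator_bits(w_bits, a_bits, dot_length):
    """Worst-case accumulator width for a dot product of `dot_length`
    terms, each a product of a w_bits weight and an a_bits activation.
    (Illustrative overflow bound, not the paper's exact analysis.)"""
    product_bits = w_bits + a_bits               # width of each partial product
    growth = math.ceil(math.log2(dot_length))    # carry growth from summation
    return product_bits + growth

# e.g. 4-bit weights x 4-bit activations over a 256-element dot product
print(accumulator_bits(4, 4, 256))  # -> 16, fitting a 16-bit accumulator
```

Read the other way around, fixing the accumulator at 16 bits constrains how many bits the weights and activations may use for a given layer's dot-product length, which is the design space the paper explores.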

LocalNorm: Robust Image Classification through Dynamically Regularized Normalization

no code implementations · 18 Feb 2019 · Bojian Yin, Siebren Schaafsma, Henk Corporaal, H. Steven Scholte, Sander M. Bohte

While modern convolutional neural networks achieve outstanding accuracy on many image classification tasks, they are, compared to humans, much more sensitive to image degradation.

Classification · General Classification +1
