1 code implementation • 8 Jun 2023 • Matteo Risso, Alessio Burrello, Giuseppe Maria Sarda, Luca Benini, Enrico Macii, Massimo Poncino, Marian Verhelst, Daniele Jahier Pagliari
The need to execute Deep Neural Networks (DNNs) at low latency and low power at the edge has spurred the development of new heterogeneous Systems-on-Chips (SoCs) encapsulating a diverse set of hardware accelerators.
1 code implementation • 20 Apr 2023 • Victor J. B. Jung, Arne Symons, Linyan Mei, Marian Verhelst, Luca Benini
To meet the growing need for computational power for DNNs, multiple specialized hardware architectures have been proposed.
1 code implementation • 10 Apr 2023 • Jason Yik, Korneel Van den Berghe, Douwe den Blanken, Younes Bouhadjar, Maxime Fabre, Paul Hueber, Denis Kleyko, Noah Pacik-Nelson, Pao-Sheng Vincent Sun, Guangzhi Tang, Shenqi Wang, Biyan Zhou, Soikat Hasan Ahmed, George Vathakkattil Joseph, Benedetto Leto, Aurora Micheli, Anurag Kumar Mishra, Gregor Lenz, Tao Sun, Zergham Ahmed, Mahmoud Akl, Brian Anderson, Andreas G. Andreou, Chiara Bartolozzi, Arindam Basu, Petrut Bogdan, Sander Bohte, Sonia Buckley, Gert Cauwenberghs, Elisabetta Chicca, Federico Corradi, Guido de Croon, Andreea Danielescu, Anurag Daram, Mike Davies, Yigit Demirag, Jason Eshraghian, Tobias Fischer, Jeremy Forest, Vittorio Fra, Steve Furber, P. Michael Furlong, William Gilpin, Aditya Gilra, Hector A. Gonzalez, Giacomo Indiveri, Siddharth Joshi, Vedant Karia, Lyes Khacef, James C. Knight, Laura Kriener, Rajkumar Kubendran, Dhireesha Kudithipudi, Yao-Hong Liu, Shih-Chii Liu, Haoyuan Ma, Rajit Manohar, Josep Maria Margarit-Taulé, Christian Mayr, Konstantinos Michmizos, Dylan Muir, Emre Neftci, Thomas Nowotny, Fabrizio Ottati, Ayca Ozcelikkale, Priyadarshini Panda, Jongkil Park, Melika Payvand, Christian Pehle, Mihai A. Petrovici, Alessandro Pierro, Christoph Posch, Alpha Renner, Yulia Sandamirskaya, Clemens JS Schaefer, André van Schaik, Johannes Schemmel, Samuel Schmidgall, Catherine Schuman, Jae-sun Seo, Sadique Sheik, Sumit Bam Shrestha, Manolis Sifalakis, Amos Sironi, Matthew Stewart, Kenneth Stewart, Terrence C. Stewart, Philipp Stratmann, Jonathan Timcheck, Nergis Tömen, Gianvito Urgese, Marian Verhelst, Craig M. Vineyard, Bernhard Vogginger, Amirreza Yousefzadeh, Fatima Tuz Zohora, Charlotte Frenkel, Vijay Janapa Reddi
The NeuroBench framework introduces a common set of tools and systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings.
no code implementations • 30 Jan 2023 • Jun Yin, Stefano Damiano, Marian Verhelst, Toon van Waterschoot, Andre Guntoro
On the algorithmic side, the I-SPOT Project aims to enable detecting, localizing and tracking environmental audio signals by jointly developing microphone array processing and deep learning techniques that specifically target automotive applications.
no code implementations • 26 Aug 2022 • Maxim Bonnaerens, Matthias Freiberger, Marian Verhelst, Joni Dambre
In this work we propose a methodology to accurately evaluate and compare the performance of efficient neural network building blocks for computer vision in a hardware-aware manner.
no code implementations • 20 Mar 2022 • Zuzana Jelčicová, Marian Verhelst
Moreover, a reduction of ~87-94% of operations can be achieved while degrading accuracy by only 1-4%, speeding up multi-head self-attention inference by a factor of ~7.5-16.
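The operation savings above come from skipping computation for insignificant attention entries. As a generic illustration (not the paper's exact method), a minimal sketch of threshold-based attention pruning might look like this; the `threshold` value and shapes are assumed for the example:

```python
import numpy as np

def pruned_attention(q, k, v, threshold=0.01):
    """Toy sketch: zero out attention weights below a threshold so the
    corresponding weight-value products can be skipped, trading a small
    accuracy loss for fewer operations."""
    scores = q @ k.T / np.sqrt(q.shape[-1])            # (L, L) attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    mask = weights >= threshold                        # keep only salient entries
    sparse = np.where(mask, weights, 0.0)
    sparse /= sparse.sum(axis=-1, keepdims=True)       # renormalise survivors
    skipped = 1.0 - mask.mean()                        # fraction of products avoided
    return sparse @ v, skipped

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((64, 32)) for _ in range(3))
out, skipped = pruned_attention(q, k, v)
print(f"{skipped:.0%} of weight-value products skipped")
```

In hardware, the zeroed entries translate into multiply-accumulates that are never issued, which is where the inference speed-up comes from.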
1 code implementation • 27 Feb 2021 • Nimish Shah, Laura I. Galindez Olascoaga, Wannes Meert, Marian Verhelst
Bayesian reasoning is a powerful mechanism for probabilistic inference in smart edge-devices.
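The core of Bayesian reasoning on such a device is repeated application of Bayes' rule over discrete hypotheses. A minimal sketch (the hypotheses and probabilities are illustrative, not from the paper):

```python
def bayes_update(prior, likelihood):
    """Single discrete Bayes update: posterior is proportional to
    prior times likelihood, renormalised to sum to one."""
    unnorm = [p * l for p, l in zip(prior, likelihood)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two hypotheses on a smart sensor node, e.g. "event" vs "no event".
prior = [0.2, 0.8]
likelihood = [0.9, 0.1]          # observation is far likelier under "event"
posterior = bayes_update(prior, likelihood)
print(posterior)                 # "event" becomes the dominant hypothesis
```

Edge accelerators for this workload map such sum-product computations onto specialized arithmetic units rather than evaluating them in floating point on a CPU.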
no code implementations • 21 Sep 2020 • Robby Neven, Marian Verhelst, Tinne Tuytelaars, Toon Goedemé
By first training the SGMs in a meta-learning manner on a set of common objects, the SGMs provided the model with accurate gradients during fine-tuning, enabling it to successfully learn to grasp new objects.
no code implementations • 22 Jul 2020 • Linyan Mei, Pouya Houshmand, Vikram Jain, Sebastian Giraldo, Marian Verhelst
This work introduces ZigZag, a memory-centric rapid DNN accelerator DSE framework which extends the DSE with uneven mapping opportunities, in which operands at shared memory levels are no longer bound to use the same memory levels for each loop index.
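The benefit of uneven mapping can be illustrated with a toy memory-access energy count: an "even" mapping serves all operands from the same memory level per loop index, while an "uneven" one lets each operand use its own best level. The per-access energies below are illustrative placeholders, not ZigZag's calibrated numbers:

```python
# Assumed energy per word access at each memory level (pJ), for illustration.
E_ACCESS = {"reg": 1, "sram": 6, "dram": 200}

def access_energy(n_accesses, level):
    return n_accesses * E_ACCESS[level]

N = 1_000_000  # operand accesses per loop index (illustrative)

# Even mapping: inputs, weights, and outputs all bound to the shared SRAM.
even = sum(access_energy(N, "sram") for _ in ("input", "weight", "output"))

# Uneven mapping: weights pinned in the register file, the rest in SRAM.
uneven = (access_energy(N, "reg")
          + access_energy(N, "sram")
          + access_energy(N, "sram"))

print(f"even: {even/1e6:.0f} uJ, uneven: {uneven/1e6:.0f} uJ")
```

Enumerating such per-operand choices per loop index is what enlarges the design space the framework explores.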
2 code implementations • 10 Mar 2020 • Colby R. Banbury, Vijay Janapa Reddi, Max Lam, William Fu, Amin Fazel, Jeremy Holleman, Xinyuan Huang, Robert Hurtado, David Kanter, Anton Lokhmotov, David Patterson, Danilo Pau, Jae-sun Seo, Jeff Sieracki, Urmish Thakker, Marian Verhelst, Poonam Yadav
In this position paper, we present the current landscape of TinyML and discuss the challenges and direction towards developing a fair and useful hardware benchmark for TinyML workloads.
1 code implementation • NeurIPS 2019 • Laura I. Galindez Olascoaga, Wannes Meert, Nimish Shah, Marian Verhelst, Guy Van Den Broeck
We showcase our framework on a mobile activity recognition scenario, and on a variety of benchmark datasets representative of the field of tractable learning and of the applications of interest.
1 code implementation • 17 Dec 2018 • Gert Dekkers, Fernando Rosas, Steven Lauwereins, Sreeraj Rajendran, Sofie Pollin, Bart Vanrumste, Toon van Waterschoot, Marian Verhelst, Peter Karsmakers
This model provides a first step of exploration prior to the custom design of a smart wireless acoustic sensor, and can also be used to compare the energy consumption of different protocols.
no code implementations • 16 Apr 2018 • Bert Moons, Daniel Bankman, Lita Yang, Boris Murmann, Marian Verhelst
This paper introduces BinarEye: a digital processor for always-on Binary Convolutional Neural Networks.
no code implementations • 13 Mar 2018 • Matthijs Van keirsbilck, Bert Moons, Marian Verhelst
Performing multi-modal speech recognition - processing acoustic speech signals and lip-reading video simultaneously - significantly enhances the performance of such systems, especially in noisy environments.
no code implementations • 1 Nov 2017 • Bert Moons, Koen Goetschalckx, Nick Van Berckelaer, Marian Verhelst
To this end, the energy consumption of inference is modeled for a generic hardware platform.
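A generic hardware energy model of this kind typically splits inference energy into compute and memory terms. A minimal sketch, with per-operation costs that are assumed placeholders rather than the paper's measured values:

```python
# Assumed per-operation energy costs (pJ), for illustration only.
E_MAC = 0.5      # energy per multiply-accumulate
E_MEM = 10.0     # energy per off-chip word access

def inference_energy(n_macs, n_mem_accesses):
    """Total inference energy = compute energy + memory-access energy."""
    return n_macs * E_MAC + n_mem_accesses * E_MEM

# Example: a small conv layer with 10M MACs and 0.5M word fetches.
e = inference_energy(10e6, 0.5e6)
print(f"{e/1e6:.1f} uJ")
```

Even this two-term model captures the key trade-off: reducing precision or reusing data on-chip shifts energy away from the expensive memory term.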
no code implementations • 22 Mar 2016 • Bert Moons, Bert de Brabandere, Luc van Gool, Marian Verhelst
Recently, convolutional neural networks (ConvNets or CNNs) have emerged as state-of-the-art classification and detection algorithms, achieving near-human performance in visual detection tasks.