Search Results for author: Marco Fariselli

Found 2 papers, 0 papers with code

Accelerating RNN-based Speech Enhancement on a Multi-Core MCU with Mixed FP16-INT8 Post-Training Quantization

no code implementations · 14 Oct 2022 · Manuele Rusci, Marco Fariselli, Martin Croome, Francesco Paci, Eric Flamand

Unlike uniform 8-bit quantization, which degrades the PESQ score by 0.3 on average, the mixed-precision PTQ scheme causes a degradation of only 0.06 while achieving a 1.4-1.7x memory saving.

Quantization · Speech Enhancement
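The trade-off the abstract describes (INT8 saves memory but loses precision; FP16 preserves it for sensitive tensors) can be illustrated with a generic symmetric post-training quantization sketch. This is not the paper's exact scheme; the functions and data below are illustrative.

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor INT8 post-training quantization
    # (a generic sketch, not the paper's exact PTQ scheme).
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Mixed precision: quantize most tensors to INT8, keep the
# error-sensitive ones in FP16 (half precision) instead.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)  # stand-in weights

q, s = quantize_int8(w)
w_int8 = dequantize(q, s)
w_fp16 = w.astype(np.float16).astype(np.float32)

err_int8 = np.abs(w - w_int8).max()   # coarse: 8-bit grid
err_fp16 = np.abs(w - w_fp16).max()   # fine: ~10-bit mantissa
```

INT8 shrinks a float32 tensor 4x but rounds onto a 255-level grid; FP16 shrinks it only 2x but keeps far more precision, which is why routing only the error-sensitive tensors to FP16 recovers most of the PESQ loss at a modest memory cost.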

Leveraging Automated Mixed-Low-Precision Quantization for tiny edge microcontrollers

no code implementations · 12 Aug 2020 · Manuele Rusci, Marco Fariselli, Alessandro Capotondi, Luca Benini

Severe on-chip memory limitations currently prevent the deployment of the most accurate Deep Neural Network (DNN) models on tiny MicroController Units (MCUs), even when leveraging an effective 8-bit quantization scheme.

Quantization
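The motivation here is a memory-budget argument: even uniform 8-bit weights can overflow an MCU's flash, so a mixed low-precision scheme assigns smaller bitwidths to the bulkiest layers. A minimal footprint calculation makes this concrete; the layer sizes, bitwidth assignments, and 512 kB budget are hypothetical, not taken from the paper.

```python
# Weight-memory footprint of a small CNN under different quantization
# bitwidths, against a hypothetical 512 kB MCU flash budget.
# Layer parameter counts are illustrative conv/fc shapes.
layer_params = [32 * 3 * 3 * 3,    # conv1
                64 * 32 * 3 * 3,   # conv2
                128 * 64 * 3 * 3,  # conv3 (bulkiest)
                128 * 10]          # fc

def footprint_bytes(params, bits):
    # Total weight storage when every layer uses the same bitwidth.
    return sum(p * bits for p in params) // 8

fp32_bytes = footprint_bytes(layer_params, 32)
int8_bytes = footprint_bytes(layer_params, 8)

# Mixed low precision: keep the small first/last layers at 8-bit,
# push the bulky middle layers down to 4-bit.
mixed_bits = [8, 4, 4, 8]
mixed_bytes = sum(p * b for p, b in zip(layer_params, mixed_bits)) // 8

BUDGET = 512 * 1024  # hypothetical flash size in bytes
```

Uniform INT8 already cuts the float32 footprint 4x; the mixed assignment roughly halves it again by spending fewer bits where most of the parameters live, which is the lever an automated mixed-precision search exploits to fit the memory budget.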
