Search Results for author: Eyyüb Sari

Found 9 papers, 1 paper with code

Efficient Training Under Limited Resources

1 code implementation • 23 Jan 2023 • Mahdi Zolnouri, Dounia Lakhmiri, Christophe Tribes, Eyyüb Sari, Sébastien Le Digabel

Training time budget and size of the dataset are among the factors affecting the performance of a Deep Neural Network (DNN).

Data Augmentation • Neural Architecture Search
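As a generic illustration of training under a fixed time budget, the sketch below stops optimization when a wall-clock budget runs out rather than after a fixed epoch count. The toy model, data, and `budget_s` are hypothetical stand-ins, not taken from the paper.

```python
# Minimal sketch of a wall-clock training budget: stop optimization when the
# budget is exhausted. All names here (budget_s, the toy model and data) are
# illustrative, not from the paper.
import time
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                      # toy stand-in for a DNN
opt = torch.optim.SGD(model.parameters(), lr=0.1)
data = [(torch.randn(32, 10), torch.randint(0, 2, (32,))) for _ in range(100)]

budget_s = 5.0                                # training time budget in seconds
start = time.monotonic()
while time.monotonic() - start < budget_s:
    for x, y in data:
        if time.monotonic() - start >= budget_s:  # budget check per batch
            break
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
```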

Training Integer-Only Deep Recurrent Neural Networks

no code implementations • 22 Dec 2022 • Vahid Partovi Nia, Eyyüb Sari, Vanessa Courville, Masoud Asgharian

Recurrent neural networks (RNN) are the backbone of many text and speech applications.

Quantization
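As background for what integer-targeted training involves, the sketch below shows a generic quantization-aware training step: weights are rounded to an int8 grid in the forward pass while a straight-through estimator lets gradients reach the full-precision copy. This is the standard QAT pattern, not the paper's exact integer-only RNN scheme.

```python
# Minimal sketch of quantization-aware training with a straight-through
# estimator: forward uses the rounded weights, backward sees the identity,
# so gradients update the full-precision copy.
import torch

def fake_quant(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.detach().abs().max() / qmax
    q = torch.clamp(torch.round(w / scale), -qmax, qmax)
    # Straight-through trick: value of q * scale, gradient of w.
    return w + (q * scale - w).detach()

w = torch.randn(4, 4, requires_grad=True)
x = torch.randn(4)
y = (fake_quant(w) @ x).sum()
y.backward()                      # w.grad is defined despite the rounding
```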

Demystifying and Generalizing BinaryConnect

no code implementations • NeurIPS 2021 • Tim Dockhorn, YaoLiang Yu, Eyyüb Sari, Mahdi Zolnouri, Vahid Partovi Nia

BinaryConnect (BC) and its many variations have become the de facto standard for neural network quantization.

Quantization
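Since BinaryConnect itself is a concrete algorithm, a minimal sketch may help: a full-precision latent weight is kept, its sign is used in the forward pass, and the straight-through gradient updates (and clips) the latent copy. The learning rate and toy loss below are illustrative.

```python
# Minimal sketch of a BinaryConnect step: forward with sign(w), backward
# through a straight-through estimator, SGD update on the latent weights.
import torch

w_latent = torch.randn(4, 4, requires_grad=True)
x = torch.randn(4)

w_bin = w_latent + (torch.sign(w_latent) - w_latent).detach()  # forward: ±1
loss = (w_bin @ x).pow(2).sum()
loss.backward()                   # gradient flows to w_latent unchanged

with torch.no_grad():
    w_latent -= 0.1 * w_latent.grad
    w_latent.clamp_(-1.0, 1.0)    # BC keeps latent weights in [-1, 1]
```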

iRNN: Integer-only Recurrent Neural Network

no code implementations • 20 Sep 2021 • Eyyüb Sari, Vanessa Courville, Vahid Partovi Nia

Deploying RNNs that include layer normalization and attention on integer-only arithmetic is still an open problem.

Automatic Speech Recognition (ASR) +2
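To illustrate what integer-only arithmetic means for a single matrix product, the sketch below uses int8 operands, an int32 accumulator, and a requantization multiplier. The scales are made up, and a real integer-only pipeline would replace the float multiply `m * acc` with a fixed-point multiply and shift; this is the generic pattern, not the paper's iRNN scheme.

```python
# Minimal sketch of integer-only inference for one matrix product:
# int8 operands, int32 accumulation, then requantization back to int8.
import torch

def quantize(t: torch.Tensor, scale: float) -> torch.Tensor:
    return torch.clamp(torch.round(t / scale), -127, 127).to(torch.int8)

s_x, s_w, s_y = 0.02, 0.01, 0.05                       # illustrative scales
x_q = quantize(torch.randn(4), s_x)
w_q = quantize(torch.randn(4, 4), s_w)

acc = w_q.to(torch.int32) @ x_q.to(torch.int32)        # int32 accumulator
m = s_x * s_w / s_y                                    # requantization multiplier
y_q = torch.clamp(torch.round(m * acc), -127, 127).to(torch.int8)
```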

Batch Normalization in Quantized Networks

no code implementations • 29 Apr 2020 • Eyyüb Sari, Vahid Partovi Nia

Implementing quantized neural networks on computing hardware leads to considerable speed-ups and memory savings.
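A standard deployment step this line of work builds on is folding batch normalization into the preceding convolution so inference runs one fused op. The sketch below shows the generic folding algebra, not the paper's analysis.

```python
# Minimal sketch of BN folding: absorb the BatchNorm affine transform into
# the conv weights and bias, so bn(conv(x)) == fused(x) at inference time.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 8, 3)
bn = nn.BatchNorm2d(8).eval()          # eval mode: use running statistics

fused = nn.Conv2d(3, 8, 3)
with torch.no_grad():
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    fused.bias.copy_((conv.bias - bn.running_mean) * scale + bn.bias)

x = torch.randn(1, 3, 16, 16)
assert torch.allclose(bn(conv(x)), fused(x), atol=1e-5)
```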

Adaptive Binary-Ternary Quantization

no code implementations • 26 Sep 2019 • Ryan Razani, Grégoire Morin, Vahid Partovi Nia, Eyyüb Sari

Ternary quantization provides a more flexible model and outperforms binary quantization in terms of accuracy; however, it doubles the memory footprint and increases the computational cost.

Autonomous Vehicles • Image Classification +1
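The trade-off reads directly off the two quantizers: binary maps each weight to {-a, +a} (1 bit per weight), ternary to {-a, 0, +a} (2 bits per weight), with the zero state giving ternary its extra flexibility. The sketch below uses common scaling choices; the threshold factor is illustrative, not the paper's adaptive scheme.

```python
# Minimal sketch contrasting binary and ternary weight quantizers.
import torch

def binary_quant(w: torch.Tensor) -> torch.Tensor:
    a = w.abs().mean()                      # common scaling choice
    return torch.where(w >= 0, a, -a)       # every weight becomes ±a

def ternary_quant(w: torch.Tensor, t: float = 0.5) -> torch.Tensor:
    delta = t * w.abs().mean()              # threshold around zero
    mask = w.abs() > delta                  # weights below it snap to 0
    a = w[mask].abs().mean() if mask.any() else w.new_tensor(0.0)
    return torch.sign(w) * a * mask         # values in {-a, 0, +a}
```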

How Does Batch Normalization Help Binary Training?

no code implementations • 18 Sep 2019 • Eyyüb Sari, Mouloud Belbahri, Vahid Partovi Nia

Binary Neural Networks (BNNs) are difficult to train and suffer from a drop in accuracy.

Quantization
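A small numeric illustration of the question in the title: with random ±1 weights and unit-variance inputs, pre-activation variance grows with the fan-in, and batch normalization rescales it back to unit order. This is purely illustrative, not the paper's analysis.

```python
# With ±1 weights, pre-activation std grows like sqrt(fan_in);
# batch normalization brings it back to ~1.
import torch

x = torch.randn(1024, 512)
w_bin = torch.sign(torch.randn(256, 512))   # random ±1 weights
pre = x @ w_bin.t()
print(pre.std().item())                     # ~ sqrt(512) ≈ 22.6

bn = torch.nn.BatchNorm1d(256)
print(bn(pre).std().item())                 # ~ 1 after normalization
```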

Foothill: A Quasiconvex Regularization for Edge Computing of Deep Neural Networks

no code implementations • 18 Jan 2019 • Mouloud Belbahri, Eyyüb Sari, Sajad Darabi, Vahid Partovi Nia

Using a quasiconvex base function to construct a binary quantizer helps train binary neural networks (BNNs), and adding noise to the input data or using a concrete regularization function helps improve the generalization error.

Edge-computing • General Classification +4
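As a rough picture of regularizing weights toward binary levels, the sketch below penalizes each weight's distance to ±1 with min((w-1)^2, (w+1)^2). This simple stand-in is not the paper's foothill function, which is a smooth quasiconvex form; it only shows where such a penalty sits in the loss.

```python
# Minimal sketch of a binary-encouraging weight penalty added to the loss.
# The min-squared form below is a stand-in, not the foothill regularizer.
import torch

def binary_penalty(w: torch.Tensor) -> torch.Tensor:
    return torch.minimum((w - 1) ** 2, (w + 1) ** 2).sum()

w = torch.randn(4, 4, requires_grad=True)
x = torch.randn(4)
loss = (w @ x).pow(2).sum() + 0.1 * binary_penalty(w)
loss.backward()                 # the penalty's gradient nudges w toward ±1
```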
