Search Results for author: Bernard Gosselin

Found 4 papers, 1 paper with code

An Experimental Study of the Impact of Pre-training on the Pruning of a Convolutional Neural Network

no code implementations • 15 Dec 2021 • Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia

Neural networks usually involve a large number of parameters, which correspond to the weights of the network.

DeepRare: Generic Unsupervised Visual Attention Models

no code implementations • 23 Sep 2021 • Phutphalla Kong, Matei Mancas, Bernard Gosselin, Kimtho Po

In this paper, we propose a new visual attention model called DeepRare2021 (DR21), which uses the power of DNN feature extraction and the genericity of feature-engineered algorithms.
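
A minimal sketch of the general idea, scoring image locations by the rarity (self-information) of their deep-feature activations; the VGG16 backbone, layer cut, histogram-based rarity measure, and mean fusion here are illustrative assumptions, not DR21's actual design:

```python
import numpy as np
import torch
import torchvision.models as models

def rarity_map(feature_map, bins=16):
    # Self-information -log p(activation) per spatial location,
    # estimated from a histogram of the channel's activations.
    flat = feature_map.flatten()
    hist, edges = np.histogram(flat, bins=bins)
    p = hist / max(hist.sum(), 1)
    idx = np.clip(np.digitize(flat, edges[1:-1]), 0, bins - 1)
    return (-np.log(p[idx] + 1e-12)).reshape(feature_map.shape)

def deep_rarity_saliency(image_tensor):
    # image_tensor: (1, 3, H, W), normalized as VGG16 expects.
    backbone = models.vgg16(weights="DEFAULT").features.eval()
    with torch.no_grad():
        feats = backbone[:16](image_tensor)  # conv maps from an early block (assumed cut)
    channels = feats.squeeze(0).numpy()
    # Fuse per-channel rarity maps by averaging (illustrative fusion).
    return np.mean([rarity_map(c) for c in channels], axis=0)
```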

One-Cycle Pruning: Pruning ConvNets Under a Tight Training Budget

1 code implementation • 5 Jul 2021 • Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia

Most of the time, sparsity is introduced using a three-stage pipeline: 1) train the model to convergence, 2) prune the model according to some criterion, 3) fine-tune the pruned model to recover performance.
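
For context, a minimal sketch of that three-stage pipeline using PyTorch's torch.nn.utils.prune; the global L1-magnitude criterion, the restriction to convolutional layers, the epoch counts, and the train helper are illustrative assumptions, not the paper's settings:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def train(model, loader, epochs, lr=1e-3):
    # Plain SGD loop standing in for a real training schedule.
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:  # loader yields (inputs, labels) batches
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def three_stage_pruning(model, loader, train_epochs=90, tune_epochs=10, amount=0.5):
    # 1) Train the model to convergence.
    train(model, loader, train_epochs)
    # 2) Prune according to some criterion (here: global L1 magnitude
    #    over all convolutional weights, one common choice).
    params = [(m, "weight") for m in model.modules() if isinstance(m, nn.Conv2d)]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=amount)
    # 3) Fine-tune the pruned model to recover performance.
    train(model, loader, tune_epochs)
    return model
```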
