Search Results for author: Bernard Gosselin

Found 7 papers, 2 papers with code

Induced Feature Selection by Structured Pruning

no code implementations • 20 Mar 2023 • Nathan Hubens, Victor Delvigne, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia

The advent of sparsity-inducing techniques in neural networks has been of great help in the last few years.

Feature Selection

People Tracking and Re-Identifying in Distributed Contexts: Extension Study of PoseTReID

1 code implementation • 20 May 2022 • Ratha Siv, Matei Mancas, Bernard Gosselin, Dona Valy, Sokchenda Sreng

We use the well-known bounding-box detector YOLO (v4) for detection, compared against OpenPose, which was used in our previous paper; we use SORT and DeepSORT for tracking, compared against the centroid method also used previously; and, most importantly, for re-identification we use several deep learning methods, such as MLFN, OSNet, and OSNet-AIN with our custom classification layer, compared against FaceNet, which was also used in our previous paper.
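
To make that pipeline concrete, here is a minimal sketch of its structure. The detector, tracker, and embedder functions are hypothetical stand-ins, not the PoseTReID code: in practice they would wrap YOLOv4, SORT/DeepSORT, and an OSNet-style re-identification backbone, respectively.

    import numpy as np

    def detect_people(frame):
        # Stand-in for a YOLOv4 person detector: returns [x, y, w, h] boxes.
        return [np.array([10, 20, 50, 120])]  # dummy detection

    def update_tracks(boxes):
        # Stand-in for SORT/DeepSORT: assigns a track id to each box.
        return list(enumerate(boxes))

    def embed_crop(frame, box):
        # Stand-in for an OSNet-style re-identification embedding.
        return np.random.rand(512)

    gallery = {"person_0": np.random.rand(512)}  # identity -> reference embedding

    def reidentify(embedding, threshold=0.7):
        # Match an embedding to the gallery by cosine similarity.
        best_id, best_sim = None, threshold
        for person_id, ref in gallery.items():
            sim = ref @ embedding / (np.linalg.norm(ref) * np.linalg.norm(embedding))
            if sim > best_sim:
                best_id, best_sim = person_id, sim
        return best_id

    frame = np.zeros((480, 640, 3))  # stand-in video frame
    for track_id, box in update_tracks(detect_people(frame)):
        identity = reidentify(embed_crop(frame, box))
        print(f"track {track_id} -> identity {identity}")

The key design point is the separation of concerns: tracking maintains short-term identities within a camera view, while the gallery matching step re-identifies people across views or after occlusions.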

Improve Convolutional Neural Network Pruning by Maximizing Filter Variety

no code implementations • 11 Mar 2022 • Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia

This technique ensures that the selection criterion focuses on redundant filters while retaining rare ones, thus maximizing the variety of the remaining filters.
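
As an illustration of this idea, the sketch below scores each convolutional filter by its highest cosine similarity to any other filter in the layer and zeroes out the most redundant ones; the paper's actual selection criterion may differ.

    import torch

    def redundancy_scores(conv_weight):
        # conv_weight: (out_channels, in_channels, kH, kW) tensor.
        # Score each filter by its highest cosine similarity to any other filter.
        flat = conv_weight.flatten(1)            # one row per filter
        flat = flat / flat.norm(dim=1, keepdim=True)
        sim = flat @ flat.t()                    # pairwise cosine similarities
        sim.fill_diagonal_(-1.0)                 # ignore self-similarity
        return sim.max(dim=1).values

    def prune_most_redundant(conv_weight, n_prune):
        # Zero out the n_prune filters most similar to another filter,
        # keeping the "rare" (dissimilar) filters intact.
        scores = redundancy_scores(conv_weight)
        to_prune = scores.topk(n_prune).indices
        pruned = conv_weight.clone()
        pruned[to_prune] = 0.0
        return pruned

    layer = torch.nn.Conv2d(16, 32, kernel_size=3)
    layer.weight.data = prune_most_redundant(layer.weight.data, n_prune=8)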

Network Pruning

An Experimental Study of the Impact of Pre-training on the Pruning of a Convolutional Neural Network

no code implementations • 15 Dec 2021 • Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia

Neural networks usually involve a large number of parameters, which correspond to the weights of the network.

DeepRare: Generic Unsupervised Visual Attention Models

no code implementations • 23 Sep 2021 • Phutphalla Kong, Matei Mancas, Bernard Gosselin, Kimtho Po

In this paper, we propose a new visual attention model called DeepRare2021 (DR21), which combines the power of DNN feature extraction with the genericity of feature-engineered algorithms.
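
A rough sketch of that combination, under the assumption that rarity is measured as a simple histogram-based self-information over deep feature maps; the actual DR21 rarity computation is more elaborate.

    import torch
    import torchvision.models as models

    # Pretrained feature extractor (downloads ImageNet weights on first use).
    vgg_features = models.vgg16(weights="IMAGENET1K_V1").features.eval()

    def rarity_map(feature_map, bins=16):
        # Per-channel rarity: pixels whose activation values are infrequent
        # in the channel's histogram are considered salient.
        c, h, w = feature_map.shape
        saliency = torch.zeros(h, w)
        for ch in range(c):
            fm = feature_map[ch]
            hist = torch.histc(fm, bins=bins, min=fm.min().item(), max=fm.max().item())
            prob = hist / hist.sum()
            # Map each pixel to the self-information (-log p) of its bin.
            idx = ((fm - fm.min()) / (fm.max() - fm.min() + 1e-8) * (bins - 1)).long()
            saliency += -torch.log(prob[idx] + 1e-8)
        return saliency / c

    with torch.no_grad():
        image = torch.rand(1, 3, 224, 224)   # stand-in input image
        feats = vgg_features[:10](image)[0]  # mid-level feature maps
        saliency = rarity_map(feats)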

One-Cycle Pruning: Pruning ConvNets Under a Tight Training Budget

1 code implementation • 5 Jul 2021 • Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia

Most of the time, sparsity is introduced using a three-stage pipeline: 1) train the model to convergence, 2) prune the model according to some criterion, 3) fine-tune the pruned model to recover performance.
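
A minimal PyTorch version of this three-stage pipeline, using the built-in magnitude-pruning utility; train() is a placeholder for a standard training loop, not shown here.

    import torch
    import torch.nn.utils.prune as prune

    model = torch.nn.Sequential(
        torch.nn.Linear(784, 256), torch.nn.ReLU(), torch.nn.Linear(256, 10)
    )

    def train(model, epochs):
        pass  # placeholder: the usual forward/backward/optimizer loop

    # 1) train the dense model to convergence
    train(model, epochs=100)

    # 2) prune, e.g. remove the 50% of weights with the smallest magnitude
    for module in model.modules():
        if isinstance(module, torch.nn.Linear):
            prune.l1_unstructured(module, name="weight", amount=0.5)

    # 3) fine-tune the pruned model to recover the lost accuracy
    train(model, epochs=20)

The titular one-cycle approach targets settings where this full pipeline does not fit a tight training budget.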
