Search Results for author: Nathan Hubens

Found 9 papers, 1 paper with code

A Recipe for Efficient SBIR Models: Combining Relative Triplet Loss with Batch Normalization and Knowledge Distillation

no code implementations30 May 2023 Omar Seddati, Nathan Hubens, Stéphane Dupont, Thierry Dutoit

Then, we introduce a Relative Triplet Loss (RTL), an adapted triplet loss that overcomes those limitations through loss weighting based on anchor similarity.
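The snippet does not spell out the exact weighting function, so the following is only a minimal PyTorch sketch of a weighted triplet loss; the similarity-based weight used here is a hypothetical stand-in for the paper's actual RTL weighting.

```python
import torch
import torch.nn.functional as F

def weighted_triplet_loss(anchor, positive, negative, margin=0.2):
    """Minimal sketch of a similarity-weighted triplet loss (illustrative only)."""
    d_ap = F.pairwise_distance(anchor, positive)     # anchor-positive distance
    d_an = F.pairwise_distance(anchor, negative)     # anchor-negative distance

    # Hypothetical weight: map cosine similarity to [0, 1] and down-weight
    # triplets whose anchor is already close to its positive.
    sim = (F.cosine_similarity(anchor, positive) + 1.0) / 2.0
    weight = 1.0 - sim

    loss = F.relu(d_ap - d_an + margin)              # standard triplet hinge
    return (weight * loss).mean()
```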

Data Augmentation Knowledge Distillation +2

Induced Feature Selection by Structured Pruning

no code implementations20 Mar 2023 Nathan Hubens, Victor Delvigne, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia

The advent of sparsity-inducing techniques in neural networks has been of great help in the last few years.

feature selection

FasterAI: A Lightweight Library for Creating Sparse Neural Networks

no code implementations3 Jul 2022 Nathan Hubens

FasterAI is a PyTorch-based library aiming to facilitate the use of deep neural network compression techniques such as sparsification, pruning, knowledge distillation, or regularization.
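As an illustration of the kind of sparsification such a library automates, here is a short sketch using PyTorch's built-in pruning utilities; this is not FasterAI's own API.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy model standing in for any convolutional network.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 32, 3))

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        # Zero out the 50% smallest-magnitude weights in each conv layer.
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")   # make the sparsity permanent
```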

Knowledge Distillation

Improve Convolutional Neural Network Pruning by Maximizing Filter Variety

no code implementations11 Mar 2022 Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia

This technique ensures that the selection criterion focuses on redundant filters while retaining the rare ones, thus maximizing the variety of the remaining filters.
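One possible reading of such a criterion, sketched below as an assumption rather than the paper's exact method, is to score each filter by its similarity to its closest neighbour and mark the most redundant ones for pruning.

```python
import torch
import torch.nn.functional as F

def redundant_filter_indices(conv_weight, n_prune):
    """Illustrative redundancy-based filter selection (not the paper's exact criterion)."""
    flat = conv_weight.flatten(start_dim=1)               # (out_channels, in*k*k)
    sim = F.cosine_similarity(flat.unsqueeze(1), flat.unsqueeze(0), dim=-1)
    sim.fill_diagonal_(-1.0)                              # ignore self-similarity
    redundancy = sim.max(dim=1).values                    # closest-neighbour similarity
    return torch.topk(redundancy, n_prune).indices        # most redundant filters first
```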

Network Pruning

Where Is My Mind (looking at)? Predicting Visual Attention from Brain Activity

no code implementations11 Jan 2022 Victor Delvigne, Noé Tits, Luca La Fisca, Nathan Hubens, Antoine Maiorca, Hazem Wannous, Thierry Dutoit, Jean-Philippe Vandeborre

The code and dataset considered in this paper have been made available at https://figshare.com/s/3e353bd1c621962888ad to promote research in the field.

EEG

An Experimental Study of the Impact of Pre-training on the Pruning of a Convolutional Neural Network

no code implementations15 Dec 2021 Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia

Neural networks usually involve a large number of parameters, which correspond to the weights of the network.

One-Cycle Pruning: Pruning ConvNets Under a Tight Training Budget

1 code implementation5 Jul 2021 Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia

Most of the time, sparsity is introduced using a three-stage pipeline: 1) train the model to convergence, 2) prune the model according to some criterion, 3) fine-tune the pruned model to recover performance.
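For reference, a minimal PyTorch sketch of this three-stage baseline follows; the training function, epoch counts, and pruning amount are placeholders, not values from the paper.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def three_stage_pruning(model, train_fn, amount=0.5):
    """Sketch of the classic train -> prune -> fine-tune pipeline."""
    train_fn(model, epochs=90)                            # 1) train to convergence
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(params,                     # 2) prune globally by magnitude
                              pruning_method=prune.L1Unstructured,
                              amount=amount)
    train_fn(model, epochs=20)                            # 3) fine-tune to recover performance
    return model
```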

Modulated Self-attention Convolutional Network for VQA

no code implementations8 Oct 2019 Jean-Benoit Delbrouck, Antoine Maiorca, Nathan Hubens, Stéphane Dupont

As new datasets for real-world visual reasoning and compositional question answering are emerging, it might be necessary to use visual feature extraction as an end-to-end process during training.

Question Answering Visual Question Answering +1
