no code implementations • 20 Mar 2023 • Nathan Hubens, Victor Delvigne, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia
The advent of sparsity-inducing techniques in neural networks has been of great help in recent years.
no code implementations • 11 Mar 2022 • Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia
This technique ensures that the selection criterion focuses on redundant filters while retaining rare ones, thus maximizing the variety of the remaining filters.
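The excerpt does not spell out how redundancy is measured, so the following is only an illustrative sketch: it scores each filter by its maximum cosine similarity to any other filter, so high-scoring (redundant) filters would be pruned first while dissimilar, rare filters are kept. The function name and the choice of cosine similarity are assumptions, not necessarily the authors' criterion.

```python
import math

def redundancy_scores(filters):
    """Score each filter (a flat list of weights) by its maximum cosine
    similarity to any other filter. Higher score = more redundant.
    Hypothetical helper, not the paper's exact criterion."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    return [max(cos(f, g) for j, g in enumerate(filters) if j != i)
            for i, f in enumerate(filters)]
```

Under this scheme, two near-identical filters would both receive scores close to 1, so one of them becomes a pruning candidate, while a filter orthogonal to all others scores near 0 and is preserved.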
no code implementations • 15 Dec 2021 • Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia
Neural networks usually involve a large number of parameters, which correspond to the weights of the network.
1 code implementation • 5 Jul 2021 • Nathan Hubens, Matei Mancas, Bernard Gosselin, Marius Preda, Titus Zaharia
Most of the time, sparsity is introduced using a three-stage pipeline: 1) train the model to convergence, 2) prune the model according to some criterion, 3) fine-tune the pruned model to recover performance.
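The pruning step of that pipeline is often implemented with simple magnitude pruning, where the smallest-magnitude weights are zeroed out. A minimal sketch of such a step (pure Python, weights as a flat list; the helper name and signature are assumptions for illustration):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest
    magnitude. Illustrative magnitude-pruning step, one common choice
    for stage 2 of the train/prune/fine-tune pipeline."""
    k = int(sparsity * len(weights))
    if k == 0:
        return list(weights)
    # Magnitude threshold at or below which weights are removed.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

For example, pruning `[0.1, -0.5, 0.9, -0.05]` at 50% sparsity zeroes the two smallest-magnitude entries, yielding `[0.0, -0.5, 0.9, 0.0]`; fine-tuning (stage 3) would then update only the surviving weights to recover accuracy.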