Search Results for author: Andrea Bragagnolo

Found 4 papers, 3 papers with code

To update or not to update? Neurons at equilibrium in deep models

1 code implementation • 19 Jul 2022 • Andrea Bragagnolo, Enzo Tartaglione, Marco Grangetto

Recent advances in deep learning optimization showed that, with some a-posteriori information on fully-trained models, it is possible to match the same performance by simply training a subset of their parameters.
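The core idea of training only a subset of a model's parameters can be sketched with a masked gradient step. This is a generic illustration, not the paper's NEq method; the function name `masked_sgd_step` and the fixed mask are hypothetical:

```python
import numpy as np

# Hedged sketch: apply an SGD update only to a chosen subset of
# parameters; entries where the mask is False stay frozen at their
# current value. (Illustrative only, not the paper's algorithm.)
def masked_sgd_step(params, grads, mask, lr=0.1):
    return np.where(mask, params - lr * grads, params)

params = np.array([1.0, 2.0, 3.0, 4.0])
grads = np.array([0.5, 0.5, 0.5, 0.5])
mask = np.array([True, False, True, False])  # train only half the weights
new_params = masked_sgd_step(params, grads, mask)
# masked-out entries (indices 1 and 3) are unchanged
```

In the paper's setting the mask would be chosen from a-posteriori information about which neurons have reached equilibrium, rather than fixed in advance as here.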

SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks

1 code implementation • 7 Feb 2021 • Enzo Tartaglione, Andrea Bragagnolo, Francesco Odierna, Attilio Fiandrotti, Marco Grangetto

Deep neural networks include millions of learnable parameters, making their deployment over resource-constrained devices problematic.
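Structured sparsity, as targeted by SeReNe, removes whole neurons rather than individual weights, so the pruned units can actually be dropped at deployment time. A generic row-wise sketch (not SeReNe's sensitivity-based procedure; `prune_neurons` and the score values are hypothetical):

```python
import numpy as np

# Hedged sketch of structured sparsity: zero out entire output neurons
# (rows of a layer's weight matrix) whose per-neuron score falls below
# a threshold. SeReNe derives such scores from neuron sensitivity; here
# the scores are just illustrative numbers.
def prune_neurons(W, scores, threshold):
    keep = scores >= threshold          # one boolean per output neuron
    return W * keep[:, None], keep

W = np.arange(12, dtype=float).reshape(4, 3)   # 4 neurons, 3 inputs each
scores = np.array([0.9, 0.05, 0.7, 0.01])       # e.g. a sensitivity proxy
W_pruned, keep = prune_neurons(W, scores, 0.1)
# rows 1 and 3 become all-zero; rows 0 and 2 are untouched
```

Because entire rows are removed, the layer can be physically shrunk (fewer output units), which is what makes structured sparsity attractive on resource-constrained devices.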

LOss-Based SensiTivity rEgulaRization: towards deep sparse neural networks

no code implementations • 16 Nov 2020 • Enzo Tartaglione, Andrea Bragagnolo, Attilio Fiandrotti, Marco Grangetto

LOBSTER (LOss-Based SensiTivity rEgulaRization) is a method for training neural networks having a sparse topology.
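The general flavor of loss-based sensitivity regularization is that weights whose loss gradient is small are considered insensitive and are shrunk toward zero, while sensitive weights are left largely alone. A minimal sketch inspired by, but not identical to, LOBSTER (the function `sensitivity_shrink` and its normalization are assumptions):

```python
import numpy as np

# Hedged sketch: shrink each weight in proportion to its estimated
# insensitivity, where insensitivity is high when the loss gradient
# magnitude is low. Not the paper's exact update rule.
def sensitivity_shrink(w, loss_grad, lam=0.5):
    sensitivity = np.abs(loss_grad)
    insensitivity = 1.0 - sensitivity / (sensitivity.max() + 1e-12)
    return w * (1.0 - lam * insensitivity)

w = np.array([1.0, 1.0, 1.0])
g = np.array([1.0, 0.5, 0.0])   # third weight barely affects the loss
w_new = sensitivity_shrink(w, g)
# the weight with zero gradient shrinks the most
```

Iterating such a rule during training drives insensitive weights toward zero, yielding the sparse topology the abstract refers to.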

Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima

1 code implementation • 30 Apr 2020 • Enzo Tartaglione, Andrea Bragagnolo, Marco Grangetto

Recently, a race towards the simplification of deep networks has begun, showing that it is effectively possible to reduce the size of these models with minimal or no performance loss.

Task: Transfer Learning
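The simplest baseline in this line of work is magnitude pruning: remove the fraction of weights with the smallest absolute value. This sketch shows the generic baseline, not the paper's specific procedure (`magnitude_prune` is a hypothetical helper):

```python
import numpy as np

# Hedged sketch of classic magnitude pruning: zero out the given
# fraction of weights with the smallest absolute value. Papers like the
# one above study what such size reduction does to the loss landscape.
def magnitude_prune(w, sparsity=0.5):
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w).ravel())[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

w = np.array([0.1, -2.0, 0.05, 1.5, -0.2, 3.0])
w_sparse = magnitude_prune(w, sparsity=0.5)
# the three smallest-magnitude weights (0.1, 0.05, -0.2) become zero
```

In practice pruning is usually interleaved with fine-tuning, which is how the "minimal or no performance loss" mentioned in the abstract is achieved.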
