Search Results for author: Enzo Tartaglione

Found 40 papers, 20 papers with code

Domain Adaptation for Learned Image Compression with Supervised Adapters

no code implementations • 24 Apr 2024 • Alberto Presta, Gabriele Spadaro, Enzo Tartaglione, Attilio Fiandrotti, Marco Grangetto

In Learned Image Compression (LIC), a model is trained to encode and decode images sampled from a source domain, often outperforming traditional codecs on natural images; yet its performance may be far from optimal on images sampled from different domains.

Debiasing surgeon: fantastic weights and how to find them

no code implementations • 21 Mar 2024 • Rémi Nahon, Ivan Luiz De Moura Matos, Van-Tam Nguyen, Enzo Tartaglione

The emergence of algorithmic biases that can lead to unfair models is nowadays an ever-growing concern.

SCoTTi: Save Computation at Training Time with an adaptive framework

1 code implementation • 19 Dec 2023 • Ziyu Lin, Enzo Tartaglione, Van-Tam Nguyen

On-device training is an emerging approach in machine learning where models are trained on edge devices, aiming to enhance privacy protection and real-time performance.

Enhanced EEG-Based Mental State Classification: A novel approach to eliminate data leakage and improve training optimization for Machine Learning

no code implementations • 14 Dec 2023 • Maxime Girard, Rémi Nahon, Enzo Tartaglione, Van-Tam Nguyen

In this paper, we review prior research and introduce a new methodology for classifying mental state levels from EEG signals using machine learning (ML).

EEG

Weighted Ensemble Models Are Strong Continual Learners

1 code implementation • 14 Dec 2023 • Imad Eddine Marouf, Subhankar Roy, Enzo Tartaglione, Stéphane Lathuilière

In this work, we study the problem of continual learning (CL), where the goal is to learn a model on a sequence of tasks in which the data from previous tasks becomes unavailable while learning on the current task.

Continual Learning
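
The title suggests combining models in parameter space. Below is a minimal sketch of one such scheme, a weighted average of the previous-task and current-task model weights; the mixing coefficient and the averaging rule are assumptions for illustration, not necessarily the paper's method.

```python
import copy
import torch

def weighted_param_ensemble(prev_model, new_model, alpha=0.5):
    """Hedged sketch: merge the previous-task model with the model
    fine-tuned on the current task via a weighted average of parameters."""
    merged = copy.deepcopy(new_model)
    with torch.no_grad():
        for p_merged, p_prev, p_new in zip(merged.parameters(),
                                           prev_model.parameters(),
                                           new_model.parameters()):
            p_merged.copy_(alpha * p_prev + (1.0 - alpha) * p_new)
    return merged
```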

Towards On-device Learning on the Edge: Ways to Select Neurons to Update under a Budget Constraint

1 code implementation • 8 Dec 2023 • Aël Quélennec, Enzo Tartaglione, Pavlo Mozharovskyi, Van-Tam Nguyen

In the realm of efficient on-device learning under extreme memory and computation constraints, successful approaches remain scarce.

Mini but Mighty: Finetuning ViTs with Mini Adapters

1 code implementation • 7 Nov 2023 • Imad Eddine Marouf, Enzo Tartaglione, Stéphane Lathuilière

Vision Transformers (ViTs) have become one of the dominant architectures in computer vision, and pre-trained ViT models are commonly adapted to new tasks via fine-tuning.

Transfer Learning
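
As background on the adapter idea the paper builds on, here is a minimal PyTorch bottleneck adapter; the design choices (GELU, zero-initialized up-projection, residual connection) are common generic ones and not necessarily MiMi's exact architecture.

```python
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic bottleneck adapter inserted into a frozen pre-trained ViT:
    only the small down/up projections are trained per task."""
    def __init__(self, dim, bottleneck_dim=16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck_dim)  # down-projection
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, dim)    # up-projection
        nn.init.zeros_(self.up.weight)              # start as identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # residual adapter
```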

Can Unstructured Pruning Reduce the Depth in Deep Neural Networks?

1 code implementation • 12 Aug 2023 • Zhu Liao, Victor Quétu, Van-Tam Nguyen, Enzo Tartaglione

Pruning is a widely used technique for reducing the size of deep neural networks while maintaining their performance.
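
For readers unfamiliar with the setting, a minimal PyTorch sketch of generic unstructured (magnitude) pruning follows; it illustrates the concept the abstract refers to, not this paper's specific method.

```python
import torch

def magnitude_prune(model, sparsity=0.9):
    """Generic unstructured magnitude pruning: zero out the `sparsity`
    fraction of weights with the smallest absolute value, network-wide."""
    all_weights = torch.cat([p.detach().abs().flatten()
                             for p in model.parameters()])
    threshold = torch.quantile(all_weights, sparsity)  # global magnitude cutoff
    with torch.no_grad():
        for p in model.parameters():
            p.mul_((p.abs() > threshold).float())      # apply binary mask
```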

Sparse Double Descent in Vision Transformers: real or phantom threat?

1 code implementation • 26 Jul 2023 • Victor Quétu, Marta Milovanovic, Enzo Tartaglione

Vision transformers (ViT) have been of broad interest in recent theoretical and empirical works.

Inductive Bias

Mining bias-target Alignment from Voronoi Cells

1 code implementation • ICCV 2023 • Rémi Nahon, Van-Tam Nguyen, Enzo Tartaglione

Despite significant research efforts, deep neural networks are still vulnerable to biases: this raises concerns about their fairness and limits their generalization.

Fairness

Learn how to Prune Pixels for Multi-view Neural Image-based Synthesis

no code implementations • 5 May 2023 • Marta Milovanović, Enzo Tartaglione, Marco Cagnazzo, Félix Henry

Image-based rendering techniques stand at the core of an immersive experience for the user, as they generate novel views given a set of multiple input images.

Neural Rendering

Optimized preprocessing and Tiny ML for Attention State Classification

no code implementations • 20 Mar 2023 • Yinghao Wang, Rémi Nahon, Enzo Tartaglione, Pavlo Mozharovskyi, Van-Tam Nguyen

In this paper, we present a new approach to mental state classification from EEG signals by combining signal processing techniques and machine learning (ML) algorithms.

Classification, Computational Efficiency +1

DSD²: Can We Dodge Sparse Double Descent and Compress the Neural Network Worry-Free?

1 code implementation • 2 Mar 2023 • Victor Quétu, Enzo Tartaglione

Second, we introduce an entropy measure that provides more insight into the onset of this phenomenon and enables the use of traditional stopping criteria.

Can we avoid Double Descent in Deep Neural Networks?

no code implementations • 26 Feb 2023 • Victor Quétu, Enzo Tartaglione

Very recently, an unexpected phenomenon, the "double descent", has caught the attention of the deep learning community.

Unbiased Supervised Contrastive Learning

1 code implementation • 10 Nov 2022 • Carlo Alberto Barbano, Benoit Dufumier, Enzo Tartaglione, Marco Grangetto, Pietro Gori

In this work, we tackle the problem of learning representations that are robust to biases.

Contrastive Learning

Compressing Explicit Voxel Grid Representations: fast NeRFs become also small

no code implementations • 23 Oct 2022 • Chenxi Lola Deng, Enzo Tartaglione

NeRFs have revolutionized the world of per-scene radiance field reconstruction because of their intrinsic compactness.

Packed-Ensembles for Efficient Uncertainty Estimation

1 code implementation • 17 Oct 2022 • Olivier Laurent, Adrien Lafage, Enzo Tartaglione, Geoffrey Daniel, Jean-Marc Martinez, Andrei Bursuc, Gianni Franchi

Deep Ensembles (DE) are a prominent approach for achieving excellent performance on key metrics such as accuracy, calibration, uncertainty estimation, and out-of-distribution detection.

Classifier calibration, Image Classification +2

Information Removal at the bottleneck in Deep Neural Networks

1 code implementation • 30 Sep 2022 • Enzo Tartaglione

Deep learning models are nowadays broadly deployed to solve an incredibly large variety of tasks.

UniToBrain dataset: a Brain Perfusion Dataset

no code implementations • 1 Aug 2022 • Daniele Perlo, Enzo Tartaglione, Umberto Gava, Federico D'Agata, Edwin Benninck, Mauro Bergui

CT perfusion (CTP) is a medical exam that measures the passage of a bolus of contrast solution through the brain on a pixel-by-pixel basis.

To update or not to update? Neurons at equilibrium in deep models

1 code implementation • 19 Jul 2022 • Andrea Bragagnolo, Enzo Tartaglione, Marco Grangetto

Recent advances in deep learning optimization showed that, with some a-posteriori information on fully-trained models, it is possible to match their performance by training only a subset of their parameters.
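
One way to exploit such a-posteriori information, in the spirit of the title, is to freeze neurons whose responses have stopped changing between epochs. The sketch below is an illustrative assumption (the threshold, the cosine-based velocity, and the dict-of-activations interface are all placeholders), not the paper's exact criterion.

```python
import torch
import torch.nn.functional as F

def neurons_at_equilibrium(prev_outputs, curr_outputs, eps=1e-3):
    """Hedged sketch: compare each neuron's outputs on a held-out set across
    consecutive epochs; neurons whose responses barely move are 'at
    equilibrium' and can be excluded from further updates."""
    frozen = []
    for name in curr_outputs:
        sim = F.cosine_similarity(prev_outputs[name].flatten(),
                                  curr_outputs[name].flatten(), dim=0)
        velocity = 1.0 - sim.item()   # how much the response still moves
        if velocity < eps:
            frozen.append(name)       # stop updating this neuron's parameters
    return frozen
```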

Disentangling private classes through regularization

no code implementations • 5 Jul 2022 • Enzo Tartaglione, Francesca Gennari, Marco Grangetto

In this work we propose DisP, an approach for deep learning models that disentangles the information related to classes we wish to keep private from the data being processed.

Decision Making

Unsupervised Learning of Unbiased Visual Representations

no code implementations • 26 Apr 2022 • Carlo Alberto Barbano, Enzo Tartaglione, Marco Grangetto

We propose a fully unsupervised debiasing framework consisting of three steps: first, we exploit the natural preference for learning malignant biases, obtaining a bias-capturing model; then, we perform a pseudo-labelling step to obtain bias labels; finally, we employ state-of-the-art supervised debiasing techniques to obtain an unbiased model.
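
The three steps lend themselves to a compact outline. Below is a minimal Python sketch of the pipeline described above; `train_model`, `debias_train`, and the capacity knob are hypothetical placeholders standing in for whichever bias-capturing training and supervised debiasing methods are plugged in, not the paper's actual API.

```python
# Hedged sketch of the three-step unsupervised debiasing pipeline.
# `train_model` and `debias_train` are hypothetical placeholders.

def unsupervised_debiasing(data):
    # Step 1: exploit the natural preference for learning easy, bias-aligned
    # shortcuts (e.g. via a low-capacity model) to get a bias-capturing model.
    bias_model = train_model(data, capacity="low")

    # Step 2: pseudo-label each sample as bias-aligned (1) or
    # bias-conflicting (0), depending on whether the bias-capturing model
    # classifies it correctly.
    bias_labels = [int(bias_model(x) == y) for x, y in data]

    # Step 3: hand the pseudo bias labels to an off-the-shelf supervised
    # debiasing technique to train the final, unbiased model.
    return debias_train(data, bias_labels)
```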

REM: Routing Entropy Minimization for Capsule Networks

no code implementations • 4 Apr 2022 • Riccardo Renzulli, Enzo Tartaglione, Marco Grangetto

This paper proposes REM, a technique that minimizes the entropy of the parse tree-like structure, improving its explainability.
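
As a rough illustration of entropy minimization over a routing structure, the sketch below computes the entropy of capsule routing (coupling) coefficients so it can be added to the training loss; the exact structure REM regularizes and the loss weighting are assumptions for illustration.

```python
import torch

def routing_entropy(coupling, eps=1e-12):
    """Hedged sketch: entropy of the routing (coupling) coefficients of a
    capsule network, averaged over lower-level capsules; minimizing it makes
    each capsule route to fewer parents, sharpening the parse tree."""
    return -(coupling * (coupling + eps).log()).sum(dim=-1).mean()

# usage (beta is an assumed weighting): loss = task_loss + beta * routing_entropy(c)
```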

The rise of the lottery heroes: why zero-shot pruning is hard

no code implementations • 24 Feb 2022 • Enzo Tartaglione

Recent advances in deep learning optimization showed that only a subset of the parameters is really necessary to successfully train a model.

HEMP: High-order Entropy Minimization for neural network comPression

no code implementations • 12 Jul 2021 • Enzo Tartaglione, Stéphane Lathuilière, Attilio Fiandrotti, Marco Cagnazzo, Marco Grangetto

We formulate the entropy of a quantized artificial neural network as a differentiable function that can be plugged as a regularization term into the cost function minimized by gradient descent.

Neural Network Compression, Quantization +1
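
As a rough illustration of a differentiable entropy regularizer, here is a minimal PyTorch sketch. It computes only a first-order entropy over soft assignments of weights to quantization centers (the paper's formulation is high-order, and the temperature, centers, and weighting here are assumptions for illustration).

```python
import torch

def soft_entropy_regularizer(weights, centers, temperature=0.1):
    """Hedged sketch: differentiable (first-order) entropy of soft
    assignments of weights to quantization centers."""
    # squared distance of each weight to each center: (n_weights, n_centers)
    d = (weights.view(-1, 1) - centers.view(1, -1)) ** 2
    p = torch.softmax(-d / temperature, dim=1)   # soft bin assignments
    p_avg = p.mean(dim=0)                        # empirical bin distribution
    return -(p_avg * torch.log(p_avg + 1e-12)).sum()

# usage (lambda_reg assumed): loss = task_loss + lambda_reg * soft_entropy_regularizer(w, c)
```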

EnD: Entangling and Disentangling deep representations for bias correction

2 code implementations • CVPR 2021 • Enzo Tartaglione, Carlo Alberto Barbano, Marco Grangetto

Artificial neural networks achieve state-of-the-art performance in an ever-growing number of tasks, and nowadays they are used to solve an incredibly large variety of problems.

Classification (ρ=0.990), Classification (ρ=0.995) +6

SeReNe: Sensitivity based Regularization of Neurons for Structured Sparsity in Neural Networks

1 code implementation • 7 Feb 2021 • Enzo Tartaglione, Andrea Bragagnolo, Francesco Odierna, Attilio Fiandrotti, Marco Grangetto

Deep neural networks include millions of learnable parameters, making their deployment over resource-constrained devices problematic.

A two-step explainable approach for COVID-19 computer-aided diagnosis from chest x-ray images

no code implementations • 25 Jan 2021 • Carlo Alberto Barbano, Enzo Tartaglione, Claudio Berzovini, Marco Calandri, Marco Grangetto

Early screening of patients is critical to ensure an immediate and fast response against the spread of COVID-19.

LOss-Based SensiTivity rEgulaRization: towards deep sparse neural networks

no code implementations • 16 Nov 2020 • Enzo Tartaglione, Andrea Bragagnolo, Attilio Fiandrotti, Marco Grangetto

LOBSTER (LOss-Based SensiTivity rEgulaRization) is a method for training neural networks with a sparse topology.
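
A minimal PyTorch sketch of the general idea, loss-based sensitivity regularization, follows; the normalization and the exact shrinkage rule are illustrative assumptions, not the paper's precise update.

```python
import torch

def lobster_style_step(params, lr=1e-2, lam=1e-4):
    """Hedged sketch: after loss.backward(), shrink the parameters the loss
    is least sensitive to, pushing the network towards a sparse topology."""
    with torch.no_grad():
        for w in params:
            if w.grad is None:
                continue
            sens = w.grad.abs()
            sens = sens / (sens.max() + 1e-12)        # normalize to [0, 1]
            insensitivity = torch.relu(1.0 - sens)    # high where loss barely reacts
            w -= lr * w.grad                          # usual gradient step
            w -= lr * lam * insensitivity * w         # selective weight decay
```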

A non-discriminatory approach to ethical deep learning

no code implementations • 4 Aug 2020 • Enzo Tartaglione, Marco Grangetto

Artificial neural networks achieve state-of-the-art performance in an ever-growing number of tasks; nowadays they are used to solve an incredibly large variety of problems.

Image Classification

Pruning artificial neural networks: a way to find well-generalizing, high-entropy sharp minima

1 code implementation • 30 Apr 2020 • Enzo Tartaglione, Andrea Bragagnolo, Marco Grangetto

Recently, a race towards the simplification of deep networks has begun, showing that it is indeed possible to reduce the size of these models with minimal or no performance loss.

Transfer Learning

Unveiling COVID-19 from Chest X-ray with deep learning: a hurdles race with small data

9 code implementations • 11 Apr 2020 • Enzo Tartaglione, Carlo Alberto Barbano, Claudio Berzovini, Marco Calandri, Marco Grangetto

The possibility to use widespread and simple chest X-ray (CXR) imaging for early screening of COVID-19 patients is attracting much interest from both the clinical and the AI community.

Small Data Image Classification, Transfer Learning

Post-synaptic potential regularization has potential

1 code implementation • 19 Jul 2019 • Enzo Tartaglione, Daniele Perlo, Marco Grangetto

Improving generalization is one of the main challenges for training deep neural networks on classification tasks.

Classification, Data Augmentation +1
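
Going by the title, the regularizer targets post-synaptic potentials, i.e. the pre-activation values of each layer. A minimal sketch of such a penalty is shown below; the squared-norm form and the weighting are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import torch

def psp_regularizer(pre_activations, lam=1e-4):
    """Hedged sketch: penalize the squared post-synaptic potentials
    (pre-activation values collected from each layer during the forward
    pass), to be added to the classification loss."""
    return lam * sum((z ** 2).mean() for z in pre_activations)

# usage: loss = cross_entropy + psp_regularizer(collected_preactivations)
```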

Learning Sparse Neural Networks via Sensitivity-Driven Regularization

no code implementations • NeurIPS 2018 • Enzo Tartaglione, Skjalg Lepsøy, Attilio Fiandrotti, Gianluca Francini

The ever-increasing number of parameters in deep neural networks poses challenges for memory-limited applications.

On the role of synaptic stochasticity in training low-precision neural networks

no code implementations • 26 Oct 2017 • Carlo Baldassi, Federica Gerace, Hilbert J. Kappen, Carlo Lucibello, Luca Saglietti, Enzo Tartaglione, Riccardo Zecchina

Stochasticity and limited precision of synaptic weights in neural network models are key aspects of both biological and hardware modeling of learning processes.
