no code implementations • 27 Apr 2024 • Victor Quétu, Zhu Liao, Enzo Tartaglione
While deep neural networks are highly effective at solving complex tasks, large pre-trained models are commonly employed even to solve considerably simpler downstream tasks, which do not necessarily require a large model's complexity.
no code implementations • 24 Apr 2024 • Alberto Presta, Gabriele Spadaro, Enzo Tartaglione, Attilio Fiandrotti, Marco Grangetto
In Learned Image Compression (LIC), a model is trained to encode and decode images sampled from a source domain, often outperforming traditional codecs on natural images; yet its performance may be far from optimal on images sampled from different domains.
no code implementations • 24 Apr 2024 • Zhu Liao, Victor Quétu, Van-Tam Nguyen, Enzo Tartaglione
While deep neural networks are highly effective at solving complex tasks, their computational demands can hinder their usefulness in real-time applications and on resource-limited systems.
no code implementations • 21 Mar 2024 • Rémi Nahon, Ivan Luiz De Moura Matos, Van-Tam Nguyen, Enzo Tartaglione
The emergence of algorithmic biases that can lead to unfair models is nowadays an ever-growing concern.
1 code implementation • 19 Dec 2023 • Ziyu Lin, Enzo Tartaglione, Van-Tam Nguyen
On-device training is an emerging approach in machine learning where models are trained on edge devices, aiming to enhance privacy protection and real-time performance.
1 code implementation • 19 Dec 2023 • Carl De Sousa Trias, Mihai Petru Mitrea, Attilio Fiandrotti, Marco Cagnazzo, Sumanta Chaudhuri, Enzo Tartaglione
We advance a method to re-synchronize the order of permuted neurons.
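Re-synchronization of permuted neurons can be illustrated with a minimal sketch (not the paper's method): since permuting a layer's neurons only shuffles the rows of its weight matrix, the original order can often be recovered by matching each reference row to its nearest permuted row. All names here are illustrative.

```python
import numpy as np

def recover_permutation(w_ref, w_perm):
    """Greedily match permuted neurons (rows) back to a reference layer.

    w_ref, w_perm: (n_neurons, fan_in) weight matrices, where w_perm is
    w_ref with its rows shuffled. Returns indices `perm` such that
    w_perm[perm] restores the reference row order.
    """
    n = w_ref.shape[0]
    # pairwise squared distances between reference rows and permuted rows
    d = ((w_ref[:, None, :] - w_perm[None, :, :]) ** 2).sum(-1)
    perm = np.full(n, -1)
    used = set()
    for i in np.argsort(d.min(axis=1)):      # most confident matches first
        for j in np.argsort(d[i]):
            if j not in used:
                perm[i] = j
                used.add(j)
                break
    return perm

rng = np.random.default_rng(0)
w = rng.normal(size=(6, 4))
w_shuffled = w[rng.permutation(6)]           # simulate a permuted layer
restored = w_shuffled[recover_permutation(w, w_shuffled)]
# `restored` rows are back in the reference order
```

With distinct random weight rows the nearest-neighbour match is exact; in practice noise or fine-tuning would call for a more robust assignment.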
1 code implementation • 14 Dec 2023 • Imad Eddine Marouf, Subhankar Roy, Enzo Tartaglione, Stéphane Lathuilière
In this work, we study the problem of continual learning (CL), where the goal is to learn a model on a sequence of tasks while the data from previous tasks becomes unavailable when learning on the current task.
no code implementations • 14 Dec 2023 • Maxime Girard, Rémi Nahon, Enzo Tartaglione, Van-Tam Nguyen
In this paper, we explore prior research and introduce a new methodology for classifying mental state levels based on EEG signals utilizing machine learning (ML).
1 code implementation • 8 Dec 2023 • Aël Quélennec, Enzo Tartaglione, Pavlo Mozharovskyi, Van-Tam Nguyen
In the realm of efficient on-device learning under extreme memory and computation constraints, a significant gap in successful approaches persists.
1 code implementation • 7 Nov 2023 • Imad Eddine Marouf, Enzo Tartaglione, Stéphane Lathuilière
Vision Transformers (ViTs) have become one of the dominant architectures in computer vision, and pre-trained ViT models are commonly adapted to new tasks via fine-tuning.
2 code implementations • 17 Oct 2023 • Imad Eddine Marouf, Subhankar Roy, Enzo Tartaglione, Stéphane Lathuilière
However, repeated fine-tuning on each task destroys the rich representations of the pre-trained models (PTMs) and further leads to forgetting previous tasks.
1 code implementation • 12 Aug 2023 • Zhu Liao, Victor Quétu, Van-Tam Nguyen, Enzo Tartaglione
Pruning is a widely used technique for reducing the size of deep neural networks while maintaining their performance.
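As a point of reference for the technique discussed above, a minimal sketch of unstructured magnitude pruning (the most common baseline, not this paper's specific method) looks as follows; the threshold-by-partition approach and the 50% sparsity are illustrative choices.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.8):
    """Zero out the smallest-magnitude weights (unstructured pruning).

    Keeps the largest (1 - sparsity) fraction of weights by absolute value.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)            # number of weights to remove
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold       # keep only large weights
    return weights * mask

w = np.array([[0.1, -2.0, 0.03],
              [1.5, -0.2, 0.7]])
pruned = magnitude_prune(w, sparsity=0.5)    # half of the weights set to 0
```

The resulting mask can then be kept fixed while the surviving weights are fine-tuned to recover any lost accuracy.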
1 code implementation • 26 Jul 2023 • Victor Quétu, Marta Milovanovic, Enzo Tartaglione
Vision Transformers (ViTs) have been of broad interest in recent theoretical and empirical works.
no code implementations • 5 May 2023 • Marta Milovanović, Enzo Tartaglione, Marco Cagnazzo, Félix Henry
Image-based rendering techniques stand at the core of an immersive experience for the user, as they generate novel views given a set of multiple input images.
1 code implementation • ICCV 2023 • Rémi Nahon, Van-Tam Nguyen, Enzo Tartaglione
Despite significant research efforts, deep neural networks are still vulnerable to biases: this raises concerns about their fairness and limits their generalization.
no code implementations • 20 Mar 2023 • Yinghao Wang, Rémi Nahon, Enzo Tartaglione, Pavlo Mozharovskyi, Van-Tam Nguyen
In this paper, we present a new approach to mental state classification from EEG signals by combining signal processing techniques and machine learning (ML) algorithms.
1 code implementation • 2 Mar 2023 • Victor Quétu, Enzo Tartaglione
Second, we introduce an entropy measure providing more insights into the onset of this phenomenon and enabling the use of traditional stopping criteria.
no code implementations • 26 Feb 2023 • Victor Quétu, Enzo Tartaglione
Very recently, an unexpected phenomenon, the "double descent", has caught the attention of the deep learning community.
1 code implementation • 10 Nov 2022 • Carlo Alberto Barbano, Benoit Dufumier, Enzo Tartaglione, Marco Grangetto, Pietro Gori
In this work, we tackle the problem of learning representations that are robust to biases.
no code implementations • 23 Oct 2022 • Chenxi Lola Deng, Enzo Tartaglione
Neural Radiance Fields (NeRFs) have revolutionized the world of per-scene radiance field reconstruction because of their intrinsic compactness.
1 code implementation • 17 Oct 2022 • Olivier Laurent, Adrien Lafage, Enzo Tartaglione, Geoffrey Daniel, Jean-Marc Martinez, Andrei Bursuc, Gianni Franchi
Deep Ensembles (DE) are a prominent approach for achieving excellent performance on key metrics such as accuracy, calibration, uncertainty estimation, and out-of-distribution detection.
1 code implementation • 30 Sep 2022 • Enzo Tartaglione
Deep learning models are nowadays broadly deployed to solve an incredibly large variety of tasks.
no code implementations • 1 Aug 2022 • Daniele Perlo, Enzo Tartaglione, Umberto Gava, Federico D'Agata, Edwin Benninck, Mauro Bergui
CT perfusion (CTP) is a medical exam for measuring the passage of a bolus of contrast solution through the brain on a pixel-by-pixel basis.
1 code implementation • 19 Jul 2022 • Andrea Bragagnolo, Enzo Tartaglione, Marco Grangetto
Recent advances in deep learning optimization showed that, with some a posteriori information on fully-trained models, it is possible to match their performance by training only a subset of their parameters.
no code implementations • 5 Jul 2022 • Enzo Tartaglione, Francesca Gennari, Marco Grangetto
In this work we propose DisP, an approach for deep learning models that disentangles the information related to classes we wish to keep private from the data processed by AI.
no code implementations • 26 Apr 2022 • Carlo Alberto Barbano, Enzo Tartaglione, Marco Grangetto
We propose a fully unsupervised debiasing framework, consisting of three steps: first, we exploit the natural preference for learning malignant biases, obtaining a bias-capturing model; then, we perform a pseudo-labelling step to obtain bias labels; finally we employ state-of-the-art supervised debiasing techniques to obtain an unbiased model.
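The three-step pipeline above can be sketched on toy data; this is only an illustrative outline under assumed inputs, not the paper's implementation, and the median threshold and the factor 3 upweighting are arbitrary choices.

```python
import numpy as np

# Step 1 (assumed given): per-sample losses from a "bias-capturing" model.
# Bias-aligned samples are learned easily (low loss); bias-conflicting
# samples are not (high loss).
capture_loss = np.array([0.05, 0.10, 0.08, 1.2, 0.9, 1.5])

# Step 2: pseudo-label each sample as bias-aligned (0) or bias-conflicting (1)
# by thresholding at the median loss.
pseudo_bias = (capture_loss > np.median(capture_loss)).astype(int)

# Step 3: hand the pseudo-labels to a supervised debiasing scheme; as a
# stand-in, upweight bias-conflicting samples so a final model trained with
# these weights cannot rely on the shortcut.
weights = np.where(pseudo_bias == 1, 3.0, 1.0)
weights = weights / weights.sum()            # normalized sample weights
```

In the actual framework, step 3 would plug the pseudo-labels into a state-of-the-art supervised debiasing technique rather than simple reweighting.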
no code implementations • 4 Apr 2022 • Riccardo Renzulli, Enzo Tartaglione, Marco Grangetto
This paper proposes REM, a technique which minimizes the entropy of the parse tree-like structure, improving its explainability.
no code implementations • 24 Feb 2022 • Enzo Tartaglione
Recent advances in deep learning optimization showed that only a subset of the parameters is really necessary to successfully train a model.
no code implementations • 12 Jul 2021 • Enzo Tartaglione, Stéphane Lathuilière, Attilio Fiandrotti, Marco Cagnazzo, Marco Grangetto
We formulate the entropy of a quantized artificial neural network as a differentiable function that can be plugged as a regularization term into the cost function minimized by gradient descent.
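One common way to make such an entropy term differentiable is to replace hard quantization with a soft assignment of each weight to the quantization levels; the sketch below (an assumption about the general construction, not this paper's exact formulation) uses a softmax over negative squared distances and the illustrative levels {-1, 0, 1}.

```python
import numpy as np

def soft_entropy(weights, levels, temperature=10.0):
    """Differentiable surrogate for the entropy of quantized weights.

    Each weight is softly assigned to the quantization levels via a softmax
    over negative squared distances; the entropy (in bits) of the average
    assignment distribution approximates the codeword entropy and can be
    added to the cost function as a regularization term.
    """
    w = weights.ravel()[:, None]                        # (n, 1)
    logits = -temperature * (w - levels[None, :]) ** 2  # (n, n_levels)
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)                   # soft assignments
    q = p.mean(axis=0)                                  # level usage frequencies
    return -(q * np.log2(q + 1e-12)).sum()

levels = np.array([-1.0, 0.0, 1.0])
low = soft_entropy(np.zeros(100), levels)               # one level used
high = soft_entropy(np.concatenate([np.full(50, -1.0),
                                    np.full(50, 1.0)]), levels)
```

When all weights collapse onto a single level the term approaches 0 bits, while an even split across two levels approaches 1 bit, so minimizing it by gradient descent pushes the network toward a cheaper-to-store codeword distribution.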
2 code implementations • CVPR 2021 • Enzo Tartaglione, Carlo Alberto Barbano, Marco Grangetto
Artificial neural networks achieve state-of-the-art performance in an ever-growing number of tasks, and nowadays they are used to solve an incredibly large variety of problems.
Ranked #1 on HairColor/Unbiased on CelebA
no code implementations • 10 Feb 2021 • Daniele Perlo, Enzo Tartaglione, Luca Bertero, Paola Cassoni, Marco Grangetto
Colorectal cancer is a leading cause of cancer death for both men and women.
1 code implementation • 7 Feb 2021 • Enzo Tartaglione, Andrea Bragagnolo, Francesco Odierna, Attilio Fiandrotti, Marco Grangetto
Deep neural networks include millions of learnable parameters, making their deployment over resource-constrained devices problematic.
1 code implementation • 25 Jan 2021 • Carlo Alberto Barbano, Daniele Perlo, Enzo Tartaglione, Attilio Fiandrotti, Luca Bertero, Paola Cassoni, Marco Grangetto
Histopathological characterization of colorectal polyps makes it possible to tailor patients' management and follow-up with the ultimate aim of avoiding or promptly detecting an invasive carcinoma.
Ranked #1 on Colorectal Polyps Characterization on UNITOPATHO
no code implementations • 25 Jan 2021 • Carlo Alberto Barbano, Enzo Tartaglione, Claudio Berzovini, Marco Calandri, Marco Grangetto
Early screening of patients is a critical issue for mounting immediate and fast responses against the spread of COVID-19.
no code implementations • 15 Jan 2021 • Umberto A. Gava, Federico D'Agata, Enzo Tartaglione, Marco Grangetto, Francesca Bertolino, Ambra Santonocito, Edwin Bennink, Mauro Bergui
Methods: Training of the CNN was done on a subset of 100 perfusion datasets, while 15 samples were used for validation.
no code implementations • 16 Nov 2020 • Enzo Tartaglione, Andrea Bragagnolo, Attilio Fiandrotti, Marco Grangetto
LOBSTER (LOss-Based SensiTivity rEgulaRization) is a method for training neural networks with a sparse topology.
no code implementations • 4 Aug 2020 • Enzo Tartaglione, Marco Grangetto
Artificial neural networks achieve state-of-the-art performance in an ever-growing number of tasks; nowadays they are used to solve an incredibly large variety of problems.
1 code implementation • 30 Apr 2020 • Enzo Tartaglione, Andrea Bragagnolo, Marco Grangetto
Recently, a race towards the simplification of deep networks has begun, showing that it is effectively possible to reduce the size of these models with minimal or no performance loss.
9 code implementations • 11 Apr 2020 • Enzo Tartaglione, Carlo Alberto Barbano, Claudio Berzovini, Marco Calandri, Marco Grangetto
The possibility to use widespread and simple chest X-ray (CXR) imaging for early screening of COVID-19 patients is attracting much interest from both the clinical and the AI community.
1 code implementation • 19 Jul 2019 • Enzo Tartaglione, Daniele Perlo, Marco Grangetto
Improving generalization is one of the main challenges for training deep neural networks on classification tasks.
no code implementations • NeurIPS 2018 • Enzo Tartaglione, Skjalg Lepsøy, Attilio Fiandrotti, Gianluca Francini
The ever-increasing number of parameters in deep neural networks poses challenges for memory-limited applications.
no code implementations • 26 Oct 2017 • Carlo Baldassi, Federica Gerace, Hilbert J. Kappen, Carlo Lucibello, Luca Saglietti, Enzo Tartaglione, Riccardo Zecchina
Stochasticity and limited precision of synaptic weights in neural network models are key aspects of both biological and hardware modeling of learning processes.