no code implementations • 27 Nov 2023 • Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
However, such techniques suffer from a lack of adaptability to the target devices, as hardware typically only supports specific bit widths.
no code implementations • 17 Nov 2023 • Rémi Ouazan Reboul, Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
To solve this problem, a popular solution is DNN pruning, and more specifically structured pruning, where coherent computational blocks (e.g. channels for convolutional networks) are removed. As an exhaustive search of the space of pruned sub-models is intractable in practice, channels are typically removed iteratively based on an importance estimation heuristic.
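The iterative scheme described above can be sketched in a few lines. This is a minimal illustration, assuming one common importance heuristic (the L2 norm of each output channel's filter); the papers listed here study more refined estimators.

```python
import numpy as np

def channel_importance(weights):
    """Toy importance heuristic: L2 norm of each output channel's filter.

    `weights` has shape (out_channels, in_channels, kh, kw), as in a
    convolutional layer.
    """
    return np.linalg.norm(weights.reshape(weights.shape[0], -1), axis=1)

def prune_channels(weights, keep_ratio=0.5):
    """Keep the top `keep_ratio` fraction of channels by importance."""
    scores = channel_importance(weights)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    kept = np.sort(np.argsort(scores)[::-1][:n_keep])  # kept indices, in order
    return weights[kept], kept
```

In practice this step is applied iteratively, re-estimating importance after each removal and fine-tuning in between.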
no code implementations • 29 Sep 2023 • Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
However, this has increased the memory footprint to the point where simply loading a model on commodity devices such as mobile phones can be challenging.
no code implementations • 11 Sep 2023 • Eden Belouadah, Arnaud Dapogny, Kevin Bailly
The main challenge of incremental learning is catastrophic forgetting, the inability of neural networks to retain past knowledge when learning new tasks.
no code implementations • 15 Aug 2023 • Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
GPTQ essentially consists of learning the rounding operation using a small calibration set.
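The idea of learning the rounding operation can be illustrated with a toy stand-in: for each weight, choose round-up versus round-down so as to minimize the layer's output error on calibration inputs, greedily. This is only a sketch of the principle; real GPTQ uses second-order (Hessian-based) information and error compensation, which this simplification omits.

```python
import numpy as np

def quantize_with_calibration(w, X, scale):
    """Choose round-up vs round-down per weight to reduce the layer's
    output error on calibration inputs X (shape: n_samples x n_features),
    instead of rounding to nearest independently.
    """
    w = np.asarray(w, dtype=float)
    w_q = np.empty_like(w)
    err = np.zeros(X.shape[0])  # running output error X @ (w_q - w)
    for j in range(w.size):
        lo = np.floor(w[j] / scale) * scale
        hi = lo + scale
        e_lo = err + X[:, j] * (lo - w[j])
        e_hi = err + X[:, j] * (hi - w[j])
        if np.sum(e_lo**2) <= np.sum(e_hi**2):
            w_q[j], err = lo, e_lo
        else:
            w_q[j], err = hi, e_hi
    return w_q
```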
no code implementations • 10 Aug 2023 • Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
However, jointly optimizing the exponent parameter and the weight values remains a challenging and novel problem, which could not be solved with previous post-training optimization techniques that only learn to round weight values up or down in order to preserve the predictive function.
no code implementations • 9 Aug 2023 • Edouard Yvinec, Arnaud Dapogny, Kevin Bailly, Xavier Fischer
In this work, we propose to investigate DNN layer importance, i.e. to estimate the sensitivity of the accuracy w.r.t.
no code implementations • 30 Jun 2023 • Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
We show experimentally that our approach significantly improves the performance of ternary quantization across a variety of scenarios in DFQ, PTQ and QAT, and provides strong insights to pave the way for future research in deep neural network quantization.
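For context, a baseline ternary quantizer maps weights to three values {-s, 0, +s}. The sketch below uses one standard recipe (zero out the smallest-magnitude weights, then fit the scale s by least squares on the survivors); it is illustrative only, and the paper's contribution lies in how the threshold and scale are chosen and trained.

```python
import numpy as np

def ternarize(w, sparsity=0.5):
    """Map weights to {-s, 0, +s}: zero the smallest-magnitude fraction
    (`sparsity`), keep signs elsewhere, and fit the scale s to minimize
    L2 reconstruction error on the surviving weights.
    """
    w = np.asarray(w, dtype=float)
    thresh = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) > thresh
    # least-squares scale: argmin_s sum (w - s*sign(w))^2 over kept weights
    s = np.abs(w[mask]).mean() if mask.any() else 0.0
    return s * np.sign(w) * mask, s
```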
no code implementations • 21 Mar 2023 • Gauthier Tallec, Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
The rising performance of deep neural networks is often empirically attributed to an increase in the available computational power, which allows complex models to be trained upon large amounts of annotated data.
no code implementations • 6 Mar 2023 • Gauthier Tallec, Arnaud Dapogny, Kevin Bailly
However, applying label smoothing as is may aggravate a pre-existing under-confidence issue caused by class imbalance and degrade performance.
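The standard (uniform) label smoothing that the snippet warns about is simple to state: replace the one-hot target with a slightly softened distribution. A minimal sketch, for reference; the uniform eps/n_classes spread is exactly what may hurt under-represented classes.

```python
import numpy as np

def smooth_labels(y, n_classes, eps=0.1):
    """Standard label smoothing: (1 - eps) on the true class,
    eps / n_classes spread uniformly over all classes."""
    one_hot = np.eye(n_classes)[y]
    return one_hot * (1.0 - eps) + eps / n_classes
```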
no code implementations • 24 Jan 2023 • Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kevin Bailly
In this paper, we identify the uniformity of the quantization operator as a limitation of existing approaches, and propose a data-free non-uniform method.
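To make the uniform/non-uniform distinction concrete, here is a uniform quantizer next to one classic non-uniform alternative, a logarithmic (power-of-two) quantizer, which spends more resolution near zero where most weights lie. This is a generic illustration, not the paper's specific method.

```python
import numpy as np

def uniform_quantize(x, n_levels):
    """Uniform grid over [min, max]: equal spacing everywhere."""
    lo, hi = x.min(), x.max()
    step = (hi - lo) / (n_levels - 1)
    return lo + np.round((x - lo) / step) * step

def log_quantize(x, n_levels):
    """Non-uniform: quantize magnitudes to powers of two, keep the sign.
    Finer near zero, coarser for large magnitudes."""
    sign = np.sign(x)
    mag = np.abs(x)
    e = np.clip(np.round(np.log2(np.maximum(mag, 1e-12))), -n_levels, 0)
    return sign * 2.0 ** e
```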
no code implementations • 6 Aug 2022 • Gauthier Tallec, Jules Bonnard, Arnaud Dapogny, Kévin Bailly
From a learning point of view, we use an uncertainty-weighted loss to model the difference in stochasticity between the three tasks' annotations.
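Uncertainty weighting of multi-task losses is commonly done in the spirit of Kendall et al.: each task gets a learned log-variance that automatically down-weights noisier annotations. The sketch below shows the combination rule only; the paper's exact formulation may differ, and in training the log-variances would be learnable parameters.

```python
import numpy as np

def uncertainty_weighted_loss(losses, log_vars):
    """Combine per-task losses with learned log-variances s_i:
        L = sum_i exp(-s_i) * L_i + s_i
    A task with noisy labels learns a larger s_i and is down-weighted,
    while the +s_i term prevents s_i from growing unboundedly."""
    losses = np.asarray(losses, dtype=float)
    s = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-s) * losses + s))
```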
no code implementations • 8 Jul 2022 • Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kevin Bailly
The leap in performance in state-of-the-art computer vision methods is attributed to the development of deep neural networks.
no code implementations • 28 Mar 2022 • Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kevin Bailly
Computationally expensive neural networks are ubiquitous in computer vision, and solutions for efficient inference have drawn growing attention in the machine learning community.
1 code implementation • 28 Mar 2022 • Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
Batch-Normalization (BN) layers have become fundamental components in the evermore complex deep neural network architectures.
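At inference time, a BN layer is an affine per-channel transform and can be folded into the preceding convolution, a standard manipulation in the efficiency literature this listing belongs to. A minimal sketch of the folding identity (generic, not this paper's specific contribution):

```python
import numpy as np

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm into the preceding conv/linear layer so that
    layer(x, w') + b' == BN(layer(x, w) + b) at inference.
    w has shape (out_channels, ...); BN statistics are per channel."""
    scale = gamma / np.sqrt(var + eps)               # per-channel multiplier
    w_f = w * scale.reshape(-1, *([1] * (w.ndim - 1)))
    b_f = (b - mean) * scale + beta
    return w_f, b_f
```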
no code implementations • 24 Mar 2022 • Jules Bonnard, Arnaud Dapogny, Ferdinand Dhombres, Kévin Bailly
Facial Expression Recognition (FER) is crucial in many research domains because it enables machines to better understand human behaviours.
Facial Expression Recognition (FER)
no code implementations • 23 Mar 2022 • Gauthier Tallec, Edouard Yvinec, Arnaud Dapogny, Kevin Bailly
Action Unit (AU) Detection is the branch of affective computing that aims at recognizing unitary facial muscular movements.
no code implementations • 1 Feb 2022 • Gauthier Tallec, Arnaud Dapogny, Kevin Bailly
MONET uses a differentiable order selection to jointly learn task-wise modules with their optimal chaining order.
no code implementations • NeurIPS 2021 • Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kevin Bailly
Deep Neural Networks (DNNs) are ubiquitous in today's computer vision landscape, despite involving considerable computational costs.
no code implementations • 30 Sep 2021 • Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kevin Bailly
Pruning Deep Neural Networks (DNNs) is a prominent field of study with the goal of accelerating inference runtime.
1 code implementation • 29 Jun 2021 • Arthur Douillard, Yifu Chen, Arnaud Dapogny, Matthieu Cord
classes predicted by the old model to deal with background shift and avoid catastrophic forgetting of the old classes.
Ranked #6 on Overlapped 15-1 on PASCAL VOC 2012
Class Incremental Learning • Continual Semantic Segmentation +5
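The snippet above refers to using the old model's predictions on background pixels to counter background shift in continual segmentation. The sketch below shows one simplified version of this idea, pseudo-labeling confident background pixels with the frozen old model's prediction; it is not the paper's exact mechanism.

```python
import numpy as np

def pseudo_label_background(labels, old_probs, bg_class=0, conf_thresh=0.5):
    """Relabel confident 'background' pixels with the old model's class,
    since pixels labelled background at the new step may belong to old
    classes (background shift).

    labels:    (H, W) int array, new-step ground truth
    old_probs: (H, W, C_old) softmax output of the frozen old model
    """
    old_pred = old_probs.argmax(-1)
    old_conf = old_probs.max(-1)
    out = labels.copy()
    replace = (labels == bg_class) & (old_conf > conf_thresh)
    out[replace] = old_pred[replace]
    return out
```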
no code implementations • 31 May 2021 • Edouard Yvinec, Arnaud Dapogny, Matthieu Cord, Kevin Bailly
Deep Neural Networks (DNNs) are ubiquitous in today's computer vision landscape, despite involving considerable computational costs.
1 code implementation • CVPR 2021 • Arthur Douillard, Yifu Chen, Arnaud Dapogny, Matthieu Cord
classes predicted by the old model to deal with background shift and avoid catastrophic forgetting of the old classes.
Ranked #1 on Domain 11-5 on Cityscapes val
Class Incremental Learning • Continual Semantic Segmentation +16
no code implementations • 15 Oct 2020 • Estephe Arnaud, Arnaud Dapogny, Kevin Bailly
Thus, the exogenous information is used twice, in a throwaway fashion: first as a conditioning variable for the target task, and second to create invariance within the endogenous representation.
Facial Expression Recognition (FER)
no code implementations • 15 Apr 2020 • Edouard Yvinec, Arnaud Dapogny, Kévin Bailly
In this paper, we introduce a deep, end-to-end trainable ensemble of heatmap-based weak predictors for 2D/3D gaze estimation.
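Heatmap-based predictors typically decode a coordinate from a heatmap with a differentiable soft-argmax, so the whole ensemble stays end-to-end trainable. A generic sketch of that decoding step (an assumption about the pipeline, not a claim about this paper's exact architecture):

```python
import numpy as np

def soft_argmax_2d(heatmap):
    """Differentiable decoding: softmax over all pixels, then the
    probability-weighted average of pixel coordinates. An ensemble of
    weak predictors would combine several such estimates."""
    h, w = heatmap.shape
    p = np.exp(heatmap - heatmap.max())
    p /= p.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((p * xs).sum()), float((p * ys).sum())
```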
no code implementations • 14 Apr 2020 • Arnaud Dapogny, Kévin Bailly, Matthieu Cord
Head pose estimation and face alignment constitute a backbone preprocessing for many applications relying on face analysis.
no code implementations • 21 Oct 2019 • Estephe Arnaud, Arnaud Dapogny, Kevin Bailly
Face alignment consists of aligning a shape model on a face image.
no code implementations • 7 Jul 2019 • Estephe Arnaud, Arnaud Dapogny, Kevin Bailly
Face alignment consists of aligning a shape model on a face in an image.
no code implementations • 6 May 2019 • Yifu Chen, Arnaud Dapogny, Matthieu Cord
As a result, the predictions output by such networks usually struggle to accurately capture object boundaries and exhibit holes inside the objects.
no code implementations • 6 May 2019 • Arnaud Dapogny, Matthieu Cord, Patrick Perez
Image completion is the problem of generating whole images from fragments only.
no code implementations • ICCV 2019 • Arnaud Dapogny, Kévin Bailly, Matthieu Cord
Face Alignment is an active computer vision domain that consists of localizing a number of facial landmarks that vary across datasets.
Ranked #22 on Face Alignment on WFLW
no code implementations • 5 Mar 2017 • Arnaud Dapogny, Kévin Bailly, Séverine Dubuisson
GNF appears as an ideal regressor for face alignment, as it combines differentiability, high expressivity and fast evaluation runtime.
no code implementations • 21 Jul 2016 • Arnaud Dapogny, Kévin Bailly, Séverine Dubuisson
Furthermore, labelling expressions is a time-consuming process that is prone to subjectivity, thus the variability may not be fully covered by the training data.
no code implementations • 21 Jul 2016 • Arnaud Dapogny, Kévin Bailly, Séverine Dubuisson
As such, our approach appears as a natural extension of Random Forests for learning spatio-temporal patterns, potentially from multiple viewpoints.
Facial Expression Recognition (FER) +1
no code implementations • ICCV 2015 • Arnaud Dapogny, Kevin Bailly, Severine Dubuisson
Facial expression can be seen as the dynamic variation of one's appearance over time.
Facial Expression Recognition (FER)