1 code implementation • 16 Jul 2024 • Yanis Lalou, Théo Gnassounou, Antoine Collas, Antoine de Mathelin, Oleksii Kachaiev, Ambroise Odonnat, Alexandre Gramfort, Thomas Moreau, Rémi Flamary
Unsupervised Domain Adaptation (DA) consists of adapting a model trained on a labeled source domain so that it performs well on an unlabeled target domain affected by a shift in the data distribution.
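To make the setting concrete, here is a minimal sketch of one classic covariate-shift baseline, which reweights source samples by an estimated density ratio before fitting a standard classifier. This is an illustration only, not the paper's benchmark code, and the helper names are ad hoc.

```python
# Covariate-shift baseline: estimate w(x) = p_target(x) / p_source(x)
# with a domain discriminator, then fit a weighted source classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_weights(X_src, X_tgt):
    """Estimate the density ratio via a source-vs-target discriminator."""
    X = np.vstack([X_src, X_tgt])
    d = np.r_[np.zeros(len(X_src)), np.ones(len(X_tgt))]  # 0 = source, 1 = target
    proba = LogisticRegression().fit(X, d).predict_proba(X_src)[:, 1]
    proba = np.clip(proba, 1e-3, 1 - 1e-3)  # avoid division blow-ups
    return proba / (1 - proba)

def fit_reweighted(X_src, y_src, X_tgt):
    """Train on the source domain, reweighted to resemble the target."""
    w = importance_weights(X_src, X_tgt)
    return LogisticRegression().fit(X_src, y_src, sample_weight=w)
```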
no code implementations • 17 Jun 2024 • Guillaume Staerman, Virginie Loison, Thomas Moreau
Physiological signal analysis often involves identifying events crucial to understanding biological dynamics.
1 code implementation • 10 Jun 2024 • Emilia Siviero, Guillaume Staerman, Stephan Clémençon, Thomas Moreau
Many modern spatio-temporal data sets, in sociology, epidemiology or seismology, for example, exhibit self-exciting characteristics, with triggering and clustering behaviors occurring simultaneously, which a suitable space-time Hawkes process can accurately capture.
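For background, a space-time Hawkes process is characterized by a conditional intensity of the general form (the paper's exact parametrization may differ):

$$
\lambda(t, s) = \mu(s) + \sum_{i \,:\, t_i < t} g(t - t_i,\; s - s_i),
$$

where $\mu$ is a background rate, $g$ a nonnegative triggering kernel, and $(t_i, s_i)$ the times and locations of past events; the sum over past events is what produces the self-exciting, clustered behavior.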
1 code implementation • Journal of Neural Engineering 2024 • Sylvain Chevallier, Igor Carrara, Bruno Aristimunha, Pierre Guetschel, Sara Sedlar, Bruna Lopes, Sebastien Velut, Salim Khazem, Thomas Moreau
This study contributes to establishing a rigorous and transparent benchmark for BCI research, offering insights into effective methodologies and highlighting the importance of reproducibility in driving advances in the field.
no code implementations • 18 Mar 2024 • Pierre Guetschel, Thomas Moreau, Michael Tangermann
Motivated by the challenge of seamless cross-dataset transfer in EEG signal processing, this article presents an exploratory study on the use of Joint Embedding Predictive Architectures (JEPAs).
no code implementations • CVPR 2024 • Matthieu Terris, Thomas Moreau, Nelly Pustelnik, Julian Tachella
Plug-and-play algorithms constitute a popular framework for solving inverse imaging problems; they rely on the implicit definition of an image prior via a denoiser.
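Schematically (a generic sketch of the plug-and-play idea, not the paper's specific algorithm), the proximal step of a gradient scheme is replaced by a call to the denoiser:

```python
# Plug-and-play proximal gradient: the proximal operator of the
# (implicit) prior is replaced by an off-the-shelf denoiser.
# `A`, `At` (forward operator and its adjoint) and `denoiser` are
# assumed to be provided by the user.
def pnp_proximal_gradient(y, A, At, denoiser, step=1.0, n_iter=100):
    x = At(y)  # crude initialization by back-projection
    for _ in range(n_iter):
        grad = At(A(x) - y)            # gradient of the data-fidelity term
        x = denoiser(x - step * grad)  # denoiser stands in for a prox
    return x
```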
no code implementations • 30 Nov 2023 • Matthieu Terris, Thomas Moreau
They are typically trained for a specific task, with a supervised loss, to learn a mapping from the observations to the image to be recovered.
no code implementations • 30 Aug 2023 • Louis Rouillard, Alexandre Le Bris, Thomas Moreau, Demian Wassermann
Given observed data and a probabilistic generative model, Bayesian inference searches for the distribution of the model's parameters that could have yielded the data.
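Concretely, for parameters $\theta$ and observed data $x$, the object of interest is the posterior

$$
p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta')\, p(\theta')\, d\theta'},
$$

whose normalizing integral is typically intractable, which motivates variational or simulation-based approximations.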
no code implementations • 24 May 2023 • Zaccharie Ramzi, Pierre Ablin, Gabriel Peyré, Thomas Moreau
Implicit deep learning has recently gained popularity with applications ranging from meta-learning to Deep Equilibrium Networks (DEQs).
2 code implementations • 10 Mar 2023 • Clément Bonet, Benoît Malézieux, Alain Rakotomamonjy, Lucas Drumetz, Thomas Moreau, Matthieu Kowalski, Nicolas Courty
When dealing with electro- or magnetoencephalography (EEG/MEG) records, many supervised prediction tasks are solved by working with covariance matrices to summarize the signals.
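For reference, given a centered multichannel signal $X \in \mathbb{R}^{C \times T}$ with $C$ sensors and $T$ time samples, the usual spatial covariance estimate is

$$
\Sigma = \frac{1}{T - 1} X X^\top,
$$

a symmetric positive definite matrix (when $T > C$), which is why geometry-aware tools on the SPD manifold are natural in this setting.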
no code implementations • 17 Feb 2023 • Mathieu Dagréou, Thomas Moreau, Samuel Vaiter, Pierre Ablin
Bilevel optimization problems, in which one optimization problem is nested inside another, have an increasing number of applications in machine learning.
no code implementations • 10 Oct 2022 • Guillaume Staerman, Cédric Allain, Alexandre Gramfort, Thomas Moreau
Temporal point processes (TPPs) are a natural tool for modeling event-based data.
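For background, a TPP with conditional intensity $\lambda(t)$ observed on $[0, T]$ has log-likelihood

$$
\log \mathcal{L} = \sum_{i} \log \lambda(t_i) - \int_0^T \lambda(t)\, dt,
$$

which is the quantity most estimation procedures for event-based data work with.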
1 code implementation • 29 Jun 2022 • Cédric Rommel, Joseph Paillard, Thomas Moreau, Alexandre Gramfort
Our experiments also show that there is no single best augmentation strategy, as effective augmentations differ from one task to another.
3 code implementations • 27 Jun 2022 • Thomas Moreau, Mathurin Massias, Alexandre Gramfort, Pierre Ablin, Pierre-Antoine Bannier, Benjamin Charlier, Mathieu Dagréou, Tom Dupré La Tour, Ghislain Durif, Cassio F. Dantas, Quentin Klopfenstein, Johan Larsson, En Lai, Tanguy Lefort, Benoît Malézieux, Badr Moufad, Binh T. Nguyen, Alain Rakotomamonjy, Zaccharie Ramzi, Joseph Salmon, Samuel Vaiter
Numerical validation is at the core of machine learning research, as it makes it possible to assess the actual impact of new methods and to confirm the agreement between theory and practice.
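To give a flavor of how such a benchmark is assembled, here is a sketch of a solver following benchopt's documented `BaseSolver` interface, for a hypothetical Lasso benchmark; the objective parameters (`X`, `y`, `lmbd`) and the result format are assumptions that depend on the benchmark definition.

```python
# Sketch of a benchopt solver (hypothetical Lasso benchmark).
# Solvers subclass BaseSolver and implement set_objective/run/get_result.
import numpy as np
from benchopt import BaseSolver

class Solver(BaseSolver):
    name = "ISTA"  # identifier displayed in the benchmark results

    def set_objective(self, X, y, lmbd):
        # Receives whatever the benchmark's Objective passes along.
        self.X, self.y, self.lmbd = X, y, lmbd

    def run(self, n_iter):
        X, y, lmbd = self.X, self.y, self.lmbd
        L = np.linalg.norm(X, ord=2) ** 2  # Lipschitz constant of the gradient
        w = np.zeros(X.shape[1])
        for _ in range(n_iter):
            w -= X.T @ (X @ w - y) / L
            w = np.sign(w) * np.maximum(np.abs(w) - lmbd / L, 0.0)  # soft-threshold
        self.w = w

    def get_result(self):
        return dict(beta=self.w)  # format expected by the benchmark's objective
```

Such a solver is then run and compared against the others with the benchopt CLI (e.g., `benchopt run <benchmark_dir>`).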
no code implementations • 10 Jun 2022 • Louis Rouillard, Thomas Moreau, Demian Wassermann
Given some observed data and a probabilistic generative model, Bayesian inference aims at obtaining the distribution of a model's latent parameters that could have yielded the data.
1 code implementation • 4 Feb 2022 • Cédric Rommel, Thomas Moreau, Alexandre Gramfort
Practitioners can typically enforce a desired invariance on the trained model through the choice of a network architecture, e.g., using convolutions for translations, or using data augmentation.
1 code implementation • 31 Jan 2022 • Mathieu Dagréou, Pierre Ablin, Samuel Vaiter, Thomas Moreau
However, computing the gradient of the value function involves solving a linear system, which makes it difficult to derive unbiased stochastic estimates.
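The difficulty stems from the implicit-gradient formula: for $h(x) = F(x, y^*(x))$ with $y^*(x) = \arg\min_y G(x, y)$,

$$
\nabla h(x) = \nabla_x F(x, y^*) - \nabla^2_{xy} G(x, y^*) \left[\nabla^2_{yy} G(x, y^*)\right]^{-1} \nabla_y F(x, y^*),
$$

and the linear system in the Hessian $\nabla^2_{yy} G$ is precisely what makes unbiased stochastic estimation delicate.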
no code implementations • ICLR 2022 • Cédric Allain, Alexandre Gramfort, Thomas Moreau
We derive a fast and principled expectation-maximization (EM) algorithm to estimate the parameters of this model.
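Schematically, EM alternates the two classical steps (generic form; the model-specific updates are derived in the paper):

$$
Q(\theta \mid \theta^{(t)}) = \mathbb{E}_{z \sim p(\cdot \mid x, \theta^{(t)})}\big[\log p(x, z \mid \theta)\big],
\qquad
\theta^{(t+1)} = \arg\max_\theta \, Q(\theta \mid \theta^{(t)}).
$$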
no code implementations • ICLR 2022 • Cédric Rommel, Thomas Moreau, Joseph Paillard, Alexandre Gramfort
Data augmentation is a key element of deep learning pipelines, as it informs the network during training about transformations of the input data that keep the label unchanged.
1 code implementation • ICLR 2022 • Benoît Malézieux, Thomas Moreau, Matthieu Kowalski
Dictionary learning consists of finding a sparse representation from noisy data and is a common way to encode data-driven prior knowledge on signals.
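The underlying problem is standardly written as

$$
\min_{D \in \mathcal{C},\, z} \; \tfrac{1}{2} \| x - D z \|_2^2 + \lambda \| z \|_1,
$$

where $D$ is the dictionary, constrained to a set $\mathcal{C}$ such as unit-norm atoms, and $z$ the sparse codes.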
2 code implementations • ICLR 2022 • Zaccharie Ramzi, Florian Mannel, Shaojie Bai, Jean-Luc Starck, Philippe Ciuciu, Thomas Moreau
In Deep Equilibrium Models (DEQs), the training is performed as a bi-level problem, and its computational complexity is partially driven by the iterative inversion of a huge Jacobian matrix.
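That Jacobian arises from implicit differentiation of the fixed-point equation $z^* = f_\theta(z^*, x)$:

$$
\frac{\partial z^*}{\partial \theta} = \Big( I - \frac{\partial f_\theta}{\partial z}(z^*, x) \Big)^{-1} \frac{\partial f_\theta}{\partial \theta}(z^*, x),
$$

so every backward pass involves solving a linear system in $I - \partial f_\theta / \partial z$, which is what cheaper approximate inverses aim to avoid.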
1 code implementation • NeurIPS 2021 • Pedro L. C. Rodrigues, Thomas Moreau, Gilles Louppe, Alexandre Gramfort
Inferring the parameters of a stochastic model based on experimental observations is central to the scientific method.
1 code implementation • NeurIPS 2020 • Hamza Cherkaoui, Jeremias Sulam, Thomas Moreau
In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV regularized problems.
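In spirit (a minimal sketch with assumed shapes, not the paper's architecture), unrolling turns each iteration into a differentiable layer whose step size is learned; the reconstruction from soft-thresholded finite differences below is a crude surrogate for the exact TV proximal operator, used here for brevity.

```python
import torch

class UnrolledTV(torch.nn.Module):
    """Unrolled proximal gradient for min_x ||x - y||^2 / 2 + lmbd * TV(x)."""

    def __init__(self, n_layers, lmbd):
        super().__init__()
        self.steps = torch.nn.Parameter(torch.full((n_layers,), 0.1))
        self.lmbd = lmbd

    def forward(self, y):
        x = y.clone()
        for step in self.steps:
            x = x - step * (x - y)  # gradient step on the data-fit term
            # Soft-threshold the finite differences, then reintegrate.
            d = x[..., 1:] - x[..., :-1]
            d = torch.sign(d) * torch.clamp(d.abs() - step * self.lmbd, min=0)
            x = torch.cat([x[..., :1], x[..., :1] + torch.cumsum(d, dim=-1)], dim=-1)
        return x
```

Training then fits the per-layer step sizes end-to-end on pairs of noisy and clean signals.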
no code implementations • NeurIPS 2020 • Marine Le Morvan, Julie Josse, Thomas Moreau, Erwan Scornet, Gaël Varoquaux
We provide an upper bound on the Bayes risk of NeuMiss networks, and show that they have good predictive accuracy with both a number of parameters and a computational complexity independent of the number of missing data patterns.
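The name refers to the Neumann series, which approximates a matrix inverse by truncation,

$$
(I - W)^{-1} = \sum_{k=0}^{\infty} W^k \approx \sum_{k=0}^{K} W^k,
$$

with each truncation order corresponding to one shared-weight network block, which is roughly why the parameter count does not grow with the number of missingness patterns.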
1 code implementation • 25 Nov 2020 • Clément Lalanne, Maxence Rateaux, Laurent Oudre, Matthieu Robert, Thomas Moreau
The analysis of nystagmus waveforms from eye-tracking records is crucial for the clinical interpretation of this pathological movement.
no code implementations • 19 Oct 2020 • Hamza Cherkaoui, Jeremias Sulam, Thomas Moreau
In this paper, we accelerate such iterative algorithms by unfolding proximal gradient descent solvers in order to learn their parameters for 1D TV regularized problems.
no code implementations • 3 Jul 2020 • Marine Le Morvan, Julie Josse, Thomas Moreau, Erwan Scornet, Gaël Varoquaux
We provide an upper bound on the Bayes risk of NeuMiss networks, and show that they have good predictive accuracy with both a number of parameters and a computational complexity independent of the number of missing data patterns.
no code implementations • ICML 2020 • Pierre Ablin, Gabriel Peyré, Thomas Moreau
In most cases, the minimum has no closed form, and an approximation is obtained via an iterative algorithm.
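The object of study is a function defined as a minimum, $\ell(x) = \min_z f(x, z)$, for which the envelope (Danskin) theorem gives the gradient at the exact minimizer:

$$
\nabla \ell(x) = \nabla_x f\big(x, z^*(x)\big);
$$

the question is then how fast differentiating through an approximate minimizer converges to this quantity.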
1 code implementation • NeurIPS 2019 • Pierre Ablin, Thomas Moreau, Mathurin Massias, Alexandre Gramfort
We demonstrate that for a large class of unfolded algorithms, if the algorithm converges to the solution of the Lasso, its last layers correspond to ISTA with learned step sizes.
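For reference, the ISTA iteration for the Lasso with design matrix $A$, regularization $\lambda$ and gradient Lipschitz constant $L$ reads

$$
z_{k+1} = \mathrm{ST}_{\lambda / L}\Big( z_k - \tfrac{1}{L} A^\top (A z_k - x) \Big),
$$

where $\mathrm{ST}$ denotes soft-thresholding; the result says that the last layers of a converging learned solver reduce to this update with adapted step sizes.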
1 code implementation • 26 Jan 2019 • Thomas Moreau, Alexandre Gramfort
This algorithm can be used to distribute the computation across a number of workers that scales linearly with the size of the encoded signal.
1 code implementation • ICML 2018 • Thomas Moreau, Laurent Oudre, Nicolas Vayatis
In this paper, we introduce DICOD, a convolutional sparse coding algorithm which builds shift invariant representations for long signals.
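The convolutional sparse coding problem that DICOD addresses is standardly written as

$$
\min_{z} \; \tfrac{1}{2} \Big\| x - \sum_{k=1}^{K} d_k * z_k \Big\|_2^2 + \lambda \sum_{k=1}^{K} \| z_k \|_1,
$$

where the $d_k$ are convolutional atoms and the $z_k$ sparse activation signals; the convolution is what makes the representation shift invariant.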
1 code implementation • NeurIPS 2018 • Tom Dupré La Tour, Thomas Moreau, Mainak Jas, Alexandre Gramfort
Frequency-specific patterns of neural activity are traditionally interpreted as sustained rhythmic oscillations, and related to cognitive mechanisms such as attention, high level visual processing or motor control.
no code implementations • ICLR 2018 • Thomas Moreau, Julien Audiffren
One of the main challenges of deep learning methods is the choice of an appropriate training strategy.
1 code implementation • 2 Jun 2017 • Thomas Moreau, Joan Bruna
Sparse coding is a core building block in many data analysis and machine learning pipelines.
no code implementations • 29 May 2017 • Thomas Moreau, Laurent Oudre, Nicolas Vayatis
In this paper, we introduce DICOD, a convolutional sparse coding algorithm which builds shift invariant representations for long signals.
1 code implementation • 14 Nov 2016 • Thomas Moreau, Julien Audiffren
One of the main challenges of deep learning methods is the choice of an appropriate training strategy.
1 code implementation • 1 Sep 2016 • Thomas Moreau, Joan Bruna
Sparse coding is a core building block in many data analysis and machine learning pipelines.