1 code implementation • 16 Jul 2024 • Yanis Lalou, Théo Gnassounou, Antoine Collas, Antoine de Mathelin, Oleksii Kachaiev, Ambroise Odonnat, Alexandre Gramfort, Thomas Moreau, Rémi Flamary

Unsupervised Domain Adaptation (DA) consists of adapting a model trained on a labeled source domain to perform well on an unlabeled target domain with some data distribution shift.
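
The covariate-shift setting can be made concrete with a deliberately simple numpy sketch: a crude first-order "adaptation" that matches the mean and standard deviation of the source features to the target's. This is an illustrative baseline only, not one of the DA methods studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Source and target domains: same feature space, shifted/scaled distributions.
X_src = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
X_tgt = rng.normal(loc=2.0, scale=3.0, size=(500, 3))

def align_moments(X_source, X_target):
    """Re-scale source features so their mean/std match the target's.

    A first-order correction for covariate shift; real DA methods
    (e.g. optimal transport or subspace alignment) go well beyond this.
    """
    mu_s, sd_s = X_source.mean(axis=0), X_source.std(axis=0)
    mu_t, sd_t = X_target.mean(axis=0), X_target.std(axis=0)
    return (X_source - mu_s) / sd_s * sd_t + mu_t

X_aligned = align_moments(X_src, X_tgt)
```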

no code implementations • 4 Jul 2024 • Apolline Mellot, Antoine Collas, Sylvain Chevallier, Alexandre Gramfort, Denis A. Engemann

This variability can induce distribution shifts in the data $X$ and in the biomedical variables of interest $y$, thus limiting the application of supervised machine learning (ML) algorithms.

1 code implementation • 11 Apr 2024 • Julia Linhart, Gabriel Victorino Cardoso, Alexandre Gramfort, Sylvain Le Corff, Pedro L. C. Rodrigues

Determining which parameters of a non-linear model best describe a set of experimental data is a fundamental problem in science and it has gained much traction lately with the rise of complex large-scale simulators.

no code implementations • 7 Mar 2024 • Apolline Mellot, Antoine Collas, Sylvain Chevallier, Denis Engemann, Alexandre Gramfort

Combining electroencephalogram (EEG) datasets for supervised machine learning (ML) is challenging due to session, subject, and device variability.

no code implementations • 24 Jan 2024 • Antoine Collas, Rémi Flamary, Alexandre Gramfort

This paper introduces a novel domain adaptation technique for time series data, called Mixing model Stiefel Adaptation (MSA), specifically addressing the challenge of limited labeled signals in the target dataset.

no code implementations • 1 Dec 2023 • Ambroise Heurtebise, Pierre Ablin, Alexandre Gramfort

Linear Independent Component Analysis (ICA) is a blind source separation technique that has been used in various domains to identify independent latent sources from observed signals.
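
As a hedged illustration of the linear ICA principle (not one of the estimators studied in the paper): after whitening, separation reduces to finding the rotation that maximizes non-Gaussianity, here measured by summed squared excess kurtosis on a toy two-source mixture.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
# Two independent non-Gaussian sources: uniform noise and a binary signal.
S = np.column_stack([rng.uniform(-1, 1, n), np.sign(rng.normal(size=n))])
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # unknown mixing matrix
X = S @ A.T                               # observed signals

# Step 1: whiten the observations (zero mean, identity covariance).
Xc = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(Xc.T))
Z = Xc @ E @ np.diag(d ** -0.5) @ E.T

def kurt(u):
    """Excess kurtosis, a simple non-Gaussianity contrast."""
    return np.mean(u ** 4) - 3 * np.mean(u ** 2) ** 2

def rot(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta), np.cos(theta)]])

# Step 2: grid-search the rotation maximizing the contrast
# (the contrast is pi/2-periodic for two components).
def contrast(theta):
    Y = Z @ rot(theta)
    return kurt(Y[:, 0]) ** 2 + kurt(Y[:, 1]) ** 2

best = max(np.linspace(0, np.pi / 2, 1000), key=contrast)
S_hat = Z @ rot(best)   # estimated sources, up to sign and permutation
```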

no code implementations • 11 Sep 2023 • Russell A. Poldrack, Christopher J. Markiewicz, Stefan Appelhoff, Yoni K. Ashar, Tibor Auer, Sylvain Baillet, Shashank Bansal, Leandro Beltrachini, Christian G. Benar, Giacomo Bertazzoli, Suyash Bhogawar, Ross W. Blair, Marta Bortoletto, Mathieu Boudreau, Teon L. Brooks, Vince D. Calhoun, Filippo Maria Castelli, Patricia Clement, Alexander L Cohen, Julien Cohen-Adad, Sasha D'Ambrosio, Gilles de Hollander, María de la iglesia-Vayá, Alejandro de la Vega, Arnaud Delorme, Orrin Devinsky, Dejan Draschkow, Eugene Paul Duff, Elizabeth Dupre, Eric Earl, Oscar Esteban, Franklin W. Feingold, Guillaume Flandin, anthony galassi, Giuseppe Gallitto, Melanie Ganz, Rémi Gau, James Gholam, Satrajit S. Ghosh, Alessio Giacomel, Ashley G Gillman, Padraig Gleeson, Alexandre Gramfort, Samuel Guay, Giacomo Guidali, Yaroslav O. Halchenko, Daniel A. Handwerker, Nell Hardcastle, Peer Herholz, Dora Hermes, Christopher J. Honey, Robert B. Innis, Horea-Ioan Ioanas, Andrew Jahn, Agah Karakuzu, David B. Keator, Gregory Kiar, Balint Kincses, Angela R. Laird, Jonathan C. Lau, Alberto Lazari, Jon Haitz Legarreta, Adam Li, Xiangrui Li, Bradley C. Love, Hanzhang Lu, Camille Maumet, Giacomo Mazzamuto, Steven L. Meisler, Mark Mikkelsen, Henk Mutsaerts, Thomas E. Nichols, Aki Nikolaidis, Gustav Nilsonne, Guiomar Niso, Martin Norgaard, Thomas W Okell, Robert Oostenveld, Eduard Ort, Patrick J. Park, Mateusz Pawlik, Cyril R. Pernet, Franco Pestilli, Jan Petr, Christophe Phillips, Jean-Baptiste Poline, Luca Pollonini, Pradeep Reddy Raamana, Petra Ritter, Gaia Rizzo, Kay A. Robbins, Alexander P. Rockhill, Christine Rogers, Ariel Rokem, Chris Rorden, Alexandre Routier, Jose Manuel Saborit-Torres, Taylor Salo, Michael Schirner, Robert E. Smith, Tamas Spisak, Julia Sprenger, Nicole C. Swann, Martin Szinte, Sylvain Takerkart, Bertrand Thirion, Adam G. Thomas, Sajjad Torabian, Gael Varoquaux, Bradley Voytek, Julius Welzel, Martin Wilson, Tal Yarkoni, Krzysztof J. Gorgolewski

The Brain Imaging Data Structure (BIDS) is a community-driven standard for the organization of data and metadata from a growing range of neuroscience modalities.

no code implementations • 28 Jul 2023 • Bruno Aristimunha, Raphael Y. de Camargo, Walter H. Lopez Pinaya, Sylvain Chevallier, Alexandre Gramfort, Cedric Rommel

While transfer learning is a promising technique to address this challenge, it assumes that the transferable data domains and tasks are known, which is not the case in this setting.

1 code implementation • NeurIPS 2023 • Julia Linhart, Alexandre Gramfort, Pedro L. C. Rodrigues

Building upon the well-known classifier two-sample test (C2ST), we introduce L-C2ST, a new method that allows for a local evaluation of the posterior estimator at any given observation.
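
A minimal sketch of the underlying C2ST idea, assuming scikit-learn is available: train a classifier to distinguish two samples; cross-validated accuracy near 0.5 means the samples are indistinguishable. (L-C2ST itself localizes the test at a given observation, which this global version does not.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def c2st(P, Q):
    """Global classifier two-sample test statistic: cross-validated
    accuracy of a classifier trained to tell P-samples from Q-samples."""
    X = np.vstack([P, Q])
    y = np.r_[np.zeros(len(P)), np.ones(len(Q))]
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5).mean()

# Same distribution: accuracy should hover around chance level (0.5).
same = c2st(rng.normal(size=(500, 2)), rng.normal(size=(500, 2)))
# Mean-shifted distribution: the classifier can tell the samples apart.
shifted = c2st(rng.normal(size=(500, 2)), rng.normal(loc=1.5, size=(500, 2)))
```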

1 code implementation • 30 May 2023 • Théo Gnassounou, Rémi Flamary, Alexandre Gramfort

In many machine learning applications on signals and biomedical data, especially electroencephalogram (EEG), one major challenge is the variability of the data across subjects, sessions, and hardware devices.

1 code implementation • 23 Jan 2023 • Omar Chehab, Alexandre Gramfort, Aapo Hyvarinen

Nevertheless, we soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to simply choosing the noise distribution equal to the data distribution.

no code implementations • 17 Nov 2022 • Julia Linhart, Alexandre Gramfort, Pedro L. C. Rodrigues

Building on the recent trend of new deep generative models known as Normalizing Flows (NF), simulation-based inference (SBI) algorithms can now efficiently accommodate arbitrarily complex and high-dimensional data distributions.

no code implementations • 10 Oct 2022 • Guillaume Staerman, Cédric Allain, Alexandre Gramfort, Thomas Moreau

Temporal point processes (TPP) are a natural tool for modeling event-based data.
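
The simplest TPP, a homogeneous Poisson process, can be simulated and fit in a few lines of numpy (for illustration only; the paper concerns far richer models):

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous Poisson process with constant intensity lam:
# inter-event times are i.i.d. Exponential(lam).
lam, horizon = 2.0, 1000.0
gaps = rng.exponential(scale=1.0 / lam, size=int(3 * lam * horizon))
times = np.cumsum(gaps)
times = times[times < horizon]        # event times observed in [0, horizon)

# Maximum-likelihood estimate of the intensity: event count / observed time.
lam_hat = len(times) / horizon
```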

1 code implementation • 29 Jun 2022 • Cédric Rommel, Joseph Paillard, Thomas Moreau, Alexandre Gramfort

Our experiments also show that there is no single best augmentation strategy, as effective augmentations differ from one task to another.

3 code implementations • 27 Jun 2022 • Thomas Moreau, Mathurin Massias, Alexandre Gramfort, Pierre Ablin, Pierre-Antoine Bannier, Benjamin Charlier, Mathieu Dagréou, Tom Dupré La Tour, Ghislain Durif, Cassio F. Dantas, Quentin Klopfenstein, Johan Larsson, En Lai, Tanguy Lefort, Benoit Malézieux, Badr Moufad, Binh T. Nguyen, Alain Rakotomamonjy, Zaccharie Ramzi, Joseph Salmon, Samuel Vaiter

Numerical validation is at the core of machine learning research, as it allows researchers to assess the actual impact of new methods and to confirm the agreement between theory and practice.

no code implementations • 3 Jun 2022 • Juliette Millet, Charlotte Caucheteux, Pierre Orhan, Yves Boubenec, Alexandre Gramfort, Ewan Dunbar, Christophe Pallier, Jean-Remi King

These elements, resulting from the largest neuroimaging benchmark to date, show how self-supervised learning can account for a rich organization of speech processing in the brain, and thus delineate a path to identify the laws of language acquisition which shape the human brain.

1 code implementation • 11 Mar 2022 • Hicham Janati, Marco Cuturi, Alexandre Gramfort

These complex datasets, describing dynamics with both time and spatial components, pose new challenges for data analysis.

1 code implementation • 2 Mar 2022 • Omar Chehab, Alexandre Gramfort, Aapo Hyvarinen

Learning a parametric model of a data distribution is a well-known statistical problem that has seen renewed interest as it is brought to scale in deep learning.

1 code implementation • 14 Feb 2022 • Xiaoxi Wei, A. Aldo Faisal, Moritz Grosse-Wentrup, Alexandre Gramfort, Sylvain Chevallier, Vinay Jayaram, Camille Jeunet, Stylianos Bakas, Siegfried Ludwig, Konstantinos Barmpas, Mehdi Bahri, Yannis Panagakis, Nikolaos Laskaris, Dimitrios A. Adamos, Stefanos Zafeiriou, William C. Duong, Stephen M. Gordon, Vernon J. Lawhern, Maciej Śliwowski, Vincent Rouanne, Piotr Tempczyk

Task 2 is centred on Brain-Computer Interfacing (BCI), addressing motor imagery decoding across both subjects and data sets.

1 code implementation • 4 Feb 2022 • Cédric Rommel, Thomas Moreau, Alexandre Gramfort

Practitioners can typically enforce a desired invariance on the trained model through the choice of a network architecture, e.g., using convolutions for translations, or using data augmentation.

no code implementations • ICLR 2022 • Cédric Allain, Alexandre Gramfort, Thomas Moreau

We derive a fast and principled expectation-maximization (EM) algorithm to estimate the parameters of this model.
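
The EM recipe mentioned here can be illustrated on a textbook case, a two-component 1-D Gaussian mixture, rather than the paper's point-process model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Observations from a two-component 1-D Gaussian mixture.
x = np.r_[rng.normal(0.0, 1.0, 400), rng.normal(5.0, 1.0, 600)]

mu = np.array([1.0, 4.0])     # rough initial guesses for the means
pi = np.array([0.5, 0.5])     # initial mixture weights
for _ in range(200):
    # E-step: responsibility of each component for each point
    # (unit variances fixed for brevity; the Gaussian constant cancels).
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate means and weights from the responsibilities.
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    pi = nk / len(x)
```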

no code implementations • 28 Nov 2021 • Charlotte Caucheteux, Alexandre Gramfort, Jean-Remi King

Predictive coding theory offers a potential explanation to this discrepancy: while deep language algorithms are optimized to predict adjacent words, the human brain would be tuned to make long-range and hierarchical predictions.

no code implementations • 15 Nov 2021 • Maëliss Jallais, Pedro Luiz Coelho Rodrigues, Alexandre Gramfort, Demian Wassermann

Solving the problem of relating the dMRI signal with cytoarchitectural characteristics calls for the definition of a mathematical model that describes brain tissue via a handful of physiologically-relevant parameters and an algorithm for inverting the model.

1 code implementation • 4 Nov 2021 • Kenan Šehić, Alexandre Gramfort, Joseph Salmon, Luigi Nardi

While Weighted Lasso sparse regression has appealing statistical guarantees that would entail a major real-world impact in finance, genomics, and brain imaging applications, it is typically scarcely adopted due to its complex high-dimensional search space composed of thousands of hyperparameters.

1 code implementation • NeurIPS 2021 • Hugo Richard, Pierre Ablin, Bertrand Thirion, Alexandre Gramfort, Aapo Hyvärinen

While ShICA-J is based on second-order statistics, we further propose to leverage non-Gaussianity of the components using a maximum-likelihood method, ShICA-ML, that is both more accurate and more costly.

no code implementations • 12 Oct 2021 • Marc-Andre Schulz, Bertrand Thirion, Alexandre Gramfort, Gaël Varoquaux, Danilo Bzdok

High-quality data accumulation is now becoming ubiquitous in the health domain.

no code implementations • Findings (EMNLP) 2021 • Charlotte Caucheteux, Alexandre Gramfort, Jean-Rémi King

A popular approach to decompose the neural bases of language consists in correlating, across individuals, the brain responses to different stimuli (e.g., regular speech versus scrambled words, sentences, or paragraphs).

no code implementations • ICLR 2022 • Cédric Rommel, Thomas Moreau, Joseph Paillard, Alexandre Gramfort

Data augmentation is a key element of deep learning pipelines, as it informs the network during training about transformations of the input data that keep the label unchanged.

1 code implementation • 27 May 2021 • Hubert Banville, Sean U. N. Wood, Chris Aimone, Denis-Alexander Engemann, Alexandre Gramfort

Building machine learning models using EEG recorded outside of the laboratory setting requires methods robust to noisy data and randomly missing channels.

1 code implementation • 4 May 2021 • Quentin Bertrand, Quentin Klopfenstein, Mathurin Massias, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon

Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques.
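
A toy version of this bilevel view, with grid search as the zero-order outer solver and closed-form ridge regression as the inner problem (illustrative only; the paper's contribution is a first-order alternative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Inner problem: fit ridge weights for a fixed lambda.
# Outer problem: pick lambda minimizing the validation error.
n, p = 100, 30
X = rng.normal(size=(n, p))
w_true = np.r_[np.ones(5), np.zeros(p - 5)]
y = X @ w_true + 0.5 * rng.normal(size=n)
Xtr, Xval, ytr, yval = X[:70], X[70:], y[:70], y[70:]

def inner(lam):
    """Closed-form ridge solution for a given regularization strength."""
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(p), Xtr.T @ ytr)

# Zero-order outer loop: evaluate the validation loss on a grid.
lams = np.logspace(-3, 3, 25)
val_err = [np.mean((Xval @ inner(l) - yval) ** 2) for l in lams]
best_lam = lams[int(np.argmin(val_err))]
```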

1 code implementation • 3 Mar 2021 • Omar Chehab, Alexandre Defossez, Jean-Christophe Loiseau, Alexandre Gramfort, Jean-Remi King

Understanding how the brain responds to sensory inputs is challenging: brain recordings are partial, noisy, and high dimensional; they vary across sessions and subjects and they capture highly nonlinear dynamics.

no code implementations • 2 Mar 2021 • Charlotte Caucheteux, Alexandre Gramfort, Jean-Remi King

The activations of language transformers like GPT-2 have been shown to linearly map onto brain activity during speech comprehension.

no code implementations • 22 Feb 2021 • Hugo Richard, Pierre Ablin, Aapo Hyvärinen, Alexandre Gramfort, Bertrand Thirion

By contrast, we propose Adaptive multiView ICA (AVICA), a noisy ICA model where each view is a linear mixture of shared independent sources with additive noise on the sources.

1 code implementation • NeurIPS 2021 • Pedro L. C. Rodrigues, Thomas Moreau, Gilles Louppe, Alexandre Gramfort

Inferring the parameters of a stochastic model based on experimental observations is central to the scientific method.

no code implementations • 4 Dec 2020 • Pedro L. C. Rodrigues, Alexandre Gramfort

There has been an increasing interest from the scientific community in using likelihood-free inference (LFI) to determine which parameters of a given simulator model could best describe a set of experimental data.

no code implementations • NeurIPS 2020 • Jerome-Alexis Chevalier, Joseph Salmon, Alexandre Gramfort, Bertrand Thirion

To deal with this, we adapt the desparsified Lasso estimator -- an estimator tailored for high-dimensional linear models that asymptotically follows a Gaussian distribution under sparsity and moderate feature-correlation assumptions -- to temporal data corrupted with autocorrelated noise.

no code implementations • 22 Oct 2020 • Quentin Klopfenstein, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon, Samuel Vaiter

For composite nonsmooth optimization problems, the Forward-Backward algorithm achieves model identification (e.g., support identification for the Lasso) after a finite number of iterations, provided the objective function is regular enough.

1 code implementation • 29 Sep 2020 • Jérôme-Alexis Chevalier, Alexandre Gramfort, Joseph Salmon, Bertrand Thirion

To deal with this, we adapt the desparsified Lasso estimator -- an estimator tailored for high-dimensional linear models that asymptotically follows a Gaussian distribution under sparsity and moderate feature-correlation assumptions -- to temporal data corrupted with autocorrelated noise.

no code implementations • 21 Aug 2020 • Pierre Ablin, Jean-François Cardoso, Alexandre Gramfort

Signals are modelled as a linear mixing of independent sources corrupted by additive noise, where sources and the noise are stationary Gaussian time series.

2 code implementations • 31 Jul 2020 • Hubert Banville, Omar Chehab, Aapo Hyvärinen, Denis-Alexander Engemann, Alexandre Gramfort

Our results suggest that SSL may pave the way to a wider use of deep learning models on EEG data.

1 code implementation • NeurIPS 2020 • Hugo Richard, Luigi Gresele, Aapo Hyvärinen, Bertrand Thirion, Alexandre Gramfort, Pierre Ablin

Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.

3 code implementations • ICML 2020 • Hicham Janati, Marco Cuturi, Alexandre Gramfort

However, entropy brings some inherent smoothing bias, resulting for example in blurred barycenters.

no code implementations • 25 May 2020 • Ronan Perry, Gavin Mischler, Richard Guo, Theodore Lee, Alexander Chang, Arman Koul, Cameron Franz, Hugo Richard, Iain Carmichael, Pierre Ablin, Alexandre Gramfort, Joshua T. Vogelstein

As data are increasingly generated from multiple disparate sources, multiview data sets, where each sample has features in distinct views, have ballooned in recent years.

1 code implementation • ICML 2020 • Quentin Bertrand, Quentin Klopfenstein, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon

Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.

no code implementations • 15 Jan 2020 • Mathurin Massias, Quentin Bertrand, Alexandre Gramfort, Joseph Salmon

In high dimensional sparse regression, pivotal estimators are estimators for which the optimal regularization parameter is independent of the noise level.

1 code implementation • 13 Nov 2019 • Hubert Banville, Isabela Albuquerque, Aapo Hyvärinen, Graeme Moffat, Denis-Alexander Engemann, Alexandre Gramfort

The supervised learning paradigm is limited by the cost - and sometimes the impracticality - of data collection and labeling in multiple domains.

2 code implementations • 9 Oct 2019 • Hicham Janati, Marco Cuturi, Alexandre Gramfort

In this paper, we propose Spatio-Temporal Alignments (STA), a new differentiable formulation of DTW, in which spatial differences between time samples are accounted for using regularized optimal transport (OT).
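
The DTW building block can be sketched with the classic dynamic program, using a squared-difference ground cost (STA replaces this pointwise cost with a regularized OT distance between spatial patterns; plain DTW is shown here for illustration):

```python
import numpy as np

def dtw(a, b):
    """Classic dynamic-programming DTW with squared-difference cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            # Best of the three allowed moves: match, insert, delete.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 50)
x = np.sin(t)
y = np.sin(t - 0.5)   # same waveform, time-shifted
```

Because the warping path can absorb the time shift, `dtw(x, y)` is no larger than the rigid (sample-by-sample) squared distance.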

no code implementations • 3 Oct 2019 • Hicham Janati, Thomas Bazeille, Bertrand Thirion, Marco Cuturi, Alexandre Gramfort

Magnetoencephalography and electroencephalography (M/EEG) are non-invasive modalities that measure the weak electromagnetic fields generated by neural activity.

1 code implementation • 12 Jul 2019 • Mathurin Massias, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon

Generalized Linear Models (GLM) form a wide class of regression and classification models, where prediction is a function of a linear combination of the input variables.

1 code implementation • NeurIPS 2019 • David Sabbagh, Pierre Ablin, Gael Varoquaux, Alexandre Gramfort, Denis A. Engemann

We show that Wasserstein and geometric distances allow perfect out-of-sample prediction on the generative models.

1 code implementation • NeurIPS 2019 • Pierre Ablin, Thomas Moreau, Mathurin Massias, Alexandre Gramfort

We demonstrate that for a large class of unfolded algorithms, if the algorithm converges to the solution of the Lasso, its last layers correspond to ISTA with learned step sizes.
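
The ISTA iterations referenced here are short enough to sketch in numpy (a generic Lasso solver with a fixed step size, not the learned/unrolled network from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Lasso problem: min_w 0.5 * ||y - X w||^2 + lam * ||w||_1
n, p = 50, 20
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.normal(size=n)

lam = 5.0
L = np.linalg.norm(X, 2) ** 2   # Lipschitz constant of the smooth part

def soft_threshold(v, t):
    """Proximal operator of the l1 norm; produces exact zeros."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA: gradient step on the quadratic, then soft-thresholding.
w = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ w - y)
    w = soft_threshold(w - grad / L, lam / L)
```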

no code implementations • 13 Feb 2019 • Hicham Janati, Thomas Bazeille, Bertrand Thirion, Marco Cuturi, Alexandre Gramfort

Inferring the location of the current sources that generated these magnetic fields is an ill-posed inverse problem known as source imaging.

1 code implementation • NeurIPS 2019 • Quentin Bertrand, Mathurin Massias, Alexandre Gramfort, Joseph Salmon

Sparsity promoting norms are frequently used in high dimensional regression.

1 code implementation • 26 Jan 2019 • Thomas Moreau, Alexandre Gramfort

This algorithm can be used to distribute the computation on a number of workers which scales linearly with the encoded signal's size.

3 code implementations • 16 Jan 2019 • Yannick Roy, Hubert Banville, Isabela Albuquerque, Alexandre Gramfort, Tiago H. Falk, Jocelyn Faubert

To help the field progress, we provide a list of recommendations for future studies and we make our summary table of DL and EEG papers available and invite the community to contribute.

1 code implementation • 7 Dec 2018 • Stanislas Chambon, Valentin Thorey, Pierrick J. Arnal, Emmanuel Mignot, Alexandre Gramfort

The proposed approach, applied here on sleep related micro-architecture events, is inspired by object detectors developed for computer vision such as YOLO and SSD.

Ranked #1 on Sleep Arousal Detection on MESA

1 code implementation • 28 Nov 2018 • Pierre Ablin, Jean-François Cardoso, Alexandre Gramfort

The approximate joint diagonalization of a set of matrices consists in finding a basis in which these matrices are as diagonal as possible.
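
A sketch of the exactly diagonalizable case, where the eigenbasis of a generic linear combination already solves the problem; real AJD algorithms, such as the paper's, handle sets that are only approximately jointly diagonalizable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric matrices sharing one orthogonal eigenbasis V: they commute,
# so exact joint diagonalization is possible.
V, _ = np.linalg.qr(rng.normal(size=(5, 5)))
mats = [V @ np.diag(rng.uniform(1, 10, 5)) @ V.T for _ in range(4)]

# The eigenvectors of a generic linear combination of the set
# diagonalize every matrix in the set simultaneously.
combo = sum(rng.normal() * M for M in mats)
_, B = np.linalg.eigh(combo)

# Largest off-diagonal entry of B^T M B, for each matrix M.
off_diag = [np.abs(B.T @ M @ B - np.diag(np.diag(B.T @ M @ B))).max()
            for M in mats]
```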

1 code implementation • 6 Nov 2018 • Pierre Ablin, Dylan Fagot, Herwig Wendt, Alexandre Gramfort, Cédric Févotte

Nonnegative matrix factorization (NMF) is a popular method for audio spectral unmixing.
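
The NMF model can be sketched with the classic Lee-Seung multiplicative updates on a toy nonnegative matrix (a generic Frobenius-norm NMF, not the algorithm proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "spectrogram": nonnegative matrix built from 3 spectral templates.
W_true = rng.uniform(0, 1, size=(40, 3))    # frequency templates
H_true = rng.uniform(0, 1, size=(3, 100))   # time activations
V = W_true @ H_true

# Multiplicative updates for min ||V - W H||_F^2 with W, H >= 0;
# the updates preserve nonnegativity by construction.
k = 3
W = rng.uniform(0.1, 1, size=(40, k))
H = rng.uniform(0.1, 1, size=(k, 100))
eps = 1e-12
errs = []
for _ in range(300):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
    errs.append(np.linalg.norm(V - W @ H))
```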

1 code implementation • 11 Jul 2018 • Stanislas Chambon, Valentin Thorey, Pierrick J. Arnal, Emmanuel Mignot, Alexandre Gramfort

Annotations of such events require a trained sleep expert, a time-consuming and tedious process with large inter-scorer variability.

no code implementations • 25 Jun 2018 • Pierre Ablin, Jean-François Cardoso, Alexandre Gramfort

We study optimization methods for solving the maximum likelihood formulation of independent component analysis (ICA).

1 code implementation • 25 May 2018 • Pierre Ablin, Alexandre Gramfort, Jean-François Cardoso, Francis Bach

We derive an online algorithm for the streaming setting, and an incremental algorithm for the finite sum setting, with the following benefits.

1 code implementation • NeurIPS 2018 • Tom Dupré La Tour, Thomas Moreau, Mainak Jas, Alexandre Gramfort

Frequency-specific patterns of neural activity are traditionally interpreted as sustained rhythmic oscillations, and related to cognitive mechanisms such as attention, high level visual processing or motor control.

1 code implementation • 20 May 2018 • Hicham Janati, Marco Cuturi, Alexandre Gramfort

We argue in this paper that these techniques fail to leverage the spatial information associated to regressors.

1 code implementation • ICML 2018 • Mathurin Massias, Alexandre Gramfort, Joseph Salmon

Here, we propose an extrapolation technique starting from a sequence of iterates in the dual that leads to the construction of improved dual points.

1 code implementation • 29 Nov 2017 • Pierre Ablin, Jean-François Cardoso, Alexandre Gramfort

Independent Component Analysis (ICA) is a technique for unsupervised exploration of multi-channel data widely used in observational sciences.

1 code implementation • 5 Jul 2017 • Stanislas Chambon, Mathieu Galtier, Pierrick Arnal, Gilles Wainrib, Alexandre Gramfort

We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting hand-crafted features, that exploits all multivariate and multimodal Polysomnography (PSG) signals (EEG, EMG and EOG), and that can exploit the temporal context of each 30s window of data.

2 code implementations • 25 Jun 2017 • Pierre Ablin, Jean-François Cardoso, Alexandre Gramfort

Independent Component Analysis (ICA) is a technique for unsupervised exploration of multi-channel data that is widely used in observational sciences.

1 code implementation • 27 May 2017 • Mathurin Massias, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

Results on multimodal neuroimaging problems with M/EEG data are also reported.

no code implementations • NeurIPS 2017 • Mainak Jas, Tom Dupré La Tour, Umut Şimşekli, Alexandre Gramfort

Neural time-series data contain a wide variety of prototypical signal waveforms (atoms) that are of significant importance in clinical and cognitive research.

no code implementations • 19 May 2017 • Laetitia Le, Camille Marini, Alexandre Gramfort, David Nguyen, Mehdi Cherti, Sana Tfaili, Ali Tfayli, Arlette Baillet-Guffroy, Patrice Prognon, Pierre Chaminade, Eric Caudron, Balázs Kégl

Monoclonal antibodies constitute one of the most important strategies to treat patients suffering from cancers such as hematological malignancies and solid tumors.

1 code implementation • 21 Mar 2017 • Mathurin Massias, Alexandre Gramfort, Joseph Salmon

For the Lasso estimator, a working set (WS) is a set of features, while for a Group Lasso it refers to a set of groups.

1 code implementation • NeurIPS 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

For statistical learning in high dimension, sparse regularizations have proven useful to boost both computational and statistical efficiency.

1 code implementation • 17 Nov 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

In high dimensional regression settings, sparsity enforcing penalties have proved useful to regularize the data-fitting term.

no code implementations • 28 Jul 2016 • Daniel Strohmeier, Yousra Bekhti, Jens Haueisen, Alexandre Gramfort

Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution.

no code implementations • 20 Jul 2016 • Romain Laby, François Roueff, Alexandre Gramfort

We propose a method that performs anomaly detection and localisation within heterogeneous data using a pairwise undirected mixed graphical model.

2 code implementations • 8 Jun 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Vincent Leclère, Joseph Salmon

In high dimensional settings, sparse structures are crucial for efficiency, in terms of memory, computation, and performance.

1 code implementation • 19 Feb 2016 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

We adapt to the Sparse-Group Lasso recent safe screening rules that discard irrelevant features/groups early in the solver.

no code implementations • 30 Aug 2015 • Albert Thomas, Vincent Feuillard, Alexandre Gramfort

Our approach makes it possible to tune the hyperparameters automatically and obtain nested set estimates.

no code implementations • NeurIPS 2015 • Eugene Ndiaye, Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

The GAP Safe rule can cope with any iterative solver and we illustrate its performance on coordinate descent for multi-task Lasso, binary and multinomial logistic regression, demonstrating significant speed ups on all tested datasets with respect to previous safe rules.

no code implementations • 13 May 2015 • Olivier Fercoq, Alexandre Gramfort, Joseph Salmon

In this paper, we propose new versions of the so-called "safe rules" for the Lasso.

no code implementations • 30 Mar 2015 • Alexandre Gramfort, Gabriel Peyré, Marco Cuturi

Data are large, the geometry of the brain is complex, and between-subject variability leads to spatially or temporally non-overlapping effects of interest.

1 code implementation • 12 Dec 2014 • Alexandre Abraham, Fabian Pedregosa, Michael Eickenberg, Philippe Gervais, Andreas Muller, Jean Kossaifi, Alexandre Gramfort, Bertrand Thirion, Gäel Varoquaux

Statistical machine learning methods are increasingly used for neuroimaging data analysis.

no code implementations • 11 Aug 2014 • Fabian Pedregosa, Francis Bach, Alexandre Gramfort

We will see that, for a family of surrogate loss functions that subsumes support vector ordinal regression and ORBoosting, consistency can be fully characterized by the derivative of a real-valued function at zero, as happens for convex margin-based surrogates in binary classification.

no code implementations • 27 Feb 2014 • Fabian Pedregosa, Michael Eickenberg, Philippe Ciuciu, Bertrand Thirion, Alexandre Gramfort

We develop a method for the joint estimation of activation and HRF using a rank constraint causing the estimated HRF to be equal across events/conditions, yet permitting it to be different across voxels.

4 code implementations • 1 Sep 2013 • Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas Mueller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort, Jaques Grobler, Robert Layton, Jake Vanderplas, Arnaud Joly, Brian Holt, Gaël Varoquaux

Scikit-learn is an increasingly popular machine learning library.

no code implementations • 10 Aug 2013 • Michael Eickenberg, Fabian Pedregosa, Senoussi Mehdi, Alexandre Gramfort, Bertrand Thirion

Second layer scattering descriptors are known to provide good classification performance on natural quasi-stationary processes such as visual textures due to their sensitivity to higher order moments and continuity with respect to small deformations.

no code implementations • 13 May 2013 • Fabian Pedregosa, Michael Eickenberg, Bertrand Thirion, Alexandre Gramfort

Extracting activation patterns from functional Magnetic Resonance Images (fMRI) datasets remains challenging in rapid-event designs due to the inherent delay of blood oxygen level-dependent (BOLD) signal.

no code implementations • 16 Jan 2013 • Sebastian Hitziger, Maureen Clerc, Alexandre Gramfort, Sandrine Saillet, Christian Bénar, Théodore Papadopoulo

This is the case, for instance, in multi-trial magneto- or electroencephalography (M/EEG).

3 code implementations • 2 Jan 2012 • Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Andreas Müller, Joel Nothman, Gilles Louppe, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, Édouard Duchesnay

Scikit-learn is a Python module integrating a wide range of state-of-the-art machine learning algorithms for medium-scale supervised and unsupervised problems.
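
The estimator API this paper describes can be shown in a few lines on synthetic data (illustrative only): fit/predict objects composed into pipelines and evaluated with generic model-selection utilities.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # linearly separable labels

# A pipeline is itself an estimator, so it plugs into any
# model-selection utility unchanged.
model = make_pipeline(StandardScaler(), LogisticRegression())
acc = cross_val_score(model, X, y, cv=5).mean()
```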

no code implementations • NeurIPS 2010 • Gael Varoquaux, Alexandre Gramfort, Jean-Baptiste Poline, Bertrand Thirion

We describe subject-level brain functional connectivity structure as a multivariate Gaussian process and introduce a new strategy to estimate it from group data, by imposing a common structure on the graphical model in the population.
