no code implementations • 3 Sep 2024 • Gabriela Gómez Jiménez, Demian Wassermann
Through rigorous validation, including five out-of-sample analyses, our results demonstrate that the multivariate autoencoder model outperforms traditional methods in capturing and generalizing correlations between brain and behavior beyond the training sample.
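For context, here is a minimal sketch of a multivariate autoencoder relating brain features to behavioral scores; the architecture, layer sizes, and training loop are illustrative assumptions, not the model described in the paper.

```python
# Minimal sketch (assumed architecture, not the paper's): an autoencoder whose
# latent code is shared between a brain-feature reconstruction head and a
# behavior-prediction head, so the latent space captures brain-behavior covariation.
import torch
import torch.nn as nn

n_brain, n_behavior, n_latent = 200, 10, 8  # hypothetical dimensions

encoder = nn.Sequential(nn.Linear(n_brain, 64), nn.ReLU(), nn.Linear(64, n_latent))
decoder_brain = nn.Sequential(nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_brain))
decoder_behavior = nn.Linear(n_latent, n_behavior)

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_brain.parameters())
    + list(decoder_behavior.parameters()),
    lr=1e-3,
)
mse = nn.MSELoss()

X = torch.randn(128, n_brain)      # stand-in brain features
Y = torch.randn(128, n_behavior)   # stand-in behavioral scores

for _ in range(200):
    z = encoder(X)
    loss = mse(decoder_brain(z), X) + mse(decoder_behavior(z), Y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```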
no code implementations • 4 Jun 2024 • Luca Ambrogioni, Louis Rouillard, Demian Wassermann
The estimation of directed couplings between the nodes of a network from indirect measurements is a central methodological challenge in scientific fields such as neuroscience, systems biology and economics.
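As a concrete toy illustration of the coupling-estimation problem (not the inference method developed in the paper), the sketch below simulates a linear network and recovers its directed coupling matrix by least squares.

```python
# Toy illustration (not the paper's method): recover directed couplings A of a
# linear network x_{t+1} = A @ x_t + noise from observed node time series.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_steps = 5, 2000
A_true = 0.3 * rng.standard_normal((n_nodes, n_nodes))

x = np.zeros((n_steps, n_nodes))
for t in range(n_steps - 1):
    x[t + 1] = A_true @ x[t] + 0.1 * rng.standard_normal(n_nodes)

# Least-squares estimate: solve x[t+1] ~ A @ x[t] for A.
B, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)  # solves x[:-1] @ B ~ x[1:]
A_hat = B.T
print(np.abs(A_hat - A_true).max())  # small estimation error on this easy toy case
```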
no code implementations • 30 Aug 2023 • Louis Rouillard, Alexandre Le Bris, Thomas Moreau, Demian Wassermann
Given observed data and a probabilistic generative model, Bayesian inference searches for the distribution of the model's parameters that could have yielded the data.
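A minimal, library-free sketch of this setting (not the method proposed in the paper): the posterior over a single Gaussian mean, computed on a grid of candidate parameter values.

```python
# Generic illustration of Bayesian inference: posterior over the mean mu of a
# Gaussian generative model, evaluated on a grid of candidate values.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=1.0, size=20)   # observed data
mu_grid = np.linspace(-3, 5, 801)                # candidate parameter values

log_prior = -0.5 * (mu_grid / 2.0) ** 2          # mu ~ N(0, 2^2), up to a constant
log_lik = -0.5 * ((data[:, None] - mu_grid) ** 2).sum(axis=0)  # noise std = 1
log_post = log_prior + log_lik
posterior = np.exp(log_post - log_post.max())
posterior /= posterior.sum() * (mu_grid[1] - mu_grid[0])  # normalize as a density

print(mu_grid[posterior.argmax()])  # parameter value most compatible with the data
```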
no code implementations • 10 Jun 2022 • Louis Rouillard, Thomas Moreau, Demian Wassermann
Given some observed data and a probabilistic generative model, Bayesian inference aims at obtaining the distribution of a model's latent parameters that could have yielded the data.
no code implementations • 23 Feb 2022 • Gaston Zanitti, Yamil Soto, Valentin Iovene, Maria Vanina Martinez, Ricardo Rodriguez, Gerardo Simari, Demian Wassermann
Researchers in neuroscience have a growing number of datasets available to study the brain, made possible by recent technological advances.
no code implementations • 15 Nov 2021 • Maëliss Jallais, Pedro Luiz Coelho Rodrigues, Alexandre Gramfort, Demian Wassermann
Relating the dMRI signal to cytoarchitectural characteristics calls for a mathematical model that describes brain tissue via a handful of physiologically relevant parameters, together with an algorithm for inverting that model.
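A toy version of this model-inversion problem, assuming a hypothetical two-compartment signal model rather than the tissue model or inference algorithm used in the paper:

```python
# Toy example of model inversion (not the paper's tissue model or inference
# scheme): fit a two-compartment signal decay to noisy dMRI-like measurements.
import numpy as np
from scipy.optimize import curve_fit

def signal(b, f, d_in, d_ex):
    """Forward model: signal as a function of b-value and tissue parameters."""
    return f * np.exp(-b * d_in) + (1 - f) * np.exp(-b * d_ex)

rng = np.random.default_rng(0)
b = np.linspace(0, 3, 30)                      # hypothetical b-values
y = signal(b, 0.6, 0.8, 2.0) + 0.01 * rng.standard_normal(b.size)

params, _ = curve_fit(signal, b, y, p0=[0.5, 1.0, 1.5], bounds=([0, 0, 0], [1, 3, 3]))
print(params)  # estimated (f, d_in, d_ex)
```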
no code implementations • 4 Oct 2021 • Guillermo Gallardo, Gaston Zanitti, Mat Higger, Sylvain Bouix, Demian Wassermann
Inferring which pathways are affected by a brain lesion is key for both pre- and post-treatment planning.
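A minimal sketch of the underlying question, assuming binary lesion and tract masks on a shared voxel grid (the paper's approach is more sophisticated than this simple overlap check):

```python
# Simplified illustration (binary masks, not the paper's probabilistic model):
# report which white-matter tracts overlap a lesion mask on a shared voxel grid.
import numpy as np

rng = np.random.default_rng(0)
shape = (20, 20, 20)
lesion = np.zeros(shape, dtype=bool)
lesion[8:12, 8:12, 8:12] = True                      # stand-in lesion mask

tracts = {name: rng.random(shape) > 0.95             # stand-in tract masks
          for name in ["CST_left", "AF_left", "IFOF_right"]}

for name, tract in tracts.items():
    overlap = np.logical_and(lesion, tract).sum()
    if overlap:
        print(f"{name}: {overlap} overlapping voxels -> potentially affected")
```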
no code implementations • ICLR 2022 • Louis Rouillard, Demian Wassermann
Frequently, population studies feature pyramidally organized data represented using Hierarchical Bayesian Models (HBMs) enriched with plates.
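A minimal plate-enriched HBM written in NumPyro, as a generic two-level Gaussian example; the model, priors, and inference call are illustrative assumptions, not the population models or the method studied in the paper.

```python
# Minimal plate-enriched Hierarchical Bayesian Model (generic sketch):
# a population mean, group-level means, and subject-level observations.
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def model(y):                                    # y: (n_groups, n_subjects)
    n_groups, n_subjects = y.shape
    mu = numpyro.sample("mu", dist.Normal(0.0, 5.0))           # population level
    with numpyro.plate("group", n_groups, dim=-2):
        theta = numpyro.sample("theta", dist.Normal(mu, 1.0))  # group level
        with numpyro.plate("subject", n_subjects, dim=-1):
            numpyro.sample("obs", dist.Normal(theta, 0.5), obs=y)

y = dist.Normal(1.0, 0.5).sample(random.PRNGKey(0), (4, 30))   # simulated data
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=500)
mcmc.run(random.PRNGKey(1), y=y)
mcmc.print_summary()
```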
no code implementations • 2 Dec 2020 • Valentin Iovene, Gaston Zanitti, Demian Wassermann
We demonstrate results for two-term conjunctive queries, both on simulated meta-analysis databases and on the widely-used Neurosynth database.
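For illustration, a schematic two-term conjunctive query over a toy meta-analysis table; the relations and term names are hypothetical, and this is plain Python rather than NeuroLang syntax or the Neurosynth schema.

```python
# Schematic two-term conjunctive query on a toy meta-analysis database.
# Query: studies s such that Mentions(s, "memory") AND Reports(s, "hippocampus").
mentions = {("study1", "memory"), ("study2", "memory"), ("study3", "language")}
reports = {("study1", "hippocampus"), ("study2", "amygdala"), ("study3", "hippocampus")}

answer = {
    s
    for (s, term) in mentions
    if term == "memory" and (s, "hippocampus") in reports
}
print(answer)  # {'study1'}
```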
no code implementations • NeurIPS 2020 • Alireza Mehrtash, Purang Abolmaesumi, Polina Golland, Tina Kapur, Demian Wassermann, William M. Wells III
In most experiments, PEP provides a small improvement in performance, and, in some cases, a substantial improvement in empirical calibration.
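A rough sketch of one way to realize parameter ensembling by perturbation: jitter the weights of a trained network with Gaussian noise and average the resulting predictions. The network, noise scale, and data below are placeholders; in practice the perturbation scale would be tuned on held-out data.

```python
# Sketch of parameter ensembling by perturbation (illustrative placeholders):
# average predictions from copies of a trained network with jittered weights.
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 3))  # stand-in "trained" net
x = torch.randn(8, 20)
sigma, n_members = 0.01, 10

probs = []
with torch.no_grad():
    for _ in range(n_members):
        member = copy.deepcopy(model)
        for p in member.parameters():
            p.add_(sigma * torch.randn_like(p))   # perturb parameters
        probs.append(torch.softmax(member(x), dim=-1))
ensemble_probs = torch.stack(probs).mean(dim=0)   # averaged, often better-calibrated predictions
```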
no code implementations • 5 Mar 2020 • Kamalaker Dadi, Gaël Varoquaux, Antonia Machlouzarides-Shalit, Krzysztof J. Gorgolewski, Demian Wassermann, Bertrand Thirion, Arthur Mensch
We demonstrate the benefits of extracting reduced signals on our fine-grain atlases for many classic functional data analysis pipelines: stimuli decoding from 12,334 brain responses, standard GLM analysis of fMRI across sessions and individuals, extraction of resting-state functional-connectome biomarkers for 2,500 individuals, data compression and meta-analysis over more than 15,000 statistical maps.
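A sketch of extracting reduced signals on a fine-grain probabilistic atlas with nilearn; the fetcher and masker names assume a recent nilearn release (they may vary across versions), and the input fMRI file is a placeholder.

```python
# Sketch: reduce a 4D fMRI image to per-mode time series using a probabilistic
# atlas (assumes nilearn's DiFuMo fetcher and NiftiMapsMasker are available;
# "my_fmri.nii.gz" is a placeholder file).
from nilearn import datasets
from nilearn.maskers import NiftiMapsMasker

difumo = datasets.fetch_atlas_difumo(dimension=64, resolution_mm=2)
masker = NiftiMapsMasker(maps_img=difumo.maps, standardize=True)
reduced_signals = masker.fit_transform("my_fmri.nii.gz")  # shape: (n_timepoints, 64)
print(reduced_signals.shape)
```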
no code implementations • 4 Jun 2018 • Jérôme Dockès, Demian Wassermann, Russell Poldrack, Fabian Suchanek, Bertrand Thirion, Gaël Varoquaux
In this paper, we propose to mine brain medical publications to learn the spatial distribution associated with anatomical terms.
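A toy sketch of tying a term to a spatial distribution: pool peak coordinates from publications mentioning the term and fit a kernel density over MNI space. The coordinates below are simulated stand-ins, and this is not the learning model used in the paper.

```python
# Toy sketch: estimate the spatial distribution associated with an anatomical
# term from pooled peak coordinates (simulated data, illustrative only).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical mined data: peaks (x, y, z in mm) from papers mentioning "hippocampus".
peaks = rng.normal(loc=[-28, -20, -14], scale=6.0, size=(300, 3))

density = gaussian_kde(peaks.T)            # spatial distribution for the term
query = np.array([[-26.0], [-18.0], [-12.0]])
print(density(query))                      # density at a queried MNI coordinate
```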
no code implementations • 12 Jan 2017 • Demian Wassermann, Matt Toews, Marc Niethammer, William Wells III
The Bayesian posterior distribution over the deformations aligning a moving and a fixed image is approximated via a variational formulation.
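A heavily simplified sketch of the variational idea, assuming a 1-D signal and a single translation parameter with a Gaussian variational posterior; the paper's deformation model and formulation are far richer than this.

```python
# Simplified variational registration sketch: maximize the ELBO for
# q(t) = N(mu, sigma^2) over the shift t aligning a moving and a fixed signal.
import torch

grid = torch.linspace(-10.0, 10.0, 200)
fixed = torch.exp(-0.5 * grid**2)                    # fixed "image"
moving = torch.exp(-0.5 * (grid - 3.0) ** 2)         # moving "image", shifted by 3

def warp(signal, shift):
    """Resample `signal` at grid + shift with differentiable linear interpolation."""
    pos = (grid + shift - grid[0]) / (grid[1] - grid[0])
    i0 = pos.floor().long().clamp(0, len(grid) - 2)
    frac = (pos - i0).clamp(0.0, 1.0)
    return signal[i0] * (1 - frac) + signal[i0 + 1] * frac

mu = torch.zeros(1, requires_grad=True)              # variational mean of the shift
log_sigma = torch.zeros(1, requires_grad=True)       # variational log-std of the shift
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for _ in range(800):
    t = mu + log_sigma.exp() * torch.randn(1)        # reparameterized shift sample
    log_lik = -0.5 * ((warp(moving, t) - fixed) ** 2).sum() / 0.01  # Gaussian likelihood
    log_prior = -0.5 * (t / 5.0) ** 2                # shift prior: N(0, 5^2)
    elbo = log_lik + log_prior.sum() + log_sigma.sum()  # entropy of q up to a constant
    opt.zero_grad()
    (-elbo).backward()
    opt.step()

print(mu.item(), log_sigma.exp().item())             # posterior mean near 3, with uncertainty
```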
no code implementations • CVPR 2014 • Demian Wassermann, James Ross, George Washko, William M. Wells III, Raul San Jose-Estepar
Our framework relies on a dense tensor field representation which we implement sparsely as a kernel mixture of tensor fields.
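A minimal sketch of representing a dense tensor field sparsely as a kernel mixture: the field at any location is a kernel-weighted combination of a few control tensors. The kernel choice, bandwidth, and tensors below are arbitrary stand-ins, not the paper's construction.

```python
# Sketch of a sparse kernel-mixture representation of a dense tensor field:
# evaluate the field anywhere from a handful of control points and tensors.
import numpy as np

rng = np.random.default_rng(0)
n_control = 10
centers = rng.uniform(0, 1, size=(n_control, 3))        # control-point locations
A = rng.standard_normal((n_control, 3, 3))
tensors = A @ A.transpose(0, 2, 1)                       # symmetric PSD control tensors

def tensor_field(x, bandwidth=0.2):
    """Evaluate the dense tensor field at point x from the sparse mixture."""
    w = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2 * bandwidth**2))  # Gaussian kernel
    return np.einsum("i,ijk->jk", w, tensors)            # 3x3 tensor at x

print(tensor_field(np.array([0.5, 0.5, 0.5])))
```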