Search Results for author: Thomas Naselaris

Found 8 papers, 4 papers with code

How does the primate brain combine generative and discriminative computations in vision?

no code implementations · 11 Jan 2024 · Benjamin Peters, James J. DiCarlo, Todd Gureckis, Ralf Haefner, Leyla Isik, Joshua Tenenbaum, Talia Konkle, Thomas Naselaris, Kimberly Stachenfeld, Zenna Tavares, Doris Tsao, Ilker Yildirim, Nikolaus Kriegeskorte

The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes giving rise to it.

Brain-optimized inference improves reconstructions of fMRI brain activity

1 code implementation · 12 Dec 2023 · Reese Kneeland, Jordyn Ojeda, Ghislain St-Yves, Thomas Naselaris

At each iteration, we sample a small library of images from an image distribution (a diffusion model) conditioned on a seed reconstruction from the previous iteration.
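The iterative loop described above can be sketched as follows. This is a minimal mock-up, not the authors' implementation: the encoding model and the diffusion sampler are stand-ins (here just NumPy functions), and only the search structure — sample a small library conditioned on the seed, score each candidate against the measured activity, keep the best as the next seed — follows the snippet.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: the paper uses an fMRI-trained encoding model
# and a diffusion model conditioned on the seed reconstruction; both are
# mocked here so the loop structure is runnable.
def encoding_model(image):
    """Predict a (mock) brain activity pattern from an image."""
    return image.mean(axis=-1).ravel()[:100]

def sample_library(seed, n_samples, noise=0.3):
    """Draw a small library of candidate images near the seed."""
    return [np.clip(seed + noise * rng.standard_normal(seed.shape), 0, 1)
            for _ in range(n_samples)]

def score(image, target_activity):
    """Agreement between predicted and measured activity."""
    return np.corrcoef(encoding_model(image), target_activity)[0, 1]

target = rng.random(100)          # measured fMRI pattern (mock)
seed = rng.random((32, 32, 3))    # seed reconstruction

scores = []
for _ in range(10):
    # Keep the current seed in the library so the best score never decreases.
    library = [seed] + sample_library(seed, n_samples=8)
    seed = max(library, key=lambda im: score(im, target))
    scores.append(score(seed, target))
```

Because the previous seed stays in each library, the best score is non-decreasing across iterations.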

Brain Decoding

Second Sight: Using brain-optimized encoding models to align image distributions with human brain activity

1 code implementation · 1 Jun 2023 · Reese Kneeland, Jordyn Ojeda, Ghislain St-Yves, Thomas Naselaris

This emphasis obscures two facts: there is always a family of images equally compatible with any evoked brain activity pattern, and many image generators are inherently stochastic, offering no method of their own for selecting the single best reconstruction from among the samples they generate.

Brain Decoding · Image Reconstruction

Reconstructing seen images from human brain activity via guided stochastic search

no code implementations · 30 Apr 2023 · Reese Kneeland, Jordyn Ojeda, Ghislain St-Yves, Thomas Naselaris

Past reconstruction algorithms employed brute-force search through a massive library to select candidate images that, when passed through an encoding model, accurately predict brain activity.
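The brute-force baseline described above reduces to a single argmax over a fixed library. A minimal sketch, with a mock encoding model and a small random library standing in for the "massive" one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical encoding model: maps an image to a predicted activity vector.
def encoding_model(image):
    return image.ravel()[:50]

# A small random image set stands in for the massive candidate library.
library = [rng.random((16, 16)) for _ in range(1000)]

# Mock "measured" activity: the encoding of one library image plus noise.
measured = encoding_model(library[42]) + 0.01 * rng.standard_normal(50)

# Brute-force search: select the candidate whose predicted activity
# best matches the measured pattern.
best = max(library,
           key=lambda im: np.corrcoef(encoding_model(im), measured)[0, 1])
```

With low measurement noise, the search recovers the image that generated the activity pattern.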

Semantic scene descriptions as an objective of human vision

no code implementations · 23 Sep 2022 · Adrien Doerig, Tim C Kietzmann, Emily Allen, Yihan Wu, Thomas Naselaris, Kendrick Kay, Ian Charest

Interpreting the meaning of a visual scene requires not only identification of its constituent objects, but also a rich semantic characterization of object interrelations.

NeuroGen: activation optimized image synthesis for discovery neuroscience

2 code implementations · 15 May 2021 · Zijin Gu, Keith W. Jamison, Meenakshi Khosla, Emily J. Allen, Yihan Wu, Thomas Naselaris, Kendrick Kay, Mert R. Sabuncu, Amy Kuceyeski

NeuroGen combines an fMRI-trained neural encoding model of human vision with a deep generative network to synthesize images predicted to achieve a target pattern of macro-scale brain activation.
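The pipeline described in the snippet can be caricatured as searching a generator's latent space for codes whose synthesized images are predicted to drive a target activation. This is an illustrative sketch only: the generator and encoding model are mock linear/tanh maps, and the search is random rather than the gradient-based optimization NeuroGen performs through the networks.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-ins for NeuroGen's two components: a generative
# network mapping a latent code to an image, and an fMRI-trained
# encoding model predicting target-region activation from that image.
W_gen = rng.standard_normal((64, 256))   # mock "generator" weights
w_enc = rng.standard_normal(256)         # mock "encoding model" weights

def generate(z):
    return np.tanh(z @ W_gen)            # latent code -> image features

def predicted_activation(image):
    return float(image @ w_enc)          # image -> predicted activation

# Search latent space for the code predicted to maximally activate
# the target region (random search here, for simplicity).
candidates = rng.standard_normal((500, 64))
best_z = max(candidates, key=lambda z: predicted_activation(generate(z)))
baseline = np.mean([predicted_activation(generate(z)) for z in candidates])
```

By construction, the selected latent code yields a higher predicted activation than an average random sample.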

Image Generation
