no code implementations • 10 May 2025 • Hope Lutwak, Bas Rokers, Eero P. Simoncelli
Consistent with the hypothesis, we found that performance depended on the deviation of the object velocity from the constraint segment, rather than a difference between retinal velocities of the object and its local surround.
1 code implementation • 4 Nov 2024 • Xueyan Niu, Cristina Savin, Eero P. Simoncelli
Moreover, these beneficial properties can be transferred to other training procedures by using the straightening objective as a regularizer, suggesting a broader utility for straightening as a principle for robust unsupervised learning.
no code implementations • 30 Oct 2024 • Pierre-Étienne H. Fiquet, Eero P. Simoncelli
Furthermore, analysis of networks trained on natural image sequences reveals that the representation automatically weights predictive evidence by its reliability, which is a hallmark of statistical inference.
no code implementations • 20 Oct 2024 • Jenelle Feather, David Lipshutz, Sarah E. Harvey, Alex H. Williams, Eero P. Simoncelli
This metric may then be used to optimally differentiate a set of models, by finding a pair of "principal distortions" that maximize the variance of the models under this metric.
no code implementations • 15 Oct 2024 • Zahra Kadkhodaie, Stéphane Mallat, Eero P. Simoncelli
We demonstrate that the algorithm can generate high quality and diverse samples from the conditioning class.
1 code implementation • 28 May 2024 • David Lipshutz, Eero P. Simoncelli
The circuit, which comprises primary neurons recurrently connected to a set of local interneurons, continuously optimizes this objective by dynamically adjusting both the synaptic connections between neurons and the interneuron activation functions.
1 code implementation • 22 May 2024 • Ling-Qi Zhang, Zahra Kadkhodaie, Eero P. Simoncelli, David H. Brainard
To exploit such structure, we introduce a general method for obtaining an optimized set of linear measurements for efficient image reconstruction, where the signal statistics are expressed by the prior implicit in a neural network trained to perform denoising (known as a "diffusion model").
no code implementations • 18 Dec 2023 • Nikhil Parthasarathy, Olivier J. Hénaff, Eero P. Simoncelli
Finally, when the two-stage model is used as a fixed front-end for a deep network trained to perform object recognition, the resultant model (LCL-V2Net) is significantly better than standard end-to-end self-supervised, supervised, and adversarially-trained models in terms of generalization to out-of-distribution tasks and alignment with human behavior.
1 code implementation • 4 Oct 2023 • Zahra Kadkhodaie, Florentin Guth, Eero P. Simoncelli, Stéphane Mallat
Finally, we show that when trained on regular image classes for which the optimal basis is known to be geometry-adaptive and harmonic, the denoising performance of the networks is near-optimal.
1 code implementation • NeurIPS 2023 • Lyndon R. Duong, Eero P. Simoncelli, Dmitri B. Chklovskii, David Lipshutz
Neurons in early sensory areas rapidly adapt to changing sensory statistics, both by normalizing the variance of their individual responses and by reducing correlations between their responses.
no code implementations • 31 May 2023 • Lyndon R. Duong, Colin Bredenberg, David J. Heeger, Eero P. Simoncelli
Using published V1 population adaptation data, we show that propagation of single neuron gain changes in a recurrent network is sufficient to capture the entire set of observed adaptation effects.
1 code implementation • 27 Jan 2023 • Lyndon R. Duong, David Lipshutz, David J. Heeger, Dmitri B. Chklovskii, Eero P. Simoncelli
Statistical whitening transformations play a fundamental role in many computational systems, and may also play an important role in biological sensory systems.
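As background, here is a minimal sketch of offline whitening in the standard (ZCA) form; the paper's contribution is an adaptive, recurrent-circuit mechanism, which this sketch does not attempt to reproduce:

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """Classical ZCA whitening: rotate into the eigenbasis of the covariance,
    rescale each direction to unit variance, rotate back.
    X has shape (n_samples, n_features); eps guards against tiny eigenvalues."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T
    return Xc @ W
```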
no code implementations • NeurIPS 2021 • Sreyas Mohan, Joshua L. Vincent, Ramon Manzorro, Peter A. Crozier, Eero P. Simoncelli, Carlos Fernandez-Granda
Deep convolutional neural networks (CNNs) for image denoising are usually trained on large datasets.
no code implementations • 19 Jan 2021 • Joshua L. Vincent, Ramon Manzorro, Sreyas Mohan, Binh Tang, Dev Y. Sheth, Eero P. Simoncelli, David S. Matteson, Carlos Fernandez-Granda, Peter A. Crozier
This shows that the network exploits global and local information in the noisy measurements, for example, by adapting its filtering approach when it encounters atomic-level defects at the nanoparticle surface.
Denoising, Materials Science, Image and Video Processing
1 code implementation • ICCV 2021 • Dev Yashpal Sheth, Sreyas Mohan, Joshua L. Vincent, Ramon Manzorro, Peter A. Crozier, Mitesh M. Khapra, Eero P. Simoncelli, Carlos Fernandez-Granda
This is advantageous because motion compensation is computationally expensive, and can be unreliable when the input data are noisy.
Ranked #5 on Video Denoising on Set8 sigma40
1 code implementation • 24 Oct 2020 • Sreyas Mohan, Ramon Manzorro, Joshua L. Vincent, Binh Tang, Dev Yashpal Sheth, Eero P. Simoncelli, David S. Matteson, Peter A. Crozier, Carlos Fernandez-Granda
SBD outperforms existing techniques by a wide margin on a simulated benchmark dataset, as well as on real data.
1 code implementation • 27 Jul 2020 • Zahra Kadkhodaie, Eero P. Simoncelli
Here, we develop a robust and general methodology for making use of this implicit prior.
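A minimal sketch of the kind of coarse-to-fine iteration this methodology builds on, assuming access to a least-squares Gaussian denoiser whose residual points along the gradient of the implicit log-prior; the step size, noise-injection schedule, and stopping rule here are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def denoiser_prior_ascent(y0, denoiser, h=0.05, beta=0.5, sigma_stop=0.01, rng=None):
    """Coarse-to-fine ascent using the prior implicit in a denoiser.
    For an (assumed) least-squares Gaussian denoiser, denoiser(y) - y is
    proportional to the gradient of the log density of noisy images, so
    repeated partial steps with injected noise draw y toward the image prior."""
    if rng is None:
        rng = np.random.default_rng(0)
    y = np.array(y0, dtype=float)
    sigma = np.inf
    while sigma > sigma_stop:
        d = denoiser(y) - y                      # direction of increasing prior probability
        sigma = np.sqrt(np.mean(d ** 2))         # effective remaining noise level
        gamma = sigma * np.sqrt((1.0 - beta * h) ** 2 - (1.0 - h) ** 2)
        y = y + h * d + gamma * rng.standard_normal(y.shape)  # partial step plus injected noise
    return y
```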
no code implementations • 30 Jun 2020 • Nikhil Parthasarathy, Eero P. Simoncelli
These responses are processed by a second stage (analogous to cortical area V2) consisting of convolutional filters followed by half-wave rectification and pooling to generate V2 'complex cell' responses.
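A minimal sketch of that second-stage computation in PyTorch; the channel counts, kernel size, and pooling window are illustrative guesses rather than the model's actual parameters:

```python
import torch.nn as nn

class SimpleV2Stage(nn.Module):
    """Second-stage computation as described above: convolutional filtering,
    half-wave rectification, then local spatial pooling of the rectified responses."""

    def __init__(self, in_channels=32, out_channels=64):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=5, padding=2)
        self.rectify = nn.ReLU()                 # half-wave rectification
        self.pool = nn.AvgPool2d(kernel_size=2)  # local spatial pooling

    def forward(self, v1_responses):
        return self.pool(self.rectify(self.conv(v1_responses)))
```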
1 code implementation • 4 May 2020 • Keyan Ding, Kede Ma, Shiqi Wang, Eero P. Simoncelli
The performance of objective image quality assessment (IQA) models has been evaluated primarily by comparing model predictions to human quality judgments.
2 code implementations • 16 Apr 2020 • Keyan Ding, Kede Ma, Shiqi Wang, Eero P. Simoncelli
Objective measures of image quality generally operate by comparing pixels of a "degraded" image to those of the original.
Ranked #31 on Video Quality Assessment on MSU SR-QA Dataset
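For reference, the simplest pixel-comparison measures of the kind the abstract above refers to are mean squared error and PSNR (the paper itself argues for measures that go beyond pixelwise comparison):

```python
import numpy as np

def mse(reference, degraded):
    """Mean squared error between two images with values in [0, 1]."""
    return np.mean((np.asarray(reference, float) - np.asarray(degraded, float)) ** 2)

def psnr(reference, degraded, peak=1.0):
    """Peak signal-to-noise ratio in dB, a purely pixelwise quality measure."""
    return 10.0 * np.log10(peak ** 2 / mse(reference, degraded))
```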
no code implementations • NeurIPS Workshop Deep_Invers 2019 • Zahra Kadkhodaie, Sreyas Mohan, Eero P. Simoncelli, Carlos Fernandez-Granda
Here, however, we show that bias terms used in most CNNs (additive constants, including those used for batch normalization) interfere with the interpretability of these networks, do not help performance, and in fact prevent generalization of performance to noise levels not included in the training data.
1 code implementation • ICLR 2020 • Sreyas Mohan, Zahra Kadkhodaie, Eero P. Simoncelli, Carlos Fernandez-Granda
In contrast, a bias-free architecture -- obtained by removing the constant terms in every layer of the network, including those used for batch normalization -- generalizes robustly across noise levels, while preserving state-of-the-art performance within the training range.
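A minimal sketch of what such a bias-free building block might look like, assuming a standard convolution + batch-norm + ReLU block; layer sizes are illustrative and this is not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class BiasFreeNorm2d(nn.Module):
    """Batch-norm-like rescaling with every additive term removed: each channel
    is divided by its standard deviation and multiplied by a learned gain;
    there is no mean subtraction and no shift parameter."""

    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.gain = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.eps = eps

    def forward(self, x):
        std = x.std(dim=(0, 2, 3), keepdim=True, unbiased=False)
        return self.gain * x / (std + self.eps)

def bias_free_block(channels):
    """Convolution (bias disabled) + bias-free normalization + ReLU."""
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        BiasFreeNorm2d(channels),
        nn.ReLU(inplace=True),
    )
```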
no code implementations • NeurIPS 2017 • Alexander Berardino, Johannes Ballé, Valero Laparra, Eero P. Simoncelli
We develop a method for comparing hierarchical image representations in terms of their ability to explain perceptual sensitivity in humans.
no code implementations • 23 Jan 2017 • Valero Laparra, Alex Berardino, Johannes Ballé, Eero P. Simoncelli
We develop a framework for rendering photographic images, taking into account display limitations, so as to optimize perceptual similarity between the rendered image and the original scene.
14 code implementations • 5 Nov 2016 • Johannes Ballé, Valero Laparra, Eero P. Simoncelli
We describe an image compression method, consisting of a nonlinear analysis transformation, a uniform quantizer, and a nonlinear synthesis transformation.
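A minimal sketch of that analysis / quantize / synthesis structure; the generic conv stacks stand in for the learned nonlinear transforms, and the additive-uniform-noise proxy for rounding during training is a common relaxation in this line of work:

```python
import torch
import torch.nn as nn

class NonlinearTransformCoder(nn.Module):
    """Analysis transform -> uniform scalar quantizer -> synthesis transform.
    Architecture and channel counts are illustrative placeholders."""

    def __init__(self, channels=128):
        super().__init__()
        self.analysis = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2),
        )
        self.synthesis = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 5, stride=2, padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 5, stride=2, padding=2, output_padding=1),
        )

    def forward(self, x):
        y = self.analysis(x)
        if self.training:
            y_hat = y + torch.rand_like(y) - 0.5   # uniform-noise stand-in for rounding
        else:
            y_hat = torch.round(y)                 # uniform scalar quantizer
        return self.synthesis(y_hat)
```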
no code implementations • 18 Jul 2016 • Johannes Ballé, Valero Laparra, Eero P. Simoncelli
We introduce a general framework for end-to-end optimization of the rate-distortion performance of nonlinear transform codes assuming scalar quantization.
no code implementations • 4 Jan 2016 • Elad Ganmor, Michael Krumin, Luigi F. Rossi, Matteo Carandini, Eero P. Simoncelli
This approach can be used to estimate average firing rates or tuning curves directly from the imaging data, and is sufficiently flexible to incorporate prior knowledge about tuning structure.
no code implementations • 19 Nov 2015 • Olivier J. Hénaff, Eero P. Simoncelli
We develop a new method for visualizing and refining the invariances of learned representations.
2 code implementations • 19 Nov 2015 • Johannes Ballé, Valero Laparra, Eero P. Simoncelli
The data are linearly transformed, and each component is then normalized by a pooled activity measure, computed by exponentiating a weighted sum of rectified and exponentiated components and a constant.
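A minimal sketch of that normalization computation, assuming the generalized divisive form z_i = x_i / (beta_i + sum_j gamma_ij |x_j|^alpha)^eps with scalar exponents; the default exponent values are illustrative:

```python
import numpy as np

def generalized_normalization(x, beta, gamma, alpha=2.0, eps=0.5):
    """Divisive normalization of a vector of transformed coefficients.
    x, beta: length-N arrays; gamma: N x N nonnegative weight matrix.
    Each component is divided by a pooled activity measure formed from a
    constant plus a weighted sum of rectified, exponentiated components."""
    pooled = beta + gamma @ (np.abs(x) ** alpha)
    return x / (pooled ** eps)
```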
no code implementations • 6 Jul 2015 • Neil C. Rabinowitz, Robbe L. T. Goris, Johannes Ballé, Eero P. Simoncelli
Neural responses are highly variable, and some portion of this variability arises from fluctuations in modulatory factors that alter their gain, such as adaptation, attention, arousal, expected or actual reward, emotion, and local metabolic resource availability.
no code implementations • 20 Dec 2014 • Olivier J. Hénaff, Johannes Ballé, Neil C. Rabinowitz, Eero P. Simoncelli
We develop a new statistical model for photographic images, in which the local responses of a bank of linear filters are described as jointly Gaussian, with zero mean and a covariance that varies slowly over spatial position.
no code implementations • NeurIPS 2012 • Brett Vintch, Andrew Zaharia, J Movshon, Eero P. Simoncelli
Many visual and auditory neurons have response properties that are well explained by pooling the rectified responses of a set of self-similar linear filters.
no code implementations • NeurIPS 2012 • Yan Karklin, Chaitanya Ekanadham, Eero P. Simoncelli
We develop a probabilistic generative model for representing acoustic event structure at multiple scales via a two-stage hierarchy.
no code implementations • NeurIPS 2011 • Chaitanya Ekanadham, Daniel Tranchina, Eero P. Simoncelli
Most current methods are based on clustering, which requires substantial human supervision and produces systematic errors by failing to properly handle temporally overlapping spikes.
no code implementations • NeurIPS 2010 • Deep Ganguli, Eero P. Simoncelli
Here we consider the influence of a prior probability distribution over sensory variables on the optimal allocation of cells and spikes in a neural population.
no code implementations • NeurIPS 2009 • Matthias Bethge, Eero P. Simoncelli, Fabian H. Sinz
We introduce a new family of distributions, called $L_p$-nested symmetric distributions, whose densities access the data exclusively through a hierarchical cascade of $L_p$-norms.
no code implementations • NeurIPS 2008 • Siwei Lyu, Eero P. Simoncelli
In this case, no linear transform suffices to properly decompose the signal into independent components, but we show that a simple nonlinear transformation, which we call radial Gaussianization (RG), is able to remove all dependencies.
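A minimal sketch of a radial Gaussianization of zero-mean, elliptically symmetric samples: whiten so the contours become spherical, then remap each sample's radius through the empirical radial CDF followed by the inverse chi CDF. The estimation details here (empirical CDF via ranks, Cholesky whitening) are illustrative choices:

```python
import numpy as np
from scipy import stats

def radial_gaussianize(X):
    """Radial Gaussianization of data X with shape (n_samples, dim),
    assumed zero-mean and elliptically symmetric."""
    # Whiten so the elliptical contours become spherical.
    cov = np.cov(X, rowvar=False)
    W = np.linalg.inv(np.linalg.cholesky(cov)).T
    Z = X @ W
    r = np.linalg.norm(Z, axis=1)
    # Map radii through their empirical CDF, then through the inverse chi CDF,
    # so the radial distribution matches that of an isotropic Gaussian.
    ranks = (stats.rankdata(r) - 0.5) / len(r)
    r_new = stats.chi.ppf(ranks, df=X.shape[1])
    return Z * (r_new / r)[:, None]
```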
no code implementations • NeurIPS 2007 • Alan A. Stocker, Eero P. Simoncelli
We propose an extended probabilistic model for human perception.
no code implementations • NeurIPS 2001 • Odelia Schwartz, E.J. Chichilnisky, Eero P. Simoncelli
Spike-triggered averaging techniques are effective for linear characterization of neural responses.