no code implementations • 12 Nov 2024 • Binxu Wang, Jiaqi Shang, Haim Sompolinsky
We evaluated their ability to generate structurally consistent samples and perform panel completion via unconditional and conditional sampling.
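As an illustration of what panel completion by conditional sampling can look like, here is a minimal, hypothetical sketch that re-injects the observed panels at each reverse-diffusion step; the `model` interface, the `alpha_bar` schedule, and the DDIM-style update are placeholders, not the paper's implementation.

```python
# Hypothetical inpainting-style conditional sampling: mask==1 marks the unknown
# (to-be-completed) panel; observed pixels are clamped at each noise level.
import torch

@torch.no_grad()
def complete_panel(model, x_known, mask, alpha_bar, n_steps=1000):
    x = torch.randn_like(x_known)                       # start from pure noise
    for t in reversed(range(n_steps)):
        ab_t = alpha_bar[t]
        # Re-inject the known pixels at the current noise level.
        noisy_known = ab_t.sqrt() * x_known + (1 - ab_t).sqrt() * torch.randn_like(x_known)
        x = mask * x + (1 - mask) * noisy_known
        # One reverse step on the full image (epsilon-prediction, DDIM-style).
        eps = model(x, t)                               # hypothetical model interface
        x0_hat = (x - (1 - ab_t).sqrt() * eps) / ab_t.sqrt()
        ab_prev = alpha_bar[t - 1] if t > 0 else torch.tensor(1.0)
        x = ab_prev.sqrt() * x0_hat + (1 - ab_prev).sqrt() * eps
    return mask * x + (1 - mask) * x_known              # keep observed pixels exact
```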
no code implementations • 17 Nov 2023 • Binxu Wang, John J. Vastola
We claimed that for well-trained diffusion models, the learned score at high noise scales is well approximated by the linear score of a Gaussian.
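For concreteness, a small sketch of what this linear (Gaussian) score looks like: for data with mean mu and covariance Sigma, the score of the sigma-smoothed density is -(Sigma + sigma^2 I)^{-1}(x - mu), which is linear in x. The toy values below are illustrative only.

```python
# Sketch: score of a Gaussian N(mu, Sigma) convolved with noise of scale sigma,
# s(x) = -(Sigma + sigma^2 I)^{-1} (x - mu); mu/Sigma stand in for the data
# mean and covariance.
import numpy as np

def gaussian_score(x, mu, Sigma, sigma):
    cov = Sigma + sigma**2 * np.eye(mu.shape[0])
    return -np.linalg.solve(cov, x - mu)

rng = np.random.default_rng(0)
mu, Sigma = np.zeros(4), np.eye(4)
x = rng.normal(size=4)
print(gaussian_score(x, mu, Sigma, sigma=10.0))   # ~ -x / 101 at this noise scale
```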
no code implementations • 4 Mar 2023 • Binxu Wang, John J. Vastola
How do diffusion generative models convert pure noise into meaningful images?
no code implementations • 26 Dec 2022 • Binxu Wang, Carlos R. Ponce
This shows the power of level sets as a conceptual tool to understand neuronal activations over image space.
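One way to make the level-set idea concrete (a simplification, not the paper's procedure) is to bisect along a direction from a high-activation image until the activation drops to a target level, assuming the activation decreases monotonically along that ray.

```python
# Sketch: find a point on the level set {x : f(x) = target} by bisecting along
# direction d from a high-activation image x0; assumes f(x0) > target >= f(x0 + t_max*d)
# and monotone decrease along the ray (an illustrative simplification).
import numpy as np

def level_set_point(f, x0, d, target, t_max=10.0, iters=40):
    lo, hi = 0.0, t_max
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(x0 + mid * d) > target:
            lo = mid
        else:
            hi = mid
    return x0 + 0.5 * (lo + hi) * d
```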
1 code implementation • 14 Apr 2022 • Binxu Wang, Carlos R. Ponce
To find these patterns, we have used black-box optimizers to search a 4096-dimensional image space, leading to the evolution of images that maximize neuronal responses.
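A drastically simplified version of such a black-box search is sketched below as a toy evolution-strategy loop; `generator` and `neuron_response` are hypothetical stand-ins for the image generator and the recorded unit, and this loop is not the specific optimizer used in the paper.

```python
# Toy evolution strategy over a 4096-d latent code: sample around the current
# center, score the decoded images with the neuron, keep the elites, recenter.
import numpy as np

def evolve_codes(generator, neuron_response, dim=4096, pop=40, n_gen=100, sigma=3.0):
    rng = np.random.default_rng(0)
    mean = np.zeros(dim)                                # current search center
    for _ in range(n_gen):
        codes = mean + sigma * rng.normal(size=(pop, dim))
        scores = np.array([neuron_response(generator(z)) for z in codes])
        elite = codes[np.argsort(scores)[-pop // 4:]]   # keep top 25%
        mean = elite.mean(axis=0)                       # move center toward elites
        sigma *= 0.98                                   # slowly anneal step size
    return generator(mean), mean
```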
1 code implementation • NeurIPS Workshop SVRHM 2021 • Binxu Wang, David Mayo, Arturo Deza, Andrei Barbu, Colin Conwell
Critically, we find that random cropping can be replaced by cortical magnification, and that saccade-like sampling of the image can also assist representation learning.
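A rough sketch of one possible foveating augmentation is below: warp the image with an exponential radial map around a fixation point so the fovea is magnified and the periphery compressed. The `foveate` function and its parameters are illustrative and do not reproduce the paper's cortical-magnification transform; sampling `fix` uniformly per view would give a crude form of saccade-like sampling.

```python
# Illustrative foveation warp: magnify the region around a fixation point,
# compress the periphery (a stand-in for cortical magnification, not the
# paper's exact transform).
import math
import torch
import torch.nn.functional as F

def foveate(img, fix=(0.0, 0.0), a=0.5, out_size=224):
    """img: (C, H, W) float tensor; fix: fixation point in [-1, 1] coordinates."""
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, out_size),
                            torch.linspace(-1, 1, out_size), indexing="ij")
    dx, dy = xs - fix[0], ys - fix[1]
    r = torch.sqrt(dx ** 2 + dy ** 2) + 1e-8
    r_max = math.sqrt(2.0)
    # Exponential radial map: magnifies the fovea, compresses the periphery.
    r_src = r_max * (torch.exp(r / a) - 1) / (math.exp(r_max / a) - 1)
    grid = torch.stack([fix[0] + dx / r * r_src,
                        fix[1] + dy / r * r_src], dim=-1)[None]
    return F.grid_sample(img[None], grid, mode="bilinear",
                         padding_mode="border", align_corners=False)[0]
```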
1 code implementation • 15 Jan 2021 • Binxu Wang, Carlos R. Ponce
Our results illustrate that defining the geometry of the GAN image manifold can serve as a general framework for understanding GANs.
1 code implementation • ICLR 2021 • Binxu Wang, Carlos R. Ponce
We show that the use of this metric allows for more efficient optimization in the latent space (e.g., GAN inversion) and facilitates unsupervised discovery of interpretable axes.
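One common way to construct such a latent-space metric (a sketch assuming a plain pixel-space image metric, which may differ from the paper's perceptual construction) is to pull back the image metric through the generator Jacobian, M(z) = J(z)^T J(z), and read candidate interpretable axes off its top eigenvectors.

```python
# Sketch: pulled-back metric on the latent space of a toy generator; the top
# eigenvectors of M(z) are the latent directions that change the image most.
import torch
from torch.autograd.functional import jacobian

G = torch.nn.Sequential(torch.nn.Linear(8, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 3 * 16 * 16))   # toy generator
z0 = torch.randn(8)

J = jacobian(lambda z: G(z), z0)         # (3*16*16, 8) Jacobian at z0
M = J.T @ J                              # pulled-back (Riemannian) metric tensor
eigvals, eigvecs = torch.linalg.eigh(M)  # ascending eigenvalues
top_axes = eigvecs[:, -3:]               # most image-changing latent directions
print(eigvals[-3:])
```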