no code implementations • 30 Nov 2023 • Jonathan Huml, Abiy Tasissa, Demba Ba
We propose an autoencoder architecture (WLSC) whose latent representations are implicitly, locally organized for spectral clustering through a Laplacian quadratic form of a bipartite graph. The architecture generates a diverse set of artificial receptive fields that match primate V1 data as faithfully as recent contrastive frameworks, such as Local Low Dimensionality (LLD) \citep{lld}, that discard sparse dictionary learning.
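The abstract does not give implementation details, but the core idea it names, penalizing latent codes with a Laplacian quadratic form, can be illustrated with a minimal sketch. Everything below (the `encoder`/`decoder` callables, the weight `lam`, and the way the bipartite graph is supplied as a biadjacency matrix) is an assumption for illustration, not the authors' WLSC code.

```python
# Minimal, illustrative sketch (not the authors' WLSC implementation):
# an autoencoder objective augmented with a Laplacian quadratic form
# tr(Z^T L Z) on the latent codes, where L is the Laplacian of an assumed
# bipartite graph over the coded items.
import torch
import torch.nn.functional as F

def laplacian_from_biadjacency(B: torch.Tensor) -> torch.Tensor:
    """Laplacian of a bipartite graph with biadjacency matrix B (m x n)."""
    m, n = B.shape
    A = torch.zeros(m + n, m + n, dtype=B.dtype)
    A[:m, m:] = B
    A[m:, :m] = B.t()
    return torch.diag(A.sum(dim=1)) - A

def laplacian_regularized_loss(x, encoder, decoder, L, lam=0.1):
    """Reconstruction loss plus a Laplacian quadratic form on latents Z.

    tr(Z^T L Z) = 0.5 * sum_ij A_ij ||z_i - z_j||^2, so codes that share
    an edge in the graph are pulled together -- the local organization
    for spectral clustering described in the abstract.
    """
    z = encoder(x)                      # one latent row per graph node
    recon = F.mse_loss(decoder(z), x)   # standard autoencoder data term
    smooth = torch.trace(z.t() @ L @ z) # Laplacian quadratic form
    return recon + lam * smooth
```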
no code implementations • 22 Feb 2023 • Jonathan Huml, Abiy Tasissa, Demba Ba
The classical sparse coding model represents visual stimuli as a linear combination of a handful of learned basis functions that are Gabor-like when trained on natural image data.
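For context, the classical sparse coding model referred to here represents a stimulus x as D @ a with a sparse code a over a learned dictionary D whose columns become Gabor-like on natural images. The sketch below infers such a code with ISTA (iterative soft-thresholding); the step size, sparsity weight, and function names are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of classical sparse coding inference via ISTA:
# minimize 0.5*||x - D a||^2 + lam*||a||_1 over the code a.
import numpy as np

def soft_threshold(v: np.ndarray, t: float) -> np.ndarray:
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_sparse_code(x, D, lam=0.1, n_iter=200):
    """Infer a sparse code a for stimulus x given dictionary D."""
    step = 1.0 / np.linalg.norm(D, ord=2) ** 2   # 1 / Lipschitz constant of the data term
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                 # gradient of 0.5*||x - D a||^2
        a = soft_threshold(a - step * grad, step * lam)
    return a
```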