no code implementations • 6 Oct 2023 • Zeyu Yun, Juexiao Zhang, Bruno Olshausen, Yann LeCun, Yubei Chen
Unsupervised representation learning has seen tremendous progress, but it remains constrained by its reliance on the modality-specific stationarity and topology of data, a limitation not found in biological intelligence systems.
no code implementations • 31 Mar 2023 • Marissa Connor, Bruno Olshausen, Christopher Rozell
When interacting in a three-dimensional world, humans must estimate 3D structure from visual inputs projected down to two-dimensional retinal images.
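The core difficulty is that perspective projection discards depth, so many 3D scenes map to the same retinal image. A minimal numpy sketch of the pinhole model makes this ambiguity concrete (the function name and focal length are illustrative):

```python
import numpy as np

def project(points_3d, focal=1.0):
    """Pinhole projection: a 3D point (X, Y, Z) in camera coordinates maps
    to the 2D retinal/image point (f*X/Z, f*Y/Z), discarding depth."""
    X, Y, Z = points_3d.T
    return np.stack([focal * X / Z, focal * Y / Z], axis=1)

# Two points at different depths land on the same retinal location,
# which is why 3D structure must be inferred rather than read off directly.
near = np.array([[0.5, 0.5, 1.0]])
far = np.array([[1.0, 1.0, 2.0]])
print(project(near), project(far))  # identical 2D projections
```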
no code implementations • 20 Dec 2022 • Evgenya Pergament, Pulkit Tandon, Oren Rippel, Lubomir Bourdev, Alexander G. Anderson, Bruno Olshausen, Tsachy Weissman, Sachin Katti, Kedar Tatwawadi
The contributions of this work are threefold: (1) we introduce a web tool which allows scalable collection of fine-grained perceptual importance by having users interactively paint spatio-temporal maps over encoded videos; (2) we use this tool to collect a dataset of 178 videos, totaling 14,443 frames, with human-annotated spatio-temporal importance maps; and (3) we use our curated dataset to train a lightweight machine learning model which can predict these spatio-temporal importance regions.
no code implementations • 15 Oct 2022 • Anthony Zador, Sean Escola, Blake Richards, Bence Ölveczky, Yoshua Bengio, Kwabena Boahen, Matthew Botvinick, Dmitri Chklovskii, Anne Churchland, Claudia Clopath, James DiCarlo, Surya Ganguli, Jeff Hawkins, Konrad Koerding, Alexei Koulakov, Yann LeCun, Timothy Lillicrap, Adam Marblestone, Bruno Olshausen, Alexandre Pouget, Cristina Savin, Terrence Sejnowski, Eero Simoncelli, Sara Solla, David Sussillo, Andreas S. Tolias, Doris Tsao
Neuroscience has long been an essential driver of progress in artificial intelligence (AI).
no code implementations • 30 Sep 2022 • Yubei Chen, Zeyu Yun, Yi Ma, Bruno Olshausen, Yann LeCun
Though a small performance gap remains between our simple constructive model and state-of-the-art (SOTA) methods, the evidence points to this as a promising direction for achieving a principled and white-box approach to unsupervised learning.
Ranked #1 on Unsupervised MNIST (MNIST dataset)
1 code implementation • 7 Sep 2022 • Sophia Sanborn, Christian Shewmake, Bruno Olshausen, Christopher Hillar
We present a neural network architecture, Bispectral Neural Networks (BNNs), for learning representations that are invariant to the actions of compact commutative groups on the space over which a signal is defined.
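For intuition, the classical analytically defined bispectrum for the cyclic translation group — the kind of invariant these networks are designed to learn and generalize — can be computed and checked in a few lines of numpy (a reference sketch, not the paper's learned model):

```python
import numpy as np

def bispectrum(x):
    """Translation bispectrum of a 1D signal on the cyclic group Z_n.

    B[k1, k2] = F[k1] * F[k2] * conj(F[(k1 + k2) mod n]) is invariant to
    circular shifts of x, yet (generically) complete up to such shifts.
    """
    F = np.fft.fft(x)
    n = len(x)
    k = np.arange(n)
    return F[:, None] * F[None, :] * np.conj(F[(k[:, None] + k[None, :]) % n])

rng = np.random.default_rng(0)
x = rng.standard_normal(16)
shifted = np.roll(x, 5)
assert np.allclose(bispectrum(x), bispectrum(shifted))  # shift invariance holds
```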
1 code implementation • 8 May 2022 • Evgenya Pergament, Pulkit Tandon, Kedar Tatwawadi, Oren Rippel, Lubomir Bourdev, Bruno Olshausen, Tsachy Weissman, Sachin Katti, Alexander G. Anderson
We use this tool to collect data in the wild (10 videos, 17 users) and utilize the obtained importance maps in the context of x264 coding to demonstrate that the tool can indeed be used to generate videos which, at the same bitrate, look perceptually better: in a subjective study they were 1.9 times more likely to be preferred by viewers.
1 code implementation • 11 Dec 2020 • Ho Yin Chau, Frank Qiu, Yubei Chen, Bruno Olshausen
Discrete spatial patterns and their continuous transformations are two important regularities contained in natural signals.
1 code implementation • 30 Sep 2020 • Hong-Ye Hu, Dian Wu, Yi-Zhuang You, Bruno Olshausen, Yubei Chen
In this work, we incorporate the key ideas of the renormalization group (RG) and sparse prior distributions to design a hierarchical flow-based generative model, RG-Flow, which can separate information at different scales of images and extract disentangled representations at each scale.
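The multi-scale bookkeeping behind such models can be sketched structurally: squeeze space into channels, split off part of the representation as that scale's latent variables, and recurse on the remainder. The sketch below shows only this skeleton; RG-Flow's invertible coupling layers and sparse prior are omitted:

```python
import numpy as np

def squeeze(x):
    """Trade spatial resolution for channels: (C, H, W) -> (4C, H/2, W/2)."""
    C, H, W = x.shape
    x = x.reshape(C, H // 2, 2, W // 2, 2)
    return x.transpose(0, 2, 4, 1, 3).reshape(4 * C, H // 2, W // 2)

def encode(x, num_scales=3):
    """Hierarchical encoder in the multi-scale flow style: at each scale,
    squeeze, then factor out half the channels as that scale's latents
    and keep processing the rest at coarser resolution."""
    latents = []
    for _ in range(num_scales):
        x = squeeze(x)  # a real flow would also apply invertible couplings here
        z, x = x[: x.shape[0] // 2], x[x.shape[0] // 2:]
        latents.append(z)          # fine-scale information factored out early
    latents.append(x)              # coarsest-scale remainder
    return latents

image = np.random.rand(1, 32, 32)
for i, z in enumerate(encode(image)):
    print(f"scale {i}: latent shape {z.shape}")
```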
2 code implementations • ICLR 2021 • Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, Trevor Darrell
A model must adapt itself to generalize to new and different data during testing.
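Concretely, the paper adapts at test time by minimizing the entropy of the model's own predictions on each unlabeled test batch, updating only the normalization layers' parameters. A minimal PyTorch sketch of that loop (the toy network and hyperparameters are placeholders):

```python
import torch
import torch.nn as nn

def prediction_entropy(logits):
    # Mean Shannon entropy of the model's softmax predictions over the batch.
    logp = logits.log_softmax(dim=1)
    return -(logp.exp() * logp).sum(dim=1).mean()

def adapt_on_batch(model, x, optimizer):
    """One test-time adaptation step: with no labels, make the model's own
    predictions on the test batch more confident by minimizing their entropy."""
    loss = prediction_entropy(model(x))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy classifier; adaptation updates only the batch-norm parameters,
# keeping the rest of the network fixed.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
norm_params = [p for m in model.modules()
               if isinstance(m, nn.BatchNorm2d) for p in m.parameters()]
optimizer = torch.optim.SGD(norm_params, lr=1e-3)
adapt_on_batch(model, torch.randn(32, 3, 32, 32), optimizer)
```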
no code implementations • 8 Aug 2019 • Dequan Wang, Evan Shelhamer, Bruno Olshausen, Trevor Darrell
Given the variety of the visual world, there is not one true scale for recognition: objects may appear at drastically different sizes across the visual field.
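A classical way to expose a fixed-size recognizer to many scales — a baseline for the problem described here, not this paper's method — is a Gaussian image pyramid; a brief sketch using scipy:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def pyramid(image, levels=4):
    """Gaussian pyramid: blur-and-subsample copies of the image so that a
    fixed receptive field can cover objects at multiple scales."""
    out = [image]
    for _ in range(levels - 1):
        out.append(zoom(gaussian_filter(out[-1], sigma=1.0), 0.5))
    return out

for lvl, im in enumerate(pyramid(np.random.rand(64, 64))):
    print(f"level {lvl}: {im.shape}")
```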
no code implementations • 26 May 2019 • Shariq Mobin, Bruno Olshausen
Our Attentional Gating Network (AGN) uses a variable attentional context to specify which speakers in the mixture are of interest.
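The general mechanic of such context-dependent selection can be sketched as multiplicative gating of mixture features by a context embedding (a generic illustration; the paper's actual architecture is not reproduced here):

```python
import torch

def attentional_gate(features, context):
    """Multiplicative gating by an attentional context: the context vector
    decides which feature channels pass through, e.g. channels associated
    with the speakers of interest."""
    gate = torch.sigmoid(context)           # (B, C) gates in (0, 1)
    return features * gate.unsqueeze(1)     # (B, T, C) gated features

feats = torch.rand(2, 100, 32)              # mixture features over time
ctx = torch.randn(2, 32)                    # context embedding per query
gated = attentional_gate(feats, ctx)
```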
1 code implementation • NeurIPS 2019 • Brian Cheung, Alex Terekhov, Yubei Chen, Pulkit Agrawal, Bruno Olshausen
We present a method for storing multiple models within a single set of parameters.
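The underlying trick can be illustrated with random-sign context keys: bind each model's parameters to its key, sum everything into one parameter vector, and unbind with the right key to recover the target model plus zero-mean crosstalk from the others. A minimal numpy sketch (the sizes and keys are illustrative, and the paper develops this in a full training setting):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4096   # number of parameters per model
K = 10     # number of models stored in superposition

# Hypothetical task-specific parameter vectors we want to store.
models = [rng.standard_normal(d) for _ in range(K)]

# One random +/-1 context key per model (self-inverse under elementwise product).
keys = [rng.choice([-1.0, 1.0], size=d) for _ in range(K)]

# Superpose: bind each model to its key and sum into one parameter vector.
store = sum(w * c for w, c in zip(models, keys))

# Unbinding with the matching key recovers the target model; the other
# K-1 models contribute only zero-mean crosstalk. A wrong key recovers noise.
right = store * keys[3]
wrong = store * keys[7]
cos = lambda a, b: np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos(right, models[3]), cos(wrong, models[3]))  # ~1/sqrt(K) vs ~0
```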
1 code implementation • 23 Mar 2018 • Shariq Mobin, Brian Cheung, Bruno Olshausen
Recent work has shown that recurrent neural networks can be trained to separate individual speakers in a sound mixture with high fidelity.
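A common recipe for such recurrent separators — shown here as a generic sketch, not the paper's exact model — is to have a bidirectional RNN predict one soft time-frequency mask per speaker and apply the masks to the mixture spectrogram:

```python
import torch
import torch.nn as nn

class MaskSeparator(nn.Module):
    """Generic recurrent mask-based separator: a BiLSTM over mixture
    spectrogram frames predicts one soft mask per source; masking the
    mixture magnitude yields the separated source estimates."""
    def __init__(self, n_freq=129, n_src=2, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_freq, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_src * n_freq)
        self.n_src, self.n_freq = n_src, n_freq

    def forward(self, mix_mag):                  # (B, T, F) magnitude spectrogram
        h, _ = self.rnn(mix_mag)
        masks = self.head(h).view(*mix_mag.shape[:2], self.n_src, self.n_freq)
        masks = masks.softmax(dim=2)             # per-bin masks sum to 1 over sources
        return masks * mix_mag.unsqueeze(2)      # (B, T, n_src, F) source estimates

sep = MaskSeparator()
estimates = sep(torch.rand(4, 100, 129))         # batch of 4 mixtures
```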
no code implementations • 28 Nov 2016 • Brian Cheung, Eric Weiss, Bruno Olshausen
We describe a neural attention model with a learnable retinal sampling lattice.
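One way to realize a retinal sampling lattice with learnable parameters is a bank of Gaussian kernels whose centers and widths are trained by gradient descent, in the spirit of this model; a minimal PyTorch sketch (shapes and names are illustrative):

```python
import torch

def gaussian_lattice_sample(image, centers, log_sigmas):
    """Sample a 1-channel image (H, W) with a lattice of Gaussian kernels.

    centers:    (K, 2) kernel centers in [0, 1] x [0, 1]  (learnable)
    log_sigmas: (K,)   log of kernel widths               (learnable)
    Returns a (K,) vector of weighted image averages, one per kernel.
    """
    H, W = image.shape
    yy, xx = torch.meshgrid(torch.linspace(0, 1, H),
                            torch.linspace(0, 1, W), indexing="ij")
    sig = log_sigmas.exp()[:, None, None]                          # (K, 1, 1)
    d2 = ((yy - centers[:, 0, None, None]) ** 2
          + (xx - centers[:, 1, None, None]) ** 2)                 # (K, H, W)
    kernels = torch.exp(-d2 / (2 * sig ** 2))
    kernels = kernels / kernels.sum(dim=(1, 2), keepdim=True)      # normalize
    return (kernels * image).sum(dim=(1, 2))

K = 64
centers = torch.nn.Parameter(torch.rand(K, 2))   # positions adapt by gradient descent
log_sigmas = torch.nn.Parameter(torch.zeros(K))  # widths adapt too
glimpse = gaussian_lattice_sample(torch.rand(32, 32), centers, log_sigmas)
```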