no code implementations • 13 Feb 2023 • Yena Han, Tomaso Poggio, Brian Cheung
The networks are compared to recordings of biological neurons, and good performance in reproducing neural responses is considered to support the model's validity.
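A common way to quantify "good performance in reproducing neural responses" is to fit a regularized linear map from model activations to recorded responses and score held-out predictions by correlation. The sketch below illustrates that procedure on synthetic data; all array names, sizes, and the ridge penalty are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: model-unit activations and neural responses to the
# same stimuli (shapes and noise level are made up for illustration).
n_stim, n_units, n_neurons = 100, 50, 10
model_acts = rng.standard_normal((n_stim, n_units))
true_map = rng.standard_normal((n_units, n_neurons)) / np.sqrt(n_units)
neural = model_acts @ true_map + 0.3 * rng.standard_normal((n_stim, n_neurons))

# Ridge-regress neural responses on model features, then score held-out
# predictions by Pearson correlation per neuron.
train, test = slice(0, 80), slice(80, 100)
Xtr, Xte = model_acts[train], model_acts[test]
lam = 1.0
B = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_units), Xtr.T @ neural[train])
pred = Xte @ B

r = [np.corrcoef(pred[:, i], neural[test][:, i])[0, 1] for i in range(n_neurons)]
print(np.mean(r))  # higher mean correlation = better reproduction of responses
```

The held-out split matters: an unregularized in-sample fit would overstate how well the model explains the neurons.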
no code implementations • 22 Dec 2021 • Ileana Rugina, Rumen Dangovski, Mark Veillette, Pooya Khorrami, Brian Cheung, Olga Simek, Marin Soljačić
In recent years, emerging fields such as meta-learning and self-supervised learning have been closing the gap between proof-of-concept results and real-life applications of machine learning by extending deep learning to the semi-supervised and few-shot domains.
2 code implementations • 28 Oct 2021 • Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljačić
In state-of-the-art self-supervised learning (SSL), pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed from human knowledge.
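The invariance objective mentioned here can be illustrated with a toy loss that pushes two augmented views of an input toward the same embedding. The encoder, augmentation, and negative-cosine loss below are hypothetical SimSiam/BYOL-style stand-ins, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    # Hypothetical one-layer encoder standing in for a deep network.
    return np.tanh(W @ x)

def augment(x):
    # A "meaningful transformation": small additive jitter of the input.
    return x + 0.05 * rng.standard_normal(x.shape)

def invariance_loss(x, W):
    # Negative cosine similarity between two augmented views: minimizing
    # this drives the encoder toward transformation-invariant embeddings.
    za, zb = encoder(augment(x), W), encoder(augment(x), W)
    return -za @ zb / (np.linalg.norm(za) * np.linalg.norm(zb))

W = rng.standard_normal((16, 32)) * 0.1
x = rng.standard_normal(32)
loss = invariance_loss(x, W)
print(loss)  # -1 would mean perfectly invariant embeddings
```

A full SSL pipeline would minimize this term over many inputs and augmentations while preventing representational collapse (e.g., with a stop-gradient or a contrastive term).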
no code implementations • ICLR 2022 • Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljacic
In state-of-the-art self-supervised learning (SSL), pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed from human knowledge.
no code implementations • 15 Jul 2021 • Jiayun Wang, Yubei Chen, Stella X. Yu, Brian Cheung, Yann LeCun
We propose a drastically different approach to compact and optimal deep learning: we decouple the degrees of freedom (DoF) from the actual number of parameters of a model, optimizing a small DoF with predefined random linear constraints for a large model of arbitrary architecture in one-stage, end-to-end learning.
Ranked #95 on Image Classification on ObjectNet (using extra training data)
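The idea of optimizing a small number of degrees of freedom under fixed random linear constraints can be sketched as follows; the projection, dimensions, and toy least-squares objective are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# The full model has D parameters, but we optimize only d << D degrees
# of freedom through a fixed random linear constraint.
D, d = 1000, 20
P = rng.standard_normal((D, d)) / np.sqrt(d)   # fixed, never trained
theta_true = rng.standard_normal(d)
target = P @ theta_true                        # a target reachable in the subspace

theta = np.zeros(d)                            # the small DoF actually optimized
lr = 0.01
for _ in range(200):
    w = P @ theta                              # D model parameters from d DoF
    grad_w = w - target                        # grad of 0.5*||w - target||^2 wrt w
    theta -= lr * (P.T @ grad_w)               # chain rule through fixed P

loss = 0.5 * np.sum((P @ theta - target) ** 2)
print(loss)  # near zero: the d-dimensional subspace suffices for this target
```

Only the d-dimensional `theta` ever receives gradients; the D-dimensional parameter vector is always a deterministic function of it.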
1 code implementation • 18 Mar 2021 • Minyoung Huh, Hossein Mobahi, Richard Zhang, Brian Cheung, Pulkit Agrawal, Phillip Isola
We show empirically that our claim holds for finite-width linear and non-linear models under practical learning paradigms, and that on natural data these are often the solutions that generalize well.
1 code implementation • ICML 2020 • Jesse Zhang, Brian Cheung, Chelsea Finn, Sergey Levine, Dinesh Jayaraman
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous, imperiling the RL agent, other agents, and the environment.
no code implementations • 9 Oct 2019 • Juexiao Zhang, Yubei Chen, Brian Cheung, Bruno A. Olshausen
Word embedding techniques based on co-occurrence statistics have proved very useful for extracting the semantic and syntactic representations of words as low-dimensional continuous vectors.
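A minimal version of co-occurrence-based embeddings: count co-occurrences within a context window, then factor the log-smoothed count matrix with a truncated SVD. The corpus, window size, and embedding dimension below are toy choices, not the paper's.

```python
import numpy as np

# Tiny corpus; real methods use large corpora and weighted counts.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat and a dog played",
]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2-word window.
C = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if j != i:
                C[idx[w], idx[sent[j]]] += 1

# Low-dimensional embeddings from a truncated SVD of log-smoothed counts.
U, S, _ = np.linalg.svd(np.log1p(C))
emb = U[:, :3] * S[:3]          # 3-dimensional word vectors

def sim(a, b):
    va, vb = emb[idx[a]], emb[idx[b]]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

# Words that appear in similar contexts tend to receive similar vectors.
print(sim("cat", "dog"))
```

Methods like GloVe and PPMI-SVD are more elaborate variants of this count-then-factorize recipe.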
no code implementations • 25 Sep 2019 • Jesse Zhang, Brian Cheung, Chelsea Finn, Dinesh Jayaraman, Sergey Levine
We study the problem of safe adaptation: given a model trained on a variety of past experiences for some task, can this model learn to perform that task in a new situation while avoiding catastrophic failure?
no code implementations • ICLR 2019 • Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein
Here, our desired task (meta-objective) is the performance of the representation on semi-supervised classification, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations that perform well under this meta-objective.
1 code implementation • NeurIPS 2019 • Brian Cheung, Alex Terekhov, Yubei Chen, Pulkit Agrawal, Bruno Olshausen
We present a method for storing multiple models within a single set of parameters.
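The storage/retrieval arithmetic behind superposing several models in one parameter vector can be sketched with random binary context keys. The paper trains networks in superposition; this toy only shows that re-applying a key recovers its own model as signal plus zero-mean crosstalk from the others.

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 10000, 5                              # parameter dim, number of models

models = rng.standard_normal((K, D))         # K independent parameter vectors
keys = rng.choice([-1.0, 1.0], size=(K, D))  # random binary context keys

# Superpose: a single storage vector holds all K models at once.
store = (keys * models).sum(axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Re-applying key k makes model k the signal; the other K-1 models appear
# as zero-mean crosstalk whose relative size shrinks as D grows.
right = cosine(keys[2] * store, models[2])
wrong = cosine(keys[0] * store, models[2])
print(right, wrong)  # the matching key aligns far better than a mismatched one
```

The crosstalk is tolerable because it behaves like zero-mean noise on the retrieved parameters, which training can absorb.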
2 code implementations • ICLR 2019 • Luke Metz, Niru Maheswaranathan, Brian Cheung, Jascha Sohl-Dickstein
Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.
1 code implementation • 23 Mar 2018 • Shariq Mobin, Brian Cheung, Bruno Olshausen
Recent work has shown that recurrent neural networks can be trained to separate individual speakers in a sound mixture with high fidelity.
no code implementations • NeurIPS 2018 • Gamaleldin F. Elsayed, Shreya Shankar, Brian Cheung, Nicolas Papernot, Alex Kurakin, Ian Goodfellow, Jascha Sohl-Dickstein
Machine learning models are vulnerable to adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich.
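A classic construction of such adversarial examples is the fast gradient sign method (FGSM): perturb every input dimension by a small amount in the direction of the loss gradient's sign. The linear "classifier" below is a toy stand-in, not a vision model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier: predict sign(w @ x). Weights and input are
# hypothetical stand-ins for a trained model and an image.
d = 784
w = rng.standard_normal(d) / np.sqrt(d)
x = rng.standard_normal(d)
x = x * np.sign(w @ x)          # ensure the clean input is classified +1

# FGSM step: move each coordinate by eps against the score. Here eps is
# chosen just large enough to cross the boundary, to show how small the
# required per-pixel change is.
margin = w @ x
eps = 1.1 * margin / np.abs(w).sum()
x_adv = x - eps * np.sign(w)    # gradient of -score w.r.t. x is -w

print(margin, w @ x_adv, eps)   # a tiny per-pixel change flips the decision
```

Because the perturbation aligns with the gradient in every dimension at once, its effect on the score is large even though each coordinate barely moves.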
no code implementations • 28 Nov 2016 • Brian Cheung, Eric Weiss, Bruno Olshausen
We describe a neural attention model with a learnable retinal sampling lattice.
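One differentiable way to realize a learnable sampling lattice is a bank of Gaussian kernels whose centers and widths are parameters; the sketch below is an assumed simplification, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# A retina-like sampling lattice: each "receptive field" is a Gaussian
# kernel with its own center and width over the image plane.
H = W = 32
image = rng.random((H, W))
yy, xx = np.mgrid[0:H, 0:W]

n_kernels = 16
centers = rng.uniform(8, 24, size=(n_kernels, 2))  # learnable in the model
widths = rng.uniform(1.0, 3.0, size=n_kernels)     # learnable in the model

def sample(image, centers, widths):
    # Each output is a Gaussian-weighted average of image pixels, so the
    # lattice is differentiable w.r.t. both centers and widths.
    out = np.empty(len(centers))
    for k, ((cy, cx), s) in enumerate(zip(centers, widths)):
        w = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * s * s))
        out[k] = (w * image).sum() / w.sum()
    return out

glimpse = sample(image, centers, widths)
print(glimpse.shape)  # (16,) -- a low-dimensional glimpse of the image
```

Because the glimpse is differentiable in the lattice parameters, gradient descent can reshape where and how finely the model samples, which is the "learnable" part.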
no code implementations • arXiv 2015 • Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, Bruno A. Olshausen
Deep learning has enjoyed a great deal of success because of its ability to learn useful features for tasks such as classification.
1 code implementation • 20 Dec 2014 • Brian Cheung, Jesse A. Livezey, Arjun K. Bansal, Bruno A. Olshausen
Deep learning has enjoyed a great deal of success because of its ability to learn useful features for tasks such as classification.
no code implementations • 31 Jul 2013 • Bryan R. Conroy, Jennifer M. Walz, Brian Cheung, Paul Sajda
We present an efficient algorithm for simultaneously training sparse generalized linear models across many related problems, which may arise from bootstrapping, cross-validation and nonparametric permutation testing.
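When the related problems share a design matrix (as in permutation testing), the Gram matrix can be precomputed once and reused by every coordinate-descent lasso fit. The sketch below illustrates that sharing with a toy Gaussian sparse linear model; it is an assumed simplification, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, B = 200, 30, 20                 # samples, features, related problems

X = rng.standard_normal((n, d))
w_true = np.zeros(d); w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(n)

# Related problems from nonparametric permutation testing: each permutes y.
Ys = np.stack([y] + [rng.permutation(y) for _ in range(B - 1)])

# The design matrix is shared across problems, so the Gram matrix is
# computed once and reused by all B lasso fits.
G = X.T @ X
diag = np.diag(G)
lam = 20.0

def lasso_cd(Xty, n_iter=100):
    # Coordinate descent with soft-thresholding, using the shared Gram.
    w = np.zeros(d)
    for _ in range(n_iter):
        for j in range(d):
            r = Xty[j] - G[j] @ w + diag[j] * w[j]
            w[j] = np.sign(r) * max(abs(r) - lam, 0.0) / diag[j]
    return w

sols = [lasso_cd(X.T @ yb) for yb in Ys]
# The unpermuted problem recovers the planted sparse support {0, 1, 2};
# the permuted replicates serve as a null distribution.
print(np.nonzero(np.abs(sols[0]) > 0.1)[0])
```

The per-problem work reduces to cheap vector operations against the shared Gram matrix, which is the kind of amortization that makes fitting thousands of bootstrap or permutation replicates practical.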