no code implementations • 11 Jun 2024 • Max Dabagia, Daniel Mitropolsky, Christos H. Papadimitriou, Santosh S. Vempala
How intelligence arises from the brain is a central problem in science.
no code implementations • 6 Jun 2023 • Max Dabagia, Christos H. Papadimitriou, Santosh S. Vempala
Here we show that, in the same model, time can be captured naturally as precedence through synaptic weights and plasticity, and, as a result, a range of computations on sequences of assemblies can be carried out.
1 code implementation • 10 Jun 2022 • Ran Liu, Mehdi Azabou, Max Dabagia, Jingyun Xiao, Eva L. Dyer
By enabling flexible pre-training that can be transferred to neural recordings of different size and order, our work provides a first step towards creating a foundation model for neural decoding.
no code implementations • 17 May 2022 • Max Dabagia, Konrad P Kording, Eva L Dyer
One major challenge that we face in modern neuroscience is that of correspondence, e.g., we do not record the exact same neurons at the exact same times.
1 code implementation • NeurIPS 2021 • Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith B. Hengen, Michal Valko, Eva L. Dyer
Our approach combines a generative modeling framework with an instance-specific alignment loss that maximizes the representational similarity between transformed views of the input (brain state).
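A minimal sketch of such an objective, not the paper's implementation: a reconstruction (generative) term is combined with an instance-specific alignment term that pulls together the latent representations of two transformed views of the same brain state. The `encoder`, `decoder`, and `alpha` names are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def alignment_plus_reconstruction_loss(encoder, decoder, view_a, view_b, alpha=1.0):
    """view_a, view_b: two augmented views of the same batch of brain states.
    encoder/decoder: hypothetical modules; alpha weights the alignment term."""
    z_a, z_b = encoder(view_a), encoder(view_b)
    # Generative term: reconstruct each view from its latent representation.
    recon = F.mse_loss(decoder(z_a), view_a) + F.mse_loss(decoder(z_b), view_b)
    # Instance-specific alignment term: maximize cosine similarity between the
    # two views of the same instance (i.e., minimize 1 - similarity).
    align = (1 - F.cosine_similarity(z_a, z_b, dim=-1)).mean()
    return recon + alpha * align
```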
1 code implementation • 7 Oct 2021 • Max Dabagia, Christos H. Papadimitriou, Santosh S. Vempala
Here we present such a mechanism, and prove rigorously that, for simple classification problems defined on distributions of labeled assemblies, a new assembly representing each class can be reliably formed in response to a few stimuli from the class; this assembly is henceforth reliably recalled in response to new stimuli from the same class.
1 code implementation • 19 Feb 2021 • Mehdi Azabou, Mohammad Gheshlaghi Azar, Ran Liu, Chi-Heng Lin, Erik C. Johnson, Kiran Bhaskaran-Nair, Max Dabagia, Bernardo Avila-Pires, Lindsey Kitchell, Keith B. Hengen, William Gray-Roncal, Michal Valko, Eva L. Dyer
State-of-the-art methods for self-supervised learning (SSL) build representations by maximizing the similarity between different transformed "views" of a sample.
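As a toy illustration of this general SSL principle (not any particular method), the loss below pushes the representations of two randomly transformed "views" of the same sample to be similar; `encoder` and `augment` are placeholder callables.

```python
import torch
import torch.nn.functional as F

def view_similarity_loss(encoder, augment, batch):
    view_1, view_2 = augment(batch), augment(batch)  # two random "views" of each sample
    z_1, z_2 = encoder(view_1), encoder(view_2)
    # Maximizing cosine similarity between paired views = minimizing its negative.
    return -F.cosine_similarity(z_1, z_2, dim=-1).mean()
```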
no code implementations • 1 Jan 2021 • Rares C Cristian, Max Dabagia, Christos Papadimitriou, Santosh Vempala
Here we hypothesize that (a) brains employ synaptic plasticity rules that serve as proxies for GD; (b) these rules themselves can be learned by GD on the rule parameters; and (c) this process may be a missing ingredient for the development of ANNs that generalize well and are robust to adversarial perturbations.
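A minimal sketch of hypotheses (a) and (b), under assumed toy dynamics rather than the paper's actual setup: a parameterized Hebbian-style plasticity rule updates synapses locally, while gradient descent in an outer loop tunes the rule's coefficients so that the plastic updates reduce a task loss.

```python
import torch

def plasticity_update(W, pre, post, theta):
    # Generic local rule: dW = theta[0] * post^T pre + theta[1] * W (assumed form)
    return W + theta[0] * post.t() @ pre + theta[1] * W

# Outer loop: gradient descent on the rule parameters theta.
theta = torch.tensor([0.01, 0.0], requires_grad=True)
opt = torch.optim.SGD([theta], lr=1e-2)
for _ in range(100):
    W = torch.randn(5, 10) * 0.1                 # fresh synapses each episode
    x = torch.randn(32, 10)                      # toy inputs
    y = torch.randn(32, 5)                       # toy targets
    for _ in range(3):                           # inner loop: apply the local rule
        W = plasticity_update(W, x, x @ W.t(), theta)
    loss = ((x @ W.t() - y) ** 2).mean()         # task loss after plastic updates
    opt.zero_grad(); loss.backward(); opt.step()
```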
2 code implementations • NeurIPS 2019 • John Lee, Max Dabagia, Eva L. Dyer, Christopher J. Rozell
Our results demonstrate that when clustered structure exists in datasets, and is consistent across trials or time points, a hierarchical alignment strategy that leverages such structure can provide significant improvements in cross-domain alignment.
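The sketch below illustrates one way such a hierarchical strategy could look, not the authors' formulation: each domain is clustered, clusters are matched across domains by centroid distance, and points are then aligned only within matched clusters (nearest-neighbor matching stands in for a finer-grained alignment).

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def hierarchical_align(X, Y, n_clusters=5, seed=0):
    km_x = KMeans(n_clusters=n_clusters, random_state=seed).fit(X)
    km_y = KMeans(n_clusters=n_clusters, random_state=seed).fit(Y)
    # Top level: match clusters across domains via their centroids.
    cost = cdist(km_x.cluster_centers_, km_y.cluster_centers_)
    rows, cols = linear_sum_assignment(cost)
    # Bottom level: within each matched cluster pair, map each X point to its
    # nearest Y point (a simple stand-in for a finer-grained alignment).
    mapping = {}
    for cx, cy in zip(rows, cols):
        idx_x = np.where(km_x.labels_ == cx)[0]
        idx_y = np.where(km_y.labels_ == cy)[0]
        d = cdist(X[idx_x], Y[idx_y])
        mapping.update({int(i): int(idx_y[j]) for i, j in zip(idx_x, d.argmin(axis=1))})
    return mapping  # index in X -> matched index in Y
```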