no code implementations • 10 Oct 2024 • Ran Liu, Wenrui Ma, Ellen Zippi, Hadi Pouransari, Jingyun Xiao, Chris Sandino, Behrooz Mahasseni, Juri Minxha, Erdrin Azemi, Eva L. Dyer, Ali Moin
In this work, we propose a novel objective for transformers that learn time series by re-interpreting them as temporal functions.
1 code implementation • 28 Aug 2023 • Ran Liu, Sahil Khose, Jingyun Xiao, Lakshmi Sathidevi, Keerthan Ramnath, Zsolt Kira, Eva L. Dyer
To address this challenge, we propose a novel approach for distribution-aware latent augmentation that leverages the relationships across samples to guide the augmentation procedure.
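The one-line summary leaves the augmentation procedure abstract; as a rough, hypothetical illustration of neighbor-guided mixing in latent space (not the authors' exact method; `latent_mix` and all parameters here are invented for the sketch):

```python
# Hypothetical sketch of distribution-aware latent augmentation: mix each
# latent code with one of its nearest neighbors, so augmented samples stay
# near the local data manifold. Not the paper's exact procedure.
import numpy as np

def latent_mix(Z, k=5, alpha=0.2, seed=None):
    """Z: (n, d) array of latent codes; returns augmented codes."""
    rng = np.random.default_rng(seed)
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]                 # k nearest neighbors
    picked = nbrs[np.arange(len(Z)), rng.integers(0, k, len(Z))]
    lam = rng.beta(alpha, alpha, size=(len(Z), 1))       # mixup-style weights
    return lam * Z + (1 - lam) * Z[picked]

Z_aug = latent_mix(np.random.randn(128, 32), seed=0)
```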
1 code implementation • 17 Aug 2023 • Mehdi Azabou, Venkataramana Ganesh, Shantanu Thakoor, Chi-Heng Lin, Lakshmi Sathidevi, Ran Liu, Michal Valko, Petar Veličković, Eva L. Dyer
Message passing neural networks have shown considerable success on graph-structured data; a minimal message-passing layer is sketched after this entry.
Ranked #1 on Node Classification on AMZ Comp
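For background on the message passing mentioned above, a minimal generic layer with mean aggregation (a sketch of the standard MPNN pattern, not this paper's architecture):

```python
# Minimal message-passing layer with mean aggregation over neighbors
# (generic MPNN pattern, not the method proposed in the paper).
import torch
import torch.nn as nn

class MPLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, x, edge_index):
        # x: (n, dim) node features; edge_index: (2, e) src/dst index pairs
        src, dst = edge_index
        messages = self.msg(x[src])                      # one message per edge
        agg = torch.zeros_like(x).index_add_(0, dst, messages)
        deg = x.new_zeros(x.size(0), 1).index_add_(
            0, dst, x.new_ones(dst.size(0), 1))
        agg = agg / deg.clamp(min=1)                     # mean over neighbors
        return torch.relu(self.upd(torch.cat([x, agg], dim=-1)))
```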
no code implementations • 15 Mar 2023 • Mehdi Azabou, Michael Mendelson, Nauman Ahad, Maks Sorokin, Shantanu Thakoor, Carolina Urzay, Eva L. Dyer
Natural behavior consists of dynamics that are complex and unpredictable, especially when trying to predict many steps into the future.
no code implementations • 21 Feb 2023 • Michael J Mendelson, Mehdi Azabou, Suma Jacob, Nicola Grissom, David Darrow, Becket Ebitz, Alexander Herman, Eva L. Dyer
In addition to predicting future choices, we show that our model can learn rich representations of human behavior over multiple timescales and provide signatures of individual differences.
1 code implementation • 1 Jan 2023 • Jorge Quesada, Lakshmi Sathidevi, Ran Liu, Nauman Ahad, Joy M. Jackson, Mehdi Azabou, Jingyun Xiao, Christopher Liding, Matthew Jin, Carolina Urzay, William Gray-Roncal, Erik C. Johnson, Eva L. Dyer
To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image.
1 code implementation • 10 Oct 2022 • Chi-Heng Lin, Chiraag Kaushik, Eva L. Dyer, Vidya Muthukumar
Data augmentation (DA) is a powerful workhorse for bolstering performance in modern machine learning.
no code implementations • 14 Jun 2022 • Mehdi Azabou, Michael Mendelson, Maks Sorokin, Shantanu Thakoor, Nauman Ahad, Carolina Urzay, Eva L. Dyer
Natural behavior consists of dynamics that are unpredictable, can switch suddenly, and unfold over many different timescales.
1 code implementation • 10 Jun 2022 • Ran Liu, Mehdi Azabou, Max Dabagia, Jingyun Xiao, Eva L. Dyer
By enabling flexible pre-training that can be transferred to neural recordings of different size and order, our work provides a first step towards creating a foundation model for neural decoding.
no code implementations • 8 Feb 2022 • Nauman Ahad, Eva L. Dyer, Keith B. Hengen, Yao Xie, Mark A. Davenport
We present a novel change point detection framework that uses true change point instances as supervision for learning a ground metric; Sinkhorn divergences under this learned metric can then be used in two-sample tests on sliding windows to detect change points in an online manner.
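As a toy illustration of the sliding-window test (a plain Euclidean ground metric stands in for the learned one, and the fixed threshold rule is invented for the sketch):

```python
# Toy sliding-window change point scan with an entropic OT cost.
# The Euclidean ground metric and fixed threshold are stand-ins for the
# learned metric and calibrated test described in the paper.
import numpy as np

def sinkhorn_cost(X, Y, reg=0.1, n_iter=100):
    """Entropic OT cost between point clouds X (n, d) and Y (m, d)."""
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / reg)
    a = np.full(len(X), 1 / len(X))
    b = np.full(len(Y), 1 / len(Y))
    u = np.ones_like(a)
    for _ in range(n_iter):                 # Sinkhorn iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]         # transport plan
    return (P * C).sum()

def scan(series, w=50, thresh=1.0):
    """series: (T, d) array; flags t where consecutive windows differ."""
    scores = [sinkhorn_cost(series[t - w:t], series[t:t + w])
              for t in range(w, len(series) - w)]
    return [w + i for i, s in enumerate(scores) if s > thresh]
```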
1 code implementation • NeurIPS 2021 • Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith B. Hengen, Michal Valko, Eva L. Dyer
Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state).
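A minimal sketch of that combination, assuming generic `encoder`/`decoder` modules (the weighting is illustrative, not the paper's exact objective):

```python
# Sketch: reconstruction loss plus an alignment term that pulls together
# the latents of two transformed views of the same input. The modules and
# weighting are illustrative, not the paper's exact loss.
import torch.nn.functional as F

def generative_alignment_loss(encoder, decoder, view1, view2, align_weight=1.0):
    z1, z2 = encoder(view1), encoder(view2)
    recon = F.mse_loss(decoder(z1), view1) + F.mse_loss(decoder(z2), view2)
    align = 1 - F.cosine_similarity(z1, z2, dim=-1).mean()
    return recon + align_weight * align
```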
2 code implementations • 9 Sep 2021 • Felix Pei, Joel Ye, David Zoltowski, Anqi Wu, Raeed H. Chowdhury, Hansem Sohn, Joseph E. O'Doherty, Krishna V. Shenoy, Matthew T. Kaufman, Mark Churchland, Mehrdad Jazayeri, Lee E. Miller, Jonathan Pillow, Il Memming Park, Eva L. Dyer, Chethan Pandarinath
We curate four datasets of neural spiking activity from cognitive, sensory, and motor areas to promote models that apply to the wide variety of activity seen across these areas.
1 code implementation • 19 Feb 2021 • Mehdi Azabou, Mohammad Gheshlaghi Azar, Ran Liu, Chi-Heng Lin, Erik C. Johnson, Kiran Bhaskaran-Nair, Max Dabagia, Bernardo Avila-Pires, Lindsey Kitchell, Keith B. Hengen, William Gray-Roncal, Michal Valko, Eva L. Dyer
State-of-the-art methods for self-supervised learning (SSL) build representations by maximizing the similarity between different transformed "views" of a sample.
4 code implementations • ICLR 2022 • Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L. Dyer, Rémi Munos, Petar Veličković, Michal Valko
To address these challenges, we introduce Bootstrapped Graph Latents (BGRL), a graph representation learning method that learns by predicting alternative augmentations of the input.
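A rough sketch of a bootstrapped objective in this spirit, assuming generic `online`, `target`, and `predictor` modules (simplified; see the paper for the actual BGRL formulation):

```python
# Bootstrapped objective sketch: the online encoder predicts the target
# encoder's embedding of a different augmentation; the target is an
# exponential moving average (EMA) of the online network. Simplified.
import torch
import torch.nn.functional as F

def bgrl_style_step(online, target, predictor, graph_aug1, graph_aug2, tau=0.99):
    h_online = predictor(online(graph_aug1))
    with torch.no_grad():
        h_target = target(graph_aug2)       # no gradients through the target
    loss = 2 - 2 * F.cosine_similarity(h_online, h_target, dim=-1).mean()
    # EMA update of the target encoder (target starts as a copy of online)
    for p_t, p_o in zip(target.parameters(), online.parameters()):
        p_t.data.mul_(tau).add_(p_o.data, alpha=1 - tau)
    return loss
```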
1 code implementation • 21 Dec 2020 • Chi-Heng Lin, Mehdi Azabou, Eva L. Dyer
Optimal transport (OT) is a widely used technique for distribution alignment, with applications throughout the machine learning, graphics, and vision communities.
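For readers new to OT-based alignment, a generic example with the POT library: solve for a transport plan, then map source points onto the target by barycentric projection (standard OT machinery, not this paper's specific contribution):

```python
# Generic OT alignment via barycentric projection (standard machinery,
# not the specific algorithm proposed in the paper).
import numpy as np
import ot  # pip install POT

def ot_align(X, Y):
    a = np.full(len(X), 1 / len(X))        # uniform source weights
    b = np.full(len(Y), 1 / len(Y))        # uniform target weights
    M = ot.dist(X, Y)                      # squared Euclidean cost matrix
    P = ot.emd(a, b, M)                    # optimal transport plan
    return (P @ Y) / P.sum(axis=1, keepdims=True)  # barycentric map of X

X_mapped = ot_align(np.random.randn(100, 3), np.random.randn(120, 3))
```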
no code implementations • 4 Jun 2020 • Chi-Heng Lin, Joseph D. Miano, Eva L. Dyer
In this work, we propose a new algorithm for switch cost-aware optimization called Lazy Modular Bayesian Optimization (LaMBO).
2 code implementations • NeurIPS 2019 • John Lee, Max Dabagia, Eva L. Dyer, Christopher J. Rozell
Our results demonstrate that when clustered structure exists in datasets, and is consistent across trials or time points, a hierarchical alignment strategy that leverages such structure can provide significant improvements in cross-domain alignment.
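A toy sketch of the cluster-then-match idea, with k-means and Hungarian matching as stand-ins for the paper's hierarchical formulation (everything here is illustrative):

```python
# Toy hierarchical matching: cluster each dataset, then match clusters by
# centroid distance. A stand-in for the paper's hierarchical alignment.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

def match_clusters(X, Y, k=5):
    cx = KMeans(n_clusters=k, n_init=10).fit(X).cluster_centers_
    cy = KMeans(n_clusters=k, n_init=10).fit(Y).cluster_centers_
    cost = ((cx[:, None] - cy[None]) ** 2).sum(-1)   # centroid distances
    row, col = linear_sum_assignment(cost)           # Hungarian matching
    return dict(zip(row, col))   # cluster i in X -> matched cluster in Y
```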
no code implementations • 13 Apr 2016 • Eva L. Dyer, William Gray Roncal, Hugo L. Fernandes, Doga Gürsoy, Vincent De Andrade, Rafael Vescovi, Kamel Fezzaa, Xianghui Xiao, Joshua T. Vogelstein, Chris Jacobsen, Konrad P. Körding, Narayanan Kasthuri
Methods for resolving the 3D microstructure of the brain typically start by thinly slicing and staining the brain, and then imaging each individual section with visible light photons or electrons.
no code implementations • 19 May 2015 • Raajen Patel, Thomas A. Goldstein, Eva L. Dyer, Azalia Mirhoseini, Richard G. Baraniuk
Kernel matrices (e.g. Gram or similarity matrices) are essential for many state-of-the-art approaches to classification, clustering, and dimensionality reduction.
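For context, the dense Gram matrix itself: its quadratic cost in the number of samples is what motivates approximation schemes.

```python
# Dense RBF (Gaussian) kernel Gram matrix: O(n^2) storage and compute,
# which is the bottleneck that approximation methods target.
import numpy as np

def rbf_gram(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    return np.exp(-gamma * sq)                           # (n, n) Gram matrix

K = rbf_gram(np.random.randn(200, 10))
```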
no code implementations • 4 May 2015 • Eva L. Dyer, Tom A. Goldstein, Raajen Patel, Konrad P. Kording, Richard G. Baraniuk
Classical approaches discover such structure by learning a basis that can efficiently express the collection.
1 code implementation • 27 Mar 2015 • Azalia Mirhoseini, Eva L. Dyer, Ebrahim M. Songhori, Richard G. Baraniuk, Farinaz Koushanfar
This paper introduces RankMap, a platform-aware end-to-end framework for efficient execution of a broad class of iterative learning algorithms for massive and dense datasets.
no code implementations • 19 Mar 2013 • Eva L. Dyer, Aswin C. Sankaranarayanan, Richard G. Baraniuk
To learn a union of subspaces from a collection of data, sets of signals in the collection that belong to the same subspace must be identified in order to obtain accurate estimates of the subspace structures present in the data.
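A compact sketch of the self-expressive recipe this alludes to: represent each signal sparsely in terms of the others (here with orthogonal matching pursuit, in the greedy spirit of this line of work), then cluster the induced affinity graph (parameters illustrative):

```python
# Greedy self-expressive subspace clustering sketch: each signal is
# written as a sparse combination of the others via OMP, and the
# resulting coefficients define an affinity graph for spectral clustering.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.cluster import SpectralClustering

def subspace_cluster(X, n_subspaces, sparsity=5):
    n = len(X)
    C = np.zeros((n, n))
    for i in range(n):
        others = np.delete(X, i, axis=0)            # all signals except X[i]
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity)
        omp.fit(others.T, X[i])                     # X[i] ~ others.T @ coef
        C[i, np.arange(n) != i] = omp.coef_
    A = np.abs(C) + np.abs(C).T                     # symmetric affinity
    return SpectralClustering(n_clusters=n_subspaces,
                              affinity='precomputed').fit_predict(A)
```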