Search Results for author: Eva L. Dyer

Found 21 papers, 11 papers with code

LatentDR: Improving Model Generalization Through Sample-Aware Latent Degradation and Restoration

1 code implementation • 28 Aug 2023 • Ran Liu, Sahil Khose, Jingyun Xiao, Lakshmi Sathidevi, Keerthan Ramnath, Zsolt Kira, Eva L. Dyer

To address this challenge, we propose a novel approach for distribution-aware latent augmentation that leverages the relationships across samples to guide the augmentation procedure.

Domain Generalization

Learning signatures of decision making from many individuals playing the same game

no code implementations • 21 Feb 2023 • Michael J Mendelson, Mehdi Azabou, Suma Jacob, Nicola Grissom, David Darrow, Becket Ebitz, Alexander Herman, Eva L. Dyer

In addition to predicting future choices, we show that our model can learn rich representations of human behavior over multiple timescales and provide signatures of differences in individuals.

Decision Making

MTNeuro: A Benchmark for Evaluating Representations of Brain Structure Across Multiple Levels of Abstraction

1 code implementation • 1 Jan 2023 • Jorge Quesada, Lakshmi Sathidevi, Ran Liu, Nauman Ahad, Joy M. Jackson, Mehdi Azabou, Jingyun Xiao, Christopher Liding, Matthew Jin, Carolina Urzay, William Gray-Roncal, Erik C. Johnson, Eva L. Dyer

To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image.

Attribute · Semantic Segmentation

Learning Behavior Representations Through Multi-Timescale Bootstrapping

no code implementations • 14 Jun 2022 • Mehdi Azabou, Michael Mendelson, Maks Sorokin, Shantanu Thakoor, Nauman Ahad, Carolina Urzay, Eva L. Dyer

Natural behavior consists of dynamics that are unpredictable, can switch suddenly, and unfold over many different timescales.

Disentanglement

Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers

1 code implementation • 10 Jun 2022 • Ran Liu, Mehdi Azabou, Max Dabagia, Jingyun Xiao, Eva L. Dyer

By enabling flexible pre-training that can be transferred to neural recordings of different size and order, our work provides a first step towards creating a foundation model for neural decoding.

Time Series · Time Series Analysis

Learning Sinkhorn divergences for supervised change point detection

no code implementations • 8 Feb 2022 • Nauman Ahad, Eva L. Dyer, Keith B. Hengen, Yao Xie, Mark A. Davenport

We present a novel change point detection framework that uses true change point instances as supervision for learning a ground metric, such that Sinkhorn divergences can then be used in two-sample tests on sliding windows to detect change points in an online manner.

Change Detection · Change Point Detection +1
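The sliding-window test the abstract describes can be sketched as follows. The paper's contribution is learning the ground metric from labeled change points; this toy version (function names are my own) instead fixes a squared-Euclidean ground cost and uses the plain entropic OT cost rather than the debiased Sinkhorn divergence:

```python
import numpy as np

def sinkhorn_cost(X, Y, reg=5.0, n_iter=100):
    """Entropic OT cost between two equally weighted point clouds."""
    C = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** 2
    K = np.exp(-C / reg)
    a = np.ones(len(X)) / len(X)
    b = np.ones(len(Y)) / len(Y)
    v = np.ones(len(Y))
    for _ in range(n_iter):            # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]    # transport plan
    return float((P * C).sum())

def sliding_change_scores(series, w=20):
    """Compare adjacent length-w windows; peaks suggest change points."""
    return [sinkhorn_cost(series[t - w:t], series[t:t + w])
            for t in range(w, len(series) - w + 1)]

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
scores = sliding_change_scores(x, w=20)   # should peak near the change at t=100
```

Peaks in `scores` mark candidate change points; in the paper the cost matrix `C` would come from a metric learned so that true changes yield large divergences.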

Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity

1 code implementation • NeurIPS 2021 • Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith B. Hengen, Michal Valko, Eva L. Dyer

Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state).

Large-Scale Representation Learning on Graphs via Bootstrapping

3 code implementations • ICLR 2022 • Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L. Dyer, Rémi Munos, Petar Veličković, Michal Valko

To address these challenges, we introduce Bootstrapped Graph Latents (BGRL) - a graph representation learning method that learns by predicting alternative augmentations of the input.

Contrastive Learning · Graph Representation Learning +1
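A minimal numpy sketch of the bootstrapping idea behind BGRL: an online encoder predicts a slowly moving target encoder's representation of a differently augmented view of the same graph, with no negative pairs. The toy encoder, the feature-masking augmentation, and the omission of BGRL's predictor head and gradient step are simplifications of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_rows(Z):
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

def encode(W, A, X):
    """One-layer linear message-passing encoder (a stand-in for BGRL's GCN)."""
    A_hat = A + np.eye(len(A))                    # add self-loops
    return (A_hat / A_hat.sum(1, keepdims=True)) @ X @ W

def augment(X, drop=0.2):
    """Random feature masking -- one simple graph augmentation."""
    return X * (rng.random(X.shape) > drop)

# toy graph: 4 nodes, 5 features
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 5))

W_online = rng.normal(size=(5, 3))
W_target = rng.normal(size=(5, 3))
tau = 0.99                                        # EMA decay for the target network

# one BGRL-style step: online encoder sees view 1, target encoder sees view 2
z_online = normalize_rows(encode(W_online, A, augment(X)))
z_target = normalize_rows(encode(W_target, A, augment(X)))
loss = 2.0 - 2.0 * (z_online * z_target).sum(1).mean()   # mean cosine distance

# the target network tracks the online network by exponential moving average;
# in a real implementation only W_online receives gradients
W_target = tau * W_target + (1 - tau) * W_online
```

Because the target is a slow-moving copy rather than a second trained network, the method avoids the large negative-sample batches that contrastive graph methods need.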

Making transport more robust and interpretable by moving data through a small number of anchor points

1 code implementation • 21 Dec 2020 • Chi-Heng Lin, Mehdi Azabou, Eva L. Dyer

Optimal transport (OT) is a widely used technique for distribution alignment, with applications throughout the machine learning, graphics, and vision communities.

Bayesian optimization for modular black-box systems with switching costs

no code implementations • 4 Jun 2020 • Chi-Heng Lin, Joseph D. Miano, Eva L. Dyer

In this work, we propose a new algorithm for switch cost-aware optimization called Lazy Modular Bayesian Optimization (LaMBO).

Bayesian Optimization · Image Segmentation +1

Hierarchical Optimal Transport for Multimodal Distribution Alignment

2 code implementations • NeurIPS 2019 • John Lee, Max Dabagia, Eva L. Dyer, Christopher J. Rozell

Our results demonstrate that when clustered structure exists in datasets, and is consistent across trials or time points, a hierarchical alignment strategy that leverages such structure can provide significant improvements in cross-domain alignment.
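The two-level strategy the abstract describes can be illustrated on toy data: match clusters at the top level (for equal-mass clusters, exact OT between centroids reduces to an assignment problem), then align points within matched clusters. The names and the mean-shift alignment step below are illustrative simplifications of mine, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)

def centroids(X, labels, k):
    return np.stack([X[labels == j].mean(axis=0) for j in range(k)])

# two datasets sharing the same 2-cluster structure; the target is a shifted,
# reordered copy of the source
base = np.concatenate([rng.normal(0, 0.5, (50, 2)),
                       rng.normal(0, 0.5, (50, 2)) + [10.0, 0.0]])
labels_X = np.repeat([0, 1], 50)
Y = base[::-1] + [2.0, 1.0]
labels_Y = labels_X[::-1]

C_X = centroids(base, labels_X, 2)
C_Y = centroids(Y, labels_Y, 2)

# top level: optimally match clusters by centroid cost
cost = np.linalg.norm(C_X[:, None] - C_Y[None, :], axis=-1)
rows, cols = linear_sum_assignment(cost)

# bottom level: align points within each matched cluster pair by mean shift
X_aligned = base.copy()
for i, j in zip(rows, cols):
    X_aligned[labels_X == i] += C_Y[j] - C_X[i]
```

Exploiting the shared cluster structure makes the alignment robust: a single global shift would fail if the clusters were reordered or moved by different amounts.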

Quantifying mesoscale neuroanatomy using X-ray microtomography

no code implementations • 13 Apr 2016 • Eva L. Dyer, William Gray Roncal, Hugo L. Fernandes, Doga Gürsoy, Vincent De Andrade, Rafael Vescovi, Kamel Fezzaa, Xianghui Xiao, Joshua T. Vogelstein, Chris Jacobsen, Konrad P. Körding, Narayanan Kasthuri

Methods for resolving the 3D microstructure of the brain typically start by thinly slicing and staining the brain, and then imaging each individual section with visible light photons or electrons.

oASIS: Adaptive Column Sampling for Kernel Matrix Approximation

no code implementations • 19 May 2015 • Raajen Patel, Thomas A. Goldstein, Eva L. Dyer, Azalia Mirhoseini, Richard G. Baraniuk

Kernel matrices (e.g. Gram or similarity matrices) are essential for many state-of-the-art approaches to classification, clustering, and dimensionality reduction.

Clustering · Dimensionality Reduction +1
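The idea behind oASIS-style adaptive column sampling can be sketched with a Nyström approximation that greedily adds the column whose diagonal entry is worst approximated. This simplified version (my own naming) recomputes the full approximation each round rather than using the paper's efficient incremental updates:

```python
import numpy as np

rng = np.random.default_rng(0)

def nystrom(G, cols):
    """Nystrom approximation of G from a subset of its columns."""
    C = G[:, cols]
    W = G[np.ix_(cols, cols)]
    return C @ np.linalg.pinv(W) @ C.T

def adaptive_column_sampling(G, k):
    """Greedily add the column whose diagonal entry is worst approximated."""
    cols = [int(np.argmax(np.diag(G)))]
    for _ in range(k - 1):
        err = np.abs(np.diag(G) - np.diag(nystrom(G, cols)))
        err[cols] = -np.inf              # never pick the same column twice
        cols.append(int(np.argmax(err)))
    return cols

# a rank-3 Gram matrix; 3 well-chosen columns should reconstruct it (almost) exactly
X = rng.normal(size=(40, 3))
G = X @ X.T
cols = adaptive_column_sampling(G, 3)
rel_err = np.linalg.norm(G - nystrom(G, cols)) / np.linalg.norm(G)
```

Only the sampled columns of `G` ever need to be formed, which is the point: for an n-point kernel matrix, k adaptively chosen columns cost O(nk) kernel evaluations instead of O(n²).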

RankMap: A Platform-Aware Framework for Distributed Learning from Dense Datasets

1 code implementation • 27 Mar 2015 • Azalia Mirhoseini, Eva L. Dyer, Ebrahim M. Songhori, Richard G. Baraniuk, Farinaz Koushanfar

This paper introduces RankMap, a platform-aware end-to-end framework for efficient execution of a broad class of iterative learning algorithms for massive and dense datasets.

Distributed Computing · Scheduling

Greedy Feature Selection for Subspace Clustering

no code implementations • 19 Mar 2013 • Eva L. Dyer, Aswin C. Sankaranarayanan, Richard G. Baraniuk

To learn a union of subspaces from a collection of data, sets of signals in the collection that belong to the same subspace must be identified in order to obtain accurate estimates of the subspace structures present in the data.

Clustering · Feature Selection
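The abstract's goal of identifying same-subspace signals can be illustrated with greedy sparse self-representation: represent each signal with a few other signals chosen by orthogonal matching pursuit, and note that the selected neighbors tend to lie in the same subspace. This is a sketch with my own function names, not the paper's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def omp_neighbors(D, y, k):
    """Greedy (OMP) selection of k columns of D to represent y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        if support:
            corr[support] = 0.0          # do not reselect columns
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support

def sample_subspace(n, ambient=30, dim=2):
    """n unit-norm points drawn from a random dim-dimensional subspace."""
    B = np.linalg.qr(rng.normal(size=(ambient, dim)))[0]
    P = B @ rng.normal(size=(dim, n))
    return P / np.linalg.norm(P, axis=0)

# 20 points from each of two random 2-dimensional subspaces of R^30
X = np.concatenate([sample_subspace(20), sample_subspace(20)], axis=1)

# represent point 0 using the remaining points; its greedy neighbors should
# come from the same subspace (columns 1..19)
D = X.copy()
D[:, 0] = 0.0                            # exclude the point itself
support = omp_neighbors(D, X[:, 0], k=2)
```

Collecting such neighbor sets for every signal yields an affinity graph whose connected components (or spectral clusters) recover the union-of-subspaces structure.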
