Search Results for author: Mehdi Azabou

Found 10 papers, 7 papers with code

Learning signatures of decision making from many individuals playing the same game

no code implementations • 21 Feb 2023 • Michael J Mendelson, Mehdi Azabou, Suma Jacob, Nicola Grissom, David Darrow, Becket Ebitz, Alexander Herman, Eva L. Dyer

In addition to predicting future choices, we show that our model can learn rich representations of human behavior over multiple timescales and provide signatures of differences in individuals.

Decision Making

MTNeuro: A Benchmark for Evaluating Representations of Brain Structure Across Multiple Levels of Abstraction

1 code implementation • 1 Jan 2023 • Jorge Quesada, Lakshmi Sathidevi, Ran Liu, Nauman Ahad, Joy M. Jackson, Mehdi Azabou, Jingyun Xiao, Christopher Liding, Matthew Jin, Carolina Urzay, William Gray-Roncal, Erik C. Johnson, Eva L. Dyer

To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to readout information about brain structure and architecture from the same image.

Attribute · Semantic Segmentation

Learning Behavior Representations Through Multi-Timescale Bootstrapping

no code implementations • 14 Jun 2022 • Mehdi Azabou, Michael Mendelson, Maks Sorokin, Shantanu Thakoor, Nauman Ahad, Carolina Urzay, Eva L. Dyer

Natural behavior consists of dynamics that are unpredictable, can switch suddenly, and unfold over many different timescales.

Disentanglement

Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers

1 code implementation • 10 Jun 2022 • Ran Liu, Mehdi Azabou, Max Dabagia, Jingyun Xiao, Eva L. Dyer

By enabling flexible pre-training that can be transferred to neural recordings of different size and order, our work provides a first step towards creating a foundation model for neural decoding.

Time Series · Time Series Analysis

Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity

1 code implementation • NeurIPS 2021 • Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith B. Hengen, Michal Valko, Eva L. Dyer

Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state).
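The instance-specific alignment idea described above can be sketched in a few lines: embed two transformed views of the same input and penalize low cosine similarity between them. The dropout-style augmentation and raw-feature "embeddings" below are illustrative stand-ins, not the paper's actual generative model or architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout_view(x, rate=0.2):
    # "drop"-style augmentation: randomly zero a fraction of features
    mask = rng.random(x.shape) > rate
    return x * mask

def alignment_loss(z_a, z_b):
    """Maximize representational similarity between two views of the
    same inputs: 1 - mean cosine similarity (0 when views align)."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    return 1.0 - np.mean(np.sum(z_a * z_b, axis=1))

x = rng.standard_normal((16, 8))          # stand-in for input (brain state)
loss = alignment_loss(dropout_view(x), dropout_view(x))
```

In the paper this alignment term is combined with a generative (reconstruction) objective; the sketch shows only the alignment side.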

Large-Scale Representation Learning on Graphs via Bootstrapping

3 code implementations ICLR 2022 Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L. Dyer, Rémi Munos, Petar Veličković, Michal Valko

To address these challenges, we introduce Bootstrapped Graph Latents (BGRL) - a graph representation learning method that learns by predicting alternative augmentations of the input.

Contrastive Learning · Graph Representation Learning +1
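The bootstrapping mechanism behind BGRL can be illustrated compactly: an online encoder, through a small predictor, tries to match a target encoder's embedding of a different augmentation of the same input, with no negative samples; the target's weights trail the online weights via an exponential moving average. The linear encoders, noise augmentations, and EMA rate below are hypothetical stand-ins for the paper's graph networks and augmentations.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, w):
    # linear-plus-tanh encoder standing in for a GNN
    return np.tanh(x @ w)

def bgrl_loss(x, w_online, w_target, p):
    """Bootstrapped objective: predict the target encoder's embedding
    of an alternative augmentation (no contrastive negatives)."""
    view1 = x + 0.1 * rng.standard_normal(x.shape)  # augmentation 1
    view2 = x + 0.1 * rng.standard_normal(x.shape)  # augmentation 2
    q = encoder(view1, w_online) @ p                # online + predictor
    z = encoder(view2, w_target)                    # target: no gradients
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    return 2.0 - 2.0 * np.mean(np.sum(q * z, axis=1))

def ema_update(w_target, w_online, tau=0.99):
    # target parameters slowly track the online parameters
    return tau * w_target + (1.0 - tau) * w_online

x = rng.standard_normal((8, 4))
w_online = rng.standard_normal((4, 3))
w_target = w_online.copy()
p = np.eye(3)                                       # toy predictor
loss = bgrl_loss(x, w_online, w_target, p)
w_target = ema_update(w_target, w_online)
```

Because the target branch receives no gradient and the predictor breaks the symmetry, the scheme avoids the need for negative pairs that contrastive methods rely on.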

Making transport more robust and interpretable by moving data through a small number of anchor points

1 code implementation21 Dec 2020 Chi-Heng Lin, Mehdi Azabou, Eva L. Dyer

Optimal transport (OT) is a widely used technique for distribution alignment, with applications throughout the machine learning, graphics, and vision communities.
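To make the anchor idea concrete: instead of solving transport between all source and target points, one can summarize each cloud by a handful of anchors and solve a much smaller OT problem between them. The sketch below uses standard entropy-regularized Sinkhorn iterations; the randomly subsampled anchors are illustrative stand-ins for the learned anchor points of the method.

```python
import numpy as np

rng = np.random.default_rng(2)

def sinkhorn(a, b, C, eps=0.05, iters=300):
    """Entropy-regularized OT between histograms a, b with cost C
    (classic Sinkhorn scaling iterations)."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan

X = rng.standard_normal((100, 2))        # source cloud
Y = rng.standard_normal((120, 2)) + 1.0  # shifted target cloud
k = 5                                    # number of anchors (illustrative)
anchors_x = X[rng.choice(len(X), k, replace=False)]
anchors_y = Y[rng.choice(len(Y), k, replace=False)]

# small k-by-k cost matrix between anchors, normalized for stability
C = np.linalg.norm(anchors_x[:, None] - anchors_y[None, :], axis=2) ** 2
C /= C.max()
a = np.full(k, 1.0 / k)
b = np.full(k, 1.0 / k)
P = sinkhorn(a, b, C)
```

Routing mass through k anchors shrinks the coupling from 100 × 120 down to k × k, which is the source of the robustness and interpretability gains the abstract alludes to.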
