Search Results for author: Mehdi Azabou

Found 14 papers, 7 papers with code

Generalizable, real-time neural decoding with hybrid state-space models

no code implementations • 5 Jun 2025 • Avery Hee-Woon Ryoo, Nanda H. Krishna, Ximeng Mao, Mehdi Azabou, Eva L. Dyer, Matthew G. Perich, Guillaume Lajoie

We evaluate POSSM's decoding performance and inference speed on intracortical decoding of monkey motor tasks, and show that it extends to clinical applications, namely handwriting and speech decoding in human subjects.

State Space Models
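
The snippet above focuses on evaluation, but the core appeal of state-space decoders is a constant-time recurrent update per incoming frame. Below is a minimal, purely illustrative NumPy sketch of a linear state-space decoding step; the matrices, dimensions, and readout are placeholders, not POSSM's hybrid architecture.

```python
import numpy as np

# Toy linear state-space decoder: one constant-time recurrent update per
# binned spike frame, which is what makes SSM-style models attractive for
# real-time decoding. Dimensions and matrices are illustrative, not POSSM's.
rng = np.random.default_rng(0)
n_neurons, state_dim, n_outputs = 96, 32, 2   # e.g., 2-D cursor velocity

A = 0.95 * np.eye(state_dim)                             # state transition
B = 0.1 * rng.standard_normal((state_dim, n_neurons))    # spike mixing
C = rng.standard_normal((n_outputs, state_dim))          # linear readout

x = np.zeros(state_dim)
for t in range(100):                          # stream of binned spike counts
    spikes = rng.poisson(0.5, size=n_neurons)
    x = A @ x + B @ spikes                    # O(1) in sequence length
    velocity = C @ x                          # decoded behavior at time t
```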

Neural Encoding and Decoding at Scale

no code implementations • 11 Apr 2025 • Yizi Zhang, Yanchen Wang, Mehdi Azabou, Alexandre Andre, Zixuan Wang, Hanrui Lyu, The International Brain Laboratory, Eva Dyer, Liam Paninski, Cole Hurwitz

Altogether, our approach is a step towards a foundation model of the brain that enables seamless translation between neural activity and behavior.

Towards a "universal translator" for neural dynamics at single-cell, single-spike resolution

no code implementations • 19 Jul 2024 • Yizi Zhang, Yanchen Wang, Donato Jimenez-Beneto, Zixuan Wang, Mehdi Azabou, Blake Richards, Olivier Winter, International Brain Laboratory, Eva Dyer, Liam Paninski, Cole Hurwitz

Neuroscience research has made immense progress over the last decade, but our understanding of the brain remains fragmented and piecemeal: the dream of probing an arbitrary brain region and automatically reading out the information encoded in its neural activity remains out of reach.

Activity Prediction • Multi-Task Learning • +1

GraphFM: A Scalable Framework for Multi-Graph Pretraining

no code implementations • 16 Jul 2024 • Divyansha Lachi, Mehdi Azabou, Vinam Arora, Eva Dyer

We demonstrate the efficacy of our approach by training a model on 152 different graph datasets comprising over 7.4 million nodes and 189 million edges, establishing the first set of scaling laws for multi-graph pretraining on datasets spanning many domains (e.g., molecules, citation and product graphs).

Node Classification
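
"Scaling laws" here means fitting held-out loss as a function of pretraining scale to a saturating power law. A minimal sketch of such a fit is below; the functional form is the standard one from the neural scaling-law literature, and the data points are synthetic placeholders, not GraphFM's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a saturating power law L(N) = a * N**(-alpha) + c to (scale, loss)
# pairs. The data points below are made up for illustration only.
def power_law(n, a, alpha, c):
    return a * n ** (-alpha) + c

n_nodes = np.array([1e5, 1e6, 1e7, 1e8])      # pretraining scale
loss = np.array([2.10, 1.55, 1.22, 1.05])     # held-out loss (synthetic)

(a, alpha, c), _ = curve_fit(power_law, n_nodes, loss,
                             p0=(10.0, 0.2, 0.5), maxfev=10000)
print(f"loss ~ {a:.2f} * N^(-{alpha:.3f}) + {c:.2f}")
```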

Learning signatures of decision making from many individuals playing the same game

no code implementations • 21 Feb 2023 • Michael J Mendelson, Mehdi Azabou, Suma Jacob, Nicola Grissom, David Darrow, Becket Ebitz, Alexander Herman, Eva L. Dyer

In addition to predicting future choices, we show that our model can learn rich representations of human behavior over multiple timescales and provide signatures of differences in individuals.

Decision Making

MTNeuro: A Benchmark for Evaluating Representations of Brain Structure Across Multiple Levels of Abstraction

1 code implementation • 1 Jan 2023 • Jorge Quesada, Lakshmi Sathidevi, Ran Liu, Nauman Ahad, Joy M. Jackson, Mehdi Azabou, Jingyun Xiao, Christopher Liding, Matthew Jin, Carolina Urzay, William Gray-Roncal, Erik C. Johnson, Eva L. Dyer

To bridge this gap, we introduce a new dataset, annotations, and multiple downstream tasks that provide diverse ways to read out information about brain structure and architecture from the same image.

Attribute • Semantic Segmentation
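
Reading out several levels of structure from the same image is naturally expressed as a shared encoder with one head per task. The sketch below illustrates that pattern in PyTorch; the task names, output sizes, and encoder are placeholders, not MTNeuro's actual specification.

```python
import torch
import torch.nn as nn

# One shared image encoder with a separate readout head per downstream task,
# so several levels of structure are predicted from the same representation.
# Task names and output sizes are hypothetical, not MTNeuro's exact spec.
class MultiTaskReadout(nn.Module):
    def __init__(self, feat_dim=128, tasks=(("region", 4), ("cell_type", 3))):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, n_cls) for name, n_cls in tasks}
        )

    def forward(self, x):
        z = self.encoder(x)                    # shared representation
        return {name: head(z) for name, head in self.heads.items()}

outputs = MultiTaskReadout()(torch.randn(8, 1, 64, 64))
```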

Learning Behavior Representations Through Multi-Timescale Bootstrapping

no code implementations • 14 Jun 2022 • Mehdi Azabou, Michael Mendelson, Maks Sorokin, Shantanu Thakoor, Nauman Ahad, Carolina Urzay, Eva L. Dyer

Natural behavior consists of dynamics that are unpredictable, can switch suddenly, and unfold over many different timescales.

Disentanglement
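
One way to capture dynamics at multiple timescales is BYOL-style bootstrapping, where a short-window summary is trained to predict a longer-window target. The sketch below is a toy illustration of that idea only, not the paper's exact multi-timescale objective.

```python
import torch
import torch.nn.functional as F

# Bootstrapping across timescales: a fast (short-window) summary predicts a
# slow (long-window) target of the same behavioral sequence. All dimensions
# and the choice of windows are illustrative.
encoder = torch.nn.Linear(32, 64)
predictor = torch.nn.Linear(64, 64)

frames = torch.randn(16, 100, 32)             # (batch, time, features)
z = encoder(frames)                           # frame-level embeddings
short = z[:, :10].mean(dim=1)                 # fast-timescale summary
long = z.mean(dim=1).detach()                 # slow-timescale target

loss = 2 - 2 * F.cosine_similarity(predictor(short), long, dim=-1).mean()
loss.backward()
```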

Seeing the forest and the tree: Building representations of both individual and collective dynamics with transformers

1 code implementation • 10 Jun 2022 • Ran Liu, Mehdi Azabou, Max Dabagia, Jingyun Xiao, Eva L. Dyer

By enabling flexible pre-training that can be transferred to neural recordings of different size and order, our work provides a first step towards creating a foundation model for neural decoding.

Time Series • Time Series Analysis
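
Transfer across recordings "of different size and order" is what you get when each neuron is treated as a token in a set-like transformer, since attention is permutation-equivariant and handles variable token counts. A minimal sketch of that tokenization follows; dimensions are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

# Treating each neuron as a token lets a transformer ingest populations of
# any size and ordering, making pre-training transferable across recordings.
to_token = nn.Linear(100, 64)                 # per-neuron activity -> token
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

spikes = torch.randn(8, 57, 100)              # (batch, neurons, time bins)
tokens = encoder(to_token(spikes))            # attention across neurons
population = tokens.mean(dim=1)               # order-invariant summary
```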

Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity

1 code implementation • NeurIPS 2021 • Ran Liu, Mehdi Azabou, Max Dabagia, Chi-Heng Lin, Mohammad Gheshlaghi Azar, Keith B. Hengen, Michal Valko, Eva L. Dyer

Our approach combines a generative modeling framework with an instance-specific alignment loss that tries to maximize the representational similarity between transformed views of the input (brain state).
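
The alignment term described above can be sketched generically: encode two transformed views of the same brain state and push their representations together. This is the alignment loss in isolation; the paper pairs it with a full generative model, which is not shown here.

```python
import torch
import torch.nn.functional as F

# Instance-specific alignment: maximize representational similarity between
# two augmented views of the same trial. Encoder and augmentations are
# generic stand-ins, not the paper's architecture.
encoder = torch.nn.Linear(96, 32)

x = torch.randn(64, 96)                       # binned population activity
view_a = x + 0.1 * torch.randn_like(x)        # e.g., jitter augmentation
view_b = F.dropout(x, p=0.2)                  # e.g., neuron dropout

za, zb = encoder(view_a), encoder(view_b)
loss = -F.cosine_similarity(za, zb, dim=-1).mean()
loss.backward()
```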

Large-Scale Representation Learning on Graphs via Bootstrapping

4 code implementations • ICLR 2022 • Shantanu Thakoor, Corentin Tallec, Mohammad Gheshlaghi Azar, Mehdi Azabou, Eva L. Dyer, Rémi Munos, Petar Veličković, Michal Valko

To address these challenges, we introduce Bootstrapped Graph Latents (BGRL), a graph representation learning method that learns by predicting alternative augmentations of the input.

Contrastive Learning • Graph Representation Learning • +1
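
BGRL's core loop is small enough to sketch: an online encoder plus predictor is trained to match a target encoder's embedding of an alternative augmentation, and the target is an exponential moving average of the online weights, so no negative pairs are needed. In the sketch below a linear layer stands in for a real GNN encoder, and the symmetrized loss is omitted.

```python
import copy
import torch
import torch.nn.functional as F

# BGRL in miniature: predict the target network's embedding of a different
# augmentation of the same graph; update the target by EMA. A linear layer
# replaces the GNN encoder for brevity.
online = torch.nn.Linear(32, 16)
predictor = torch.nn.Linear(16, 16)
target = copy.deepcopy(online)                # never receives gradients

x = torch.randn(200, 32)                      # node features
view1 = F.dropout(x, p=0.2)                   # augmentation 1
view2 = F.dropout(x, p=0.2)                   # augmentation 2

p = predictor(online(view1))
with torch.no_grad():
    z = target(view2)
loss = 2 - 2 * F.cosine_similarity(p, z, dim=-1).mean()
loss.backward()

tau = 0.99                                    # EMA update of the target
for tp, op in zip(target.parameters(), online.parameters()):
    tp.data.mul_(tau).add_((1 - tau) * op.data)
```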

Making transport more robust and interpretable by moving data through a small number of anchor points

1 code implementation • 21 Dec 2020 • Chi-Heng Lin, Mehdi Azabou, Eva L. Dyer

Optimal transport (OT) is a widely used technique for distribution alignment, with applications throughout the machine learning, graphics, and vision communities.
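
For readers new to OT, the standard entropic formulation is solved by a few lines of Sinkhorn iteration, sketched below. This is background only: the paper's contribution is to additionally route the transport plan through a small set of anchor points for robustness and interpretability, which this sketch does not implement.

```python
import numpy as np

# Minimal Sinkhorn iteration for entropic optimal transport between two
# point clouds. Standard background, not the paper's anchor-point method.
def sinkhorn(a, b, cost, eps=0.1, n_iters=200):
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]        # transport plan

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, (50, 2))             # source samples
y = rng.normal(2.0, 1.0, (60, 2))             # target samples
cost = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
plan = sinkhorn(np.full(50, 1 / 50), np.full(60, 1 / 60), cost / cost.max())
```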
